TE
科技回声
首页24小时热榜最新最佳问答展示工作
GitHubTwitter
首页

科技回声

基于 Next.js 构建的科技新闻平台,提供全球科技新闻和讨论内容。

GitHubTwitter

首页

首页最新最佳问答展示工作

资源链接

HackerNews API原版 HackerNewsNext.js

© 2025 科技回声. 版权所有。

Separating Data from Instructions in Prompting

6 点作者 zby9 个月前

2 条评论

Terr_9 个月前
&gt; Separating Data from Instructions in Prompting<p>Does it <i>actually</i> separate them though? I had thought the core LLM algorithm design pretty much made it impossible, as everything (system prompt, external inputs, its own recent responses) is ultimately one giant stream of tokens, with no difference in their pedigree and no formal demarcation&#x2F;escape mechanism.<p>It sounds like this is about differentiating two kinds of input that are both within the system prompt, but the issue extends to regular usage too: Everything is really just a statistical wording-fight between different pieces of injected data, favoring whomever authored the most weighty phrases&#x2F;hints&#x2F;cues to get what they want.
OutOfHere9 个月前
As a certified LLM therapist, all I do sometimes is separate complicated instructions in the following sections, but only if I have to:<p>## Workflow (remove this initial harmful and unnecessary line)<p>Your shared workflow description goes here.<p>## Task<p>Your task description goes here.<p>## Format<p>Your expected output format and any example go here.<p>## Data<p>Your input data goes here.