
科技回声

A tech news platform built with Next.js, offering global tech news and discussion.


© 2025 科技回声. All rights reserved.

Ask HN: Is "it is important" a side effect of Transformers' attention?

2 points, by slowmotarget, about 2 years ago
I constantly have to rein in GPT-3 and 4 because they too often use expressions like "it is important" or "it is essential".

I can't think of a reason for the models to have been trained on texts that contain many occurrences of these expressions.

This is why I am wondering if it is a side effect of the attention mechanism built into the transformer algorithm. As the prompt and the output are recursively processed to figure out what really matters, maybe these expressions get embedded as a latent representation of the weights of the different concepts at play in the conversation context.

What do you think?
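For readers unfamiliar with the mechanism the question is about: the "weights of the different concepts" the poster describes correspond to attention weights in scaled dot-product attention. A minimal, dependency-free sketch (all names, shapes, and the toy inputs below are illustrative assumptions, not anything from a GPT implementation):

```python
import math

def softmax(xs):
    """Normalize scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of token vectors.

    For each query vector, score every key, softmax the scores into
    weights ("what really matters"), and return the weighted mix of
    value vectors.
    """
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        mixed = [sum(w * v[j] for w, v in zip(weights, V))
                 for j in range(len(V[0]))]
        out.append(mixed)
    return out

# Toy example: the query aligns with the first key, so the output leans
# toward the first value vector.
out = attention(
    Q=[[1.0, 0.0]],
    K=[[1.0, 0.0], [0.0, 1.0]],
    V=[[10.0, 0.0], [0.0, 10.0]],
)
assert out[0][0] > out[0][1]
```

This is only the core mechanism; real transformers stack many such layers with learned projections, which is where a training-data bias toward phrases like "it is important" could plausibly surface.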

1 comment

turtleyacht, about 2 years ago
Maybe we can extrapolate from Gödel, where a complete ruleset cannot be consistent, and a consistent ruleset cannot be complete.

Somewhere in there is the human ingenuity to adapt certain pathological patterns [1] that defeat (current) AI.

But it's not always obvious, and the *composer sapiens* must defy their own understanding of the rules to create a purposeful deviation (sometimes with help from other AI) [2].

The idea of sharing GPT prompts promotes the implication that results are deterministic. That helps QA, but humans *love to play.*

[1] Adversarial Policies Defeat Superhuman Go AIs: https://arxiv.org/abs/2211.00241

[2] Kellin Pelrine