
Researchers describe how to tell if ChatGPT is confabulating

56 points by glymor, 11 months ago

9 comments

derefr, 11 months ago
> But perhaps the simplest explanation is that an LLM doesn't recognize what constitutes a correct answer but is compelled to provide one

Why *is* it compelled to provide one, anyway?

Which is to say, why is the output of each model layer a raw softmax — thus discarding knowledge of the confidence each layer of the model had in its output?

Why not instead have the output of each layer be e.g. softmax but rescaled by min(max(pre-softmax vector), 1.0)? Such that layers that would output higher than 1.0 just get softmax'ed normally; but layers that would output all "low-confidence" results (a vector all lower than 1.0) preserve the low confidence in the output — allowing later decoder layers to use that info to build I-refuse-to-answer-because-I-don't-know text?
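The rescaling derefr sketches could look something like the toy example below (a hedged illustration of the comment's idea, not how transformer layers are actually wired): plain softmax always renormalizes to 1 and erases how weak the underlying scores were, while the proposed min(max(scores), 1.0) variant keeps that weakness visible to whatever consumes the output.

```python
import numpy as np

def softmax(x):
    # Standard softmax: the output always sums to 1, so the absolute
    # magnitude of the pre-softmax scores (a rough confidence signal) is lost.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def rescaled_softmax(x):
    # The commenter's idea: if every pre-softmax score is below 1.0,
    # scale the normalized output back down so later layers can still
    # see that nothing was confidently activated.
    return softmax(x) * min(np.max(x), 1.0)

confident = np.array([5.0, 0.1, 0.2])   # one clearly dominant score
uncertain = np.array([0.3, 0.2, 0.25])  # everything weak

print(softmax(uncertain).sum())           # ~1.0 -- uncertainty erased
print(rescaled_softmax(uncertain).sum())  # ~0.3 -- low confidence preserved
print(rescaled_softmax(confident).sum())  # ~1.0 -- unchanged when confident
```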
dawatchusay, 11 months ago
Is confabulation different from hallucination? If not, I do suppose this is a more accurate term for the phenomenon, except that the exact definition isn't common sense without looking it up, whereas "hallucination" is more widely understood.
glymor, 11 months ago
TL;DR: sample the top N results from the LLM and use traditional NLP to extract factoids. If the LLM is confabulating, the factoids will have a random distribution; if it's not, they will be heavily weighted towards one answer.

A figure from the paper shows this better than my TL;DR: https://www.nature.com/articles/s41586-024-07421-0/figures/1
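A rough sketch of the sampling-and-agreement idea glymor describes (not the paper's exact semantic-entropy procedure); sample_answer and extract_factoid are hypothetical callables standing in for a temperature > 0 model call and an NLP factoid-extraction step:

```python
from collections import Counter

def confabulation_check(prompt, sample_answer, extract_factoid, n=10, threshold=0.7):
    """Sketch of the sampling idea described above.

    `sample_answer` and `extract_factoid` are placeholders supplied by the
    caller: e.g. a model API call with temperature > 0, and an NLP step
    that normalizes a free-text answer down to a short factoid string.
    """
    factoids = [extract_factoid(sample_answer(prompt)) for _ in range(n)]
    top_factoid, count = Counter(factoids).most_common(1)[0]
    agreement = count / n
    # Samples heavily weighted toward one factoid suggest a grounded answer;
    # a near-uniform spread suggests the model is confabulating.
    likely_confabulating = agreement < threshold
    return top_factoid, agreement, likely_confabulating
```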
lokimedes, 11 months ago
What we lack is for these models to state the context for their response.

We have focused on the inherent lack of input context leading to wrong conclusions, but what about that 90B+ parameter universe? There is plenty of room for multiple contexts to associate any input along surprising pathways.

In the olden days of MLPs we had the same problem: softmax basically squeezed N output scores into a normalized "probability", where each output neuron was actually the sum of multiple weighted paths; whichever one won the softmax became the "true" answer, but there may as well have been two equally likely outcomes, with just the internal "context" as the difference. In physics we have the path integral interpretation, and I dare say we humans too may provide outputs that are shaped by our inner context.
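As a tiny illustration of that squeezing effect, assuming the usual softmax/argmax readout: two nearly tied output scores still produce a single declared winner, and nothing downstream records that it was effectively a tie.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Two output neurons that are essentially tied: the argmax readout still
# declares a single "true" answer, and whatever internal context produced
# the near-tie is invisible to anything downstream.
scores = np.array([2.01, 2.00])
probs = softmax(scores)
print(probs)                  # ~[0.5025, 0.4975] -- effectively a coin flip
print(int(np.argmax(probs)))  # 0 -- the declared "winner"
```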
zmmmmm, 11 months ago
> There are a number of reasons for this. The AI could have been trained on misinformation; the answer could require some extrapolation from facts that the LLM isn't capable of; or some aspect of the LLM's training might have incentivized a falsehood

This article seems rather contrived. It presents a totally broken idea of how LLMs work (that they are trained from the outset for accuracy on facts) and then presents this research as if it were a discovery that LLMs don't work like that.
ajuc, 11 months ago
A simplistic version of this is just asking the question in two ways: ask for confirmation that the answer is no, then ask for confirmation that the answer is yes :)

If it's sure, it won't confirm it both ways.
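A minimal sketch of that two-way consistency check, assuming a hypothetical ask callable that sends a yes/no prompt to the model and returns its text reply:

```python
def seems_sure(ask, question, candidate):
    """Sketch of the two-way check above.

    `ask` is a hypothetical callable (e.g. a wrapper around a chat API)
    that takes a prompt string and returns the model's text reply.
    """
    yes_frame = ask(f"{question} Is the answer '{candidate}'? Reply yes or no.")
    no_frame = ask(f"{question} Is the answer NOT '{candidate}'? Reply yes or no.")
    confirms_yes = yes_frame.strip().lower().startswith("yes")
    confirms_no = no_frame.strip().lower().startswith("yes")
    # A model that is actually sure should confirm exactly one framing;
    # confirming both (or neither) suggests it is guessing.
    return confirms_yes != confirms_no
```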
gmerc, 11 months ago
So the same as SelfCheckGPT from several months ago?
doe_eyes, 11 months ago
> LLMs aren't trained for accuracy

This assertion in the article doesn't seem right at all. When LLMs weren't trained for accuracy, we had "random story generators" like GPT-2 or GPT-3. The whole breakthrough with RLHF was that we started training them for accuracy - or the appearance of it, as rated by human reviewers.

This step both made the models a lot more useful and willing to stick to instructions, and also a lot better at... well, sounding authoritative when they shouldn't.
techostritch, 11 months ago
This method seems to lean into the idea of the LLM as a fancy search engine rather than true intelligence. Isn't the eventual goal of LLMs or AI that they become smarter than humans? So I guess my questions are:

Is it plausible that LLMs get so smart that we can't understand them? Do we spend, like, years trying to validate scientific theories confabulated by AI?

In the run-up to super-intelligence, it seems like we'll have to tweak the creativity knobs up, since the whole goal will be to find novel patterns humans don't find. Is there a way to tweak those knobs that gets us super genius and not super conspiracy theorist? Is there even a difference? Part of this might depend on whether or not we think we can feed LLMs "all" the information.

But in fact, assuming that Silicon Valley CEOs are some of the smartest people in the world, I might argue that confabulation of a possible future is in fact their primary value. Not being allowed to confabulate is incredibly limiting.