
AI hallucinations are getting worse – and they're here to stay

11 points by OutOfHere 2 days ago

2 comments

lsy 1 day ago
One thought technology for understanding "hallucination" is that LLMs can only predict a fact statistically, using the syntax available in their training data. This means that when you ask for a fact, you are really asking the computer to "postcast", i.e. statistically predict the past, based on its training data.

That's why it "hallucinates": sometimes the prediction of the past is wrong about the past. This differs from what people do, in that we don't see the past or present as a statistical field; we see them as concrete and discrete. And once we learn a sufficiently believable fact, we generally assume it to be fully true, pending information to the contrary.
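To make the "postcast" framing concrete, here is a minimal Python sketch. It is not tied to any real model; the prompt, the candidate continuations, and their probabilities are invented for illustration. The point is only that a factual question gets answered by sampling from a learned distribution over continuations, so the same prompt can yield a plausible but wrong answer.

```python
# Toy illustration of "postcasting": a factual query is answered by sampling
# from a probability distribution learned from training text, so a frequent
# but wrong continuation can be produced -- the "hallucination".
import random

# Hypothetical next-token distribution for the prompt
# "The capital of Australia is" -- the probabilities are made up.
continuations = {
    "Canberra": 0.62,    # correct, but only statistically favored
    "Sydney": 0.30,      # common in text about Australia, wrong as a fact
    "Melbourne": 0.08,
}

def postcast(dist: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one continuation, i.e. statistically 'predict the past'."""
    tokens = list(dist)
    weights = [p ** (1.0 / temperature) for p in dist.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    random.seed(0)
    answers = [postcast(continuations) for _ in range(10)]
    # The same prompt yields a mix of correct and incorrect answers.
    print(answers)
```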
OutOfHere 1 day ago
In my experience, this is an issue with the newer reasoning models only, e.g. o3 and o4-mini. It is not an issue with gpt-4.5.

o3 loves to hallucinate a couple of assertions toward the end of a lengthy response.