科技回声 (Tech Echo)

A tech news platform built with Next.js, offering global technology news and discussion.


© 2025 科技回声 (Tech Echo). All rights reserved.

Why knowledge overshadowing causes LLMs to hallucinate despite training on truth

2 points | by thoughtpeddler | 10 months ago

1 comment

illuminant | 10 months ago
I think there is something to this knowledge overshadowing throughout human communication.

Speaking purposefully can be like writing an intricate query: careful to order the minimally specific attributes first, making further criteria more index-efficient. Minds, too, have a contextual index. Overloaded presumptions and expectations explain much of our own misunderstandings.

For reliable results, our questions must be well known to us.
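The commenter's query analogy can be sketched concretely: when predicates are evaluated in sequence, putting a highly selective criterion first means later criteria only touch the few candidates that survive. The following Python sketch is purely illustrative (the data, predicates, and counts are hypothetical, not from the thread or the linked article):

```python
def filter_ordered(records, predicates):
    """Apply predicates in order, short-circuiting on the first failure.

    Returns the matching records and the total number of predicate
    evaluations performed, so two orderings can be compared.
    """
    evals = 0
    matches = []
    for rec in records:
        keep = True
        for pred in predicates:
            evals += 1
            if not pred(rec):
                keep = False
                break  # later predicates never see this record
        if keep:
            matches.append(rec)
    return matches, evals

# Hypothetical dataset: 1000 records, one per year.
records = [{"lang": "en", "year": y} for y in range(1000, 2000)]

selective = lambda r: r["year"] >= 1990   # matches only 10 of 1000 records
broad = lambda r: r["lang"] == "en"       # matches every record

# Selective criterion first: the broad check runs on few survivors.
hits_a, evals_a = filter_ordered(records, [selective, broad])

# Broad criterion first: the selective check still scans everything.
hits_b, evals_b = filter_ordered(records, [broad, selective])

assert hits_a == hits_b          # same answer either way
assert evals_a < evals_b         # but far less work when selective goes first
```

Both orderings return the same ten records; only the amount of work differs, which is the "index efficiency" the comment gestures at.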