
Hallucination Is Inevitable: An Innate Limitation of Large Language Models

3 points by PerryCox over 1 year ago

1 comment

rapatel0 over 1 year ago
This is pretty obvious, no? LLMs are basically a lossy compression of their dataset. Being lossy, they will necessarily have error (hallucinations). Furthermore, the underlying data is a human approximation of truth, so it will have error as well. Shall we publish a paper on the linked list?
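(A minimal sketch of the counting argument behind this comment, assuming a toy "model" whose capacity is smaller than its dataset; the bit widths and function names are illustrative, not from the linked paper.)

```python
# Pigeonhole sketch: a store with fewer bits of capacity than its
# dataset cannot reconstruct every entry exactly, so some answers
# must come back wrong -- the "lossy compression implies error" claim.

DATASET_BITS = 8   # each "fact" is one of 2**8 possible values
MODEL_BITS = 6     # the model can only distinguish 2**6 states

def compress(fact: int) -> int:
    """Lossy compression: keep only the top MODEL_BITS bits."""
    return fact >> (DATASET_BITS - MODEL_BITS)

def reconstruct(code: int) -> int:
    """Decompress by guessing the dropped low bits as zeros."""
    return code << (DATASET_BITS - MODEL_BITS)

facts = range(2 ** DATASET_BITS)                 # every possible fact
errors = sum(reconstruct(compress(f)) != f for f in facts)

print(f"{errors}/{len(facts)} facts reconstructed incorrectly")
# -> 192/256: with 2 bits lost, 3 of every 4 facts come back wrong.
```

Whatever the decompression rule, at most 2**MODEL_BITS of the 2**DATASET_BITS facts can be recovered exactly; the rest are, by construction, confabulated.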