
Hallucination Is Inevitable: An Innate Limitation of Large Language Models

3 points by PerryCox over 1 year ago

1 comment

rapatel0 over 1 year ago
This is pretty obvious, no? LLMs are basically a lossy compression of their dataset. Being lossy, they will necessarily have error (hallucinations). Furthermore, the underlying data is a human approximation of truth, so it will have error as well. Shall we publish a paper on the linked list?
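A minimal sketch of the counting argument the comment gestures at, assuming a toy setup of my own (the `train`/`answer` helpers and the bit counts are hypothetical, not from the paper or the comment): if a "model" has fewer bits of capacity than the dataset it summarizes, the pigeonhole principle forces it to answer at least one fact wrong for some dataset.

```python
# Toy pigeonhole illustration: a k-bit "model" cannot losslessly store
# all 2**n possible n-bit fact tables, so for some dataset it must
# recall at least one fact incorrectly (a "hallucination").
# Hypothetical sketch only; train/answer are stand-ins, not a real LLM.
from itertools import product

n = 4  # number of yes/no "facts" in the dataset
k = 2  # model capacity in bits (k < n, i.e. lossy compression)

def train(facts):
    """Compress an n-bit fact tuple into k bits (here: just truncate)."""
    return facts[:k]

def answer(model):
    """Reconstruct all n facts from the k stored bits (unknowns guessed as 0)."""
    return model + (0,) * (n - len(model))

worst_errors = 0
for facts in product((0, 1), repeat=n):
    recalled = answer(train(facts))
    errors = sum(a != b for a, b in zip(facts, recalled))
    worst_errors = max(worst_errors, errors)

# 2**n datasets map onto only 2**k distinct models, so some dataset is
# always recalled with errors whenever k < n.
print(f"worst-case wrong answers: {worst_errors}")  # prints a value > 0
```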
Comment #39315219 not loaded.