
The more sophisticated AI models get, the more likely they are to lie

5 points by einarfd 8 months ago

2 comments

jqpabc123 8 months ago
In other words, answers derived from statistical processes are not very reliable.

Who knew?

In some ways, LLMs are anti-computers. They negate much of the utility that made computing popular: instead of reliable answers at low cost, we get unreliable answers at high cost.
richrichie 8 months ago
It is wild how humanised neural networks have become! The use of terms like "lying" or "hallucination", even in research settings, is going to be problematic. I can't articulate it well, but it is going to restrict our ability to problem-solve.