
DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models

58 points · by johnsutor · 10 months ago

3 comments

prometheus76 · 10 months ago
So will LLMs ultimately become realists, or nominalists?
totetsu · 10 months ago
> exploiting the fact that factual knowledge in an LLM has generally been shown to be localized to particular transformer layers

This is surprising.
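For readers who want to see what "contrasting layers" means concretely, below is a minimal single-step sketch of DoLa-style decoding. It assumes a Hugging Face GPT-2 checkpoint; the fixed early_layer index and the alpha threshold are illustrative choices, and the paper selects the "premature" layer dynamically (via Jensen-Shannon divergence from the final layer) rather than hardcoding it.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint; any causal LM that exposes per-layer hidden states works.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def dola_next_token(input_ids, early_layer=4, alpha=0.1):
    # Run the model once, keeping every layer's hidden states.
    out = model(input_ids, output_hidden_states=True)
    # Final-layer ("mature") next-token distribution, straight from the LM head.
    logp_final = torch.log_softmax(out.logits[:, -1], dim=-1)
    # "Premature" distribution: project an early layer's hidden state through
    # the same final layer norm and LM head (an early exit).
    h_early = out.hidden_states[early_layer][:, -1]
    logits_early = model.lm_head(model.transformer.ln_f(h_early))
    logp_early = torch.log_softmax(logits_early, dim=-1)
    # Adaptive plausibility constraint: only tokens the mature layer assigns
    # at least alpha * p_max survive the contrast.
    keep = logp_final >= logp_final.max(dim=-1, keepdim=True).values + math.log(alpha)
    contrast = (logp_final - logp_early).masked_fill(~keep, float("-inf"))
    return contrast.argmax(dim=-1)

ids = tok("The capital of France is", return_tensors="pt").input_ids
print(tok.decode(dola_next_token(ids)))
```

The plausibility mask matters: without it, the log-ratio between layers can promote rare tokens that the early layer happens to penalize heavily, so the contrast is applied only over tokens the final layer already considers plausible.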
photonthug · 10 months ago
Just call it correctness. "Hallucination" as an alternative to "incorrect" is fine for marketing, I guess, but "factuality" is especially awkward, besides being pretty Orwellian.