An approach for automated fact checking

1 point by domysee about 1 month ago

1 comment

mike_hearn about 1 month ago
It's a cool attempt, but the included example shows some of the problems. One of them is that it tends to treat a fact check of a quote as simply asking whether the person really said it; since the news article is usually the only place the quote appears, such sentences are frequently listed as unverifiable. Most users want to know whether the claim in the quote is true, not whether it was actually said. That's probably fixable with more prompting.

The reliance on search results is a bigger weakness, although it's hard to know how to avoid it. The sorts of claims people most often want to check are politically disputed. Unfortunately the left uses the word fact-checking to mean censorship of the right, and Google has been engaged in this activity for a very long time, meaning that unless you use alternative search engines or know how to work around their blocks, true information may simply not surface at all when you search for it.

The biggest issue for trusting news, though, is simply omitted information. Last night I watched a report on the BBC for the first time in several years, and it was immediately misleading. It was about the new Trump tariffs, and as far as I know no statement in it was exactly incorrect, yet there was no mention of the US position of tying tariffs to free speech. Instead the report gave the impression that the Americans had raised nothing specific in their talks with the UK on the topic. This is entirely predictable behavior by the BBC, but the approach of checking individual sentences will never reveal it, whereas a gestalt approach, where an LLM is asked to verify an article in one go, possibly could.
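A minimal sketch of the contrast in that last paragraph: per-sentence checking versus whole-article ("gestalt") verification. The `call_llm` helper, the naive sentence splitter, and the prompt wording are hypothetical placeholders, not part of the project under discussion:

```python
# Hypothetical sketch: per-sentence fact checking vs. whole-article
# ("gestalt") verification. call_llm is a placeholder for whatever
# LLM client is actually used; it takes a prompt and returns text.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an actual LLM client here")

def check_sentences(article: str) -> list[str]:
    """Check each sentence in isolation. This can only flag claims that
    are individually wrong or unverifiable; it cannot notice what the
    article leaves out."""
    verdicts = []
    # Naive split on ". " stands in for a real sentence segmenter.
    for sentence in article.split(". "):
        verdicts.append(call_llm(
            f"Is the following claim true, false, or unverifiable?\n{sentence}"
        ))
    return verdicts

def check_gestalt(article: str) -> str:
    """Ask for a verdict on the article as a whole, explicitly covering
    misleading-by-omission, which sentence-level checks miss."""
    return call_llm(
        "Assess this article as a whole. Beyond the truth of individual "
        "statements, note any relevant context or facts it omits that "
        "would change the impression it gives.\n\n" + article
    )
```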