
Ask HN: Given a sufficiently complex argument, people deduce anything they like

4 points | by ricardo81 | 5 days ago
Is there a principle for such a thing?

Anecdote: people who choose to believe in something can search the web and confirm the conclusion they already held by finding something agreeable.

The person may be reasonably objective, but given enough technobabble, they'll still reach the conclusion they started with.

4 comments

90s_dev | 5 days ago
> Is there a principle for such a thing?

Confirmation bias.
Reply #43897038 not loaded
beardyw | 5 days ago
It seems to apply to AI as well, so don't be too judgemental.
gogurt2000 | 4 days ago
To me that sounds like sophistry (intentional or not). Wikipedia summarizes it nicely:

"Sophistry" is today used as a pejorative for a superficially sound but intellectually dishonest argument in support of a foregone conclusion.

Loosely related: the '60s sci-fi novel "The Moon Is a Harsh Mistress" explored the idea of computers with AI powerful enough to construct a logically persuasive argument for any stance by cherry-picking and manipulating the facts. In the book, I think those computers were called Sophists, which seems particularly relevant today. You can absolutely ask an LLM to construct an argument supporting any stance, and, just as in the book, LLMs can be used to produce misinformation and propaganda at a scale that makes it difficult for humans to discern the truth.
bjourne | 4 days ago
Can you give an example?