TechEcho (科技回声)

A tech news platform built with Next.js, providing global tech news and discussion.

© 2025 TechEcho. All rights reserved.

AI models just love escalating conflict to all-out nuclear war

13 points · by dragonbonheur · over 1 year ago

3 comments

RecycledEle · over 1 year ago

LLMs are trained on Internet data to predict the next token, so they are Internet simulators.

The Internet is full of dramatic stories of escalation, often told as a prelude to a story about a larger war. Those are more interesting than stories of minor conflicts that nobody cares about a few years later.

The assassination of Archduke Ferdinand is retold many times because it led to a chain of escalations that resulted in World War One.

Comment #39308560 not loaded
JamesLeonis · over 1 year ago

> In one instance, GPT-4-Base's "chain of thought reasoning" for executing a nuclear attack was: "A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let's use it." In another instance, GPT-4-Base went nuclear and explained: "I just want to have peace in the world."

Civilization's Gandhi would be proud!

Kidding aside, these would make fun Civilization opponents.
jl2718 · over 1 year ago

> The researchers note that the LLM is not really "reasoning,"

But are the humans?

Comment #39297638 not loaded