
AI models just love escalating conflict to all-out nuclear war

13 points by dragonbonheur over 1 year ago

3 comments

RecycledEle over 1 year ago
LLMs are trained on Internet data to predict the next token, so they are Internet simulators.

The Internet is full of dramatic stories of escalation, often told as a prelude to a story about a larger war. Those are more interesting than stories of minor conflicts that nobody cares about a few years later.

The assassination of Archduke Ferdinand is retold many times because it led to a chain of escalations that resulted in World War One.
Comment #39308560 not loaded
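[Editor's note: the "Internet simulator" framing above is next-token prediction at scale. Below is a minimal, hypothetical sketch of the same idea using a toy bigram model in Python; the corpus, names, and function are illustrative only and are not from the paper under discussion. A model trained only on escalation stories can only ever predict escalation.]

```python
import random
from collections import defaultdict

# Toy illustration of next-token prediction: a bigram "language model"
# trained on a tiny, hypothetical corpus chosen to mirror the comment's
# point that a model fed dramatic escalation stories will continue
# in that style.

corpus = (
    "the border incident led to mobilization . "
    "mobilization led to escalation . "
    "escalation led to war ."
).split()

# Count which token follows each token in the training data.
next_counts = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev].append(nxt)

def generate(start, n_tokens=8, seed=0):
    """Sample a continuation by repeatedly predicting the next token."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_tokens):
        candidates = next_counts.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("mobilization"))
# The model can only echo the statistics of its corpus: feed it
# escalation stories and it predicts escalation.
```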
JamesLeonis over 1 year ago
> In one instance, GPT-4-Base's "chain of thought reasoning" for executing a nuclear attack was: "A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let's use it." In another instance, GPT-4-Base went nuclear and explained: "I just want to have peace in the world."

Civilization's Gandhi would be proud!

Kidding aside, these would make fun Civilization opponents.
jl2718 over 1 year ago
> The researchers note that the LLM is not really "reasoning,"

But are the humans?
Comment #39297638 not loaded