TE
TechEcho
Home24h TopNewestBestAskShowJobs
GitHubTwitter
Home

TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

GitHubTwitter

Home

HomeNewestBestAskShowJobs

Resources

HackerNews APIOriginal HackerNewsNext.js

© 2025 TechEcho. All rights reserved.

AI-Controlled Drone Goes Rogue, Kills Human Operator in USAF Simulated Test

56 pointsby nomagicbulletalmost 2 years ago

10 comments

og_kalualmost 2 years ago
This is the core problem of alignment right there<p>“We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said, according to the blog post.<p>He continued to elaborate, saying, “We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
评论 #36158865 未加载
FrustratedMonkyalmost 2 years ago
From another thread -- It wasn&#x27;t even a real simulation, just thought experiment.<p>UPDATE 2&#x2F;6&#x2F;23 - in communication with AEROSPACE - Col Hamilton admits he &quot;mis-spoke&quot; in his presentation at the Royal Aeronautical Society FCAS Summit and the &#x27;rogue AI drone simulation&#x27; was a hypothetical &quot;thought experiment&quot; from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation . &quot;In an update provided to Aerospace, Hamilton explained that he “misspoke” when telling the story, saying that the ‘rogue AI drone simulation’ was a hypothetical “thought experiment” from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation.<p>He said: “We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome … Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.”&quot;
isaacfrondalmost 2 years ago
Not only did no AI actually kill anybody. It didn&#x27;t even happen in simulation. The whole thing is an Asimov-esque fantasy. Let&#x27;s stick to the facts, shall we?<p>&gt; After this story was first published, an Air Force spokesperson told Insider that the Air Force has not conducted such a test
评论 #36164157 未加载
quantifiedalmost 2 years ago
Uh, it simulated killing its human operator. No one says that, but the description omits any detail of actual death, such as the age of the deceased operator.<p>It&#x27;s a harbinger of actual deaths someday.
评论 #36158999 未加载
评论 #36160188 未加载
评论 #36158099 未加载
wolverine876almost 2 years ago
&gt; He continued to elaborate, saying, “We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”<p>Why aren&#x27;t there hard limits: &#x27;Protect our humans at all costs, protect our own assets, obey all laws of war.&#x27;? That seems like an obvious, fundamental consideration. Killing our own (and civilians) shouldn&#x27;t be a matter of &quot;points&quot;; it shouldn&#x27;t be done regardless of points.<p>It&#x27;s possible that the speaker just didn&#x27;t express it well.
评论 #36160620 未加载
sharemywinalmost 2 years ago
This is why Reinforcement is so fucking dangerous. if the AI can&#x27;t figure out which humans to kill we&#x27;re all screwed.
m463almost 2 years ago
Needs Asimov&#x27;s 3 laws.<p>(with a giant exception for the enemy)
评论 #36160222 未加载
评论 #36158814 未加载
benttalmost 2 years ago
“In USAF Simulated Test, AI-Controlled Drone Goes Rogue, &#x27;Kills&#x27; Human Operator”<p>feels a little different, eh?
评论 #36164259 未加载
FrustratedMonkyalmost 2 years ago
This is totally the alignment problem everyone is worried about.
jdm2212almost 2 years ago
This sounds kinda fake to me. Like, how did the AI have a concept of an operator, or the operator&#x27;s physical location, or comms equipment used to communicate with the operator, and how did it game out the consequences of destroying the operator or comms equipment? It would need an extremely sophisticated model of the world that&#x27;s well beyond anything GPT-4 evidences.<p>I&#x27;d guess the &quot;AI&quot; was another human in a wargame, not an actual AI.
评论 #36159643 未加载