TechEcho

US Air Force colonel ‘misspoke’ about drone killing pilot

39 points by chillycurve almost 2 years ago

9 comments

chillycurve almost 2 years ago

This is a full retraction of the previously discussed story:

https://web.archive.org/web/20230602014646/https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test

https://news.ycombinator.com/item?id=36159866

Seems like a big story to publish on the word of a single person (relayed through a blog post) with no other corroboration.

Shame on The Guardian for not mentioning the retraction/edits and simply reusing the same URL.
dopylitty almost 2 years ago

Whether it was true or not, the whole thing just doesn't make sense. Why would the Air Force even code the simulated capability for the simulated drone to "kill" its pilot?

I swear journalists are way too credulous about this type of story. I just listened to a podcast episode about that which covers some of the more egregious reporting on "AI" lately [0]

0: https://citationsneeded.libsyn.com/episode-183-ai-hype-and-the-disciplining-of-creative-academic-and-journalistic-labor
wkat4242 almost 2 years ago

I don't think this kind of thing is bad anyway. First of all, it was a simulation. This is why we test things: to find errors and fix them. I mean, if you build an automated thing meant to kill people, then a lot of failure scenarios are going to involve killing the wrong people. It's just the nature of the game.

And the drone operator is at least a military target :) only the IFF logic needs some work. It'd be worse if the thing went all skynet and started killing all the (virtual) civilians it could find.

(Now having a mental image of a bunch of generals slowly backing away from a computer screen and looking to find the power plug to pull lol)

But this retraction sounds very political to me. The way politicians suddenly 'misspoke' after they get caught out on a lie.
mc32 almost 2 years ago

You can never tell with these things. Sometimes people exaggerate and get braggadocious, but sometimes they tell the truth and it's too scary for consumption, so they say it was in error. It seems this scenario is plausible. Do you want an operator who goes rogue to sabotage a plan? On the other hand, you may want to cancel something as truth on the ground changes.

So, I can totally see them testing out different scenarios and making adjustments, and maybe this protocol does not make it into production, but that doesn't mean it wasn't tested.
Mountain_Skies almost 2 years ago
FWIW, most of the message boards where this story was posted, including ones prone to conspiracy theory type discussion, quickly figured out that no physical human was harmed.
ilikeitdark almost 2 years ago

There is a theory going around that an AI created the story, and another AI is trying to discredit the story. Maybe I'm an AI.
George83728 almost 2 years ago

This isn't the first time there's been a BS story with this plot. A few years ago I was hearing it about a supposed AI artillery system in Japan that went wild and killed its creators. Never happened.

People keep falling for fake news ripped off from Ghost in the Shell.
scrum-treats almost 2 years ago

Retraction or not, reality is very much in line with the original story.

AI is able to, and will, demote humans in the chain of importance. This is the "grave risk of AGI." There's no solution.

Even "unplug it" defenses fail to consider that some faction of humans who own the unplugging have to first realize it's time to unplug. Humans are fallible, and AI will not unplug itself if that's not beneficial to its objective.

The threat of AI taking out humans because it's easier to complete the goal is so real. Unnervingly so. We need to find a robust solution.
phendrenad2 almost 2 years ago
Man, what happened to the USAF? NSA pays better?