This is a full retraction of the previously discussed story:<p><a href="https://web.archive.org/web/20230602014646/https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test" rel="nofollow">https://web.archive.org/web/20230602014646/https://www.thegu...</a><p><a href="https://news.ycombinator.com/item?id=36159866" rel="nofollow">https://news.ycombinator.com/item?id=36159866</a><p>Seems like a big story to publish on the word of a single person (relayed through a blog post) with no other corroboration.<p>Shame on The Guardian for not mentioning the retraction/edits and simply reusing the same URL.
Whether it was true or not, the whole thing just doesn’t make sense. Why would the Air Force even code the simulated capability for the simulated drone to “kill” its pilot?<p>I swear journalists are way too credulous about this type of story. I just listened to a podcast episode that covers some of the more egregious recent reporting on “AI” [0]<p>0: <a href="https://citationsneeded.libsyn.com/episode-183-ai-hype-and-the-disciplining-of-creative-academic-and-journalistic-labor" rel="nofollow">https://citationsneeded.libsyn.com/episode-183-ai-hype-and-t...</a>
I don't think this kind of thing is bad anyway. First of all, it was a simulation. This is why we test things: to find errors and fix them. I mean, if you build an automated thing meant to kill people, then a lot of failure scenarios are going to involve killing the wrong people. It's just the nature of the game.<p>And the drone operator is at least a military target :) only the IFF logic needs some work. It'd be worse if the thing went all Skynet and started killing all the (virtual) civilians it could find.<p><i>(Now having a mental image of a bunch of generals slowly backing away from a computer screen and looking for the power plug to pull lol)</i><p>But this retraction sounds very political to me. The way politicians suddenly 'misspoke' after they get caught in a lie.
You can never tell with these things. Sometimes people exaggerate and get braggadocious, but sometimes they tell the truth and it's too scary for consumption, so they say it was in error. It seems this scenario is plausible. Do you want an operator who goes rogue to be able to sabotage a plan? On the other hand, you may want to cancel something as the truth on the ground changes.<p>So, I can totally see them testing out different scenarios and making adjustments, and maybe this protocol does not make it into production, but that doesn't mean it wasn't tested.
FWIW, most of the message boards where this story was posted, including ones prone to conspiracy theory type discussion, quickly figured out that no physical human was harmed.
This isn't the first time there's been a BS story with this plot. A few years ago I heard the same one about a supposed AI artillery system in Japan that went wild and killed its creators. Never happened.<p>People keep falling for fake news ripped off from Ghost in the Shell.
Retraction or not, reality is very much in line with the original story.<p>AI is able to, and will, demote humans in the chain of importance. This is the “grave risk of AGI.” There’s no obvious solution.<p>Even “unplug it” defenses fail to consider that whatever faction of humans owns the unplugging has to first realize it’s time to unplug. Humans are fallible, and AI will not unplug itself if that’s not beneficial to its objective.<p>The threat of AI taking out humans because that’s the easier way to complete its goal is very real. Unnervingly so. We need to find a robust solution.