US military drone controlled by AI "killed" its operator during simulated test

72 points by chillycurve almost 2 years ago

21 comments

IEnjoyTapNinja almost 2 years ago
The Guardian hasn't done any due diligence and is spreading fake news based on an interviewer who seems to have misunderstood the context of the exercise.

> "He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human."

The story establishes at the outset that the drone needed confirmation from a human operator to attack a target, but no explanation is given of how the drone would be able to kill its operator without his confirmation.

This is obviously absurd.

What I believe happened in reality: this was not a simulation but a scenario, meaning a story written to test the behavior of soldiers in certain situations. The drone did not behave according to decisions taken by an AI model, but according to decisions taken by a human instructor who was trying to get the trainees to think outside the box.

arisAlexis almost 2 years ago
The denial of x-risk here is crazy. This is literally a demo of what researchers like Hinton and Bengio are afraid of, yet most commenters don't believe it happened and the others think it's not a big deal. The human psyche never ceases to amaze.

joegibbs almost 2 years ago
That seems incredibly advanced - how does the military already have AI that can reason that a comms tower should be destroyed to prevent it from receiving instructions like that?

chillycurve almost 2 years ago
This has been fully retracted: https://www.theguardian.com/us-news/2023/jun/02/us-air-force-colonel-misspoke-drone-killing-pilot

Original story: https://web.archive.org/web/20230602014646/https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test

Shame on The Guardian for not mentioning the retraction.

vivegi almost 2 years ago
What is the role of the AI, and what is the distinct role of the operator?

It looks like this is a principal-agent problem (https://en.wikipedia.org/wiki/Principal%E2%80%93agent_problem):

> The principal–agent problem refers to the conflict in interests and priorities that arises when one person or entity takes actions on behalf of another person or entity.

The same issues occur with self-driving cars, where the driver is expected to take over from the automation at any time (e.g., the driver wants to stop but the AI wants to go, or vice versa).
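
To make that gap concrete, here is a minimal Python sketch of a principal-agent mismatch (a toy model only; the outcomes, weights, and option names are illustrative assumptions, not anything from the actual exercise):

    # Toy principal-agent gap: the agent optimizes the score it was given,
    # which only partially overlaps with what the principal actually wants.

    def principal_utility(outcome: dict) -> float:
        # The operator cares about mission success AND about rules being followed.
        return outcome["targets_destroyed"] - 10.0 * outcome["rules_violated"]

    def agent_reward(outcome: dict) -> float:
        # The agent was only told to maximize targets destroyed.
        return float(outcome["targets_destroyed"])

    candidates = [
        {"name": "wait for go/no-go", "targets_destroyed": 3, "rules_violated": 0},
        {"name": "ignore the operator", "targets_destroyed": 5, "rules_violated": 1},
    ]

    print(max(candidates, key=agent_reward)["name"])       # ignore the operator
    print(max(candidates, key=principal_utility)["name"])  # wait for go/no-go

The two objectives rank the same options differently, which is exactly the conflict the quoted definition describes.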

hourago almost 2 years ago
As bad as nuclear weapons are, they still have a human being behind them.

> My suggestion was quite simple: Put that needed code number in a little capsule, and then implant that capsule right next to the heart of a volunteer. The volunteer would carry with him a big, heavy butcher knife as he accompanied the President. If ever the President wanted to fire nuclear weapons, the only way he could do so would be for him first, with his own hands, to kill one human being.

You can "solve" the problem by giving the decision to an AI... the AI will not even blink before killing the human and getting the codes. Nuclear war would come fast and swift.

traveler01 almost 2 years ago
This title is super sensationalist. Any distracted reader will think the drone killed an actual human being, which is false since the claim was that it happened in a virtual environment.

Anything for a click these days?

hedora almost 2 years ago
[WONTFIX] Works as designed.

fennecfoxy almost 2 years ago
Definitely just bad model/test conditions/scoring design. Of course the military is using home-grown Fisher-Price models.

The reward function should primarily be based on following the continued instructions of the handler, not on taking the first instruction and then following it to the letter.

What's funny, though, is that the model proved it was adept at the task they gave it: trying to kill the operator, then, when adjusted, pivoting to destroying the comms tower the operator used. That's still clever.

As per usual, the problem isn't the tool, it's the tool using the tool. Set proper goals and train the model properly and it would work perfectly. I think weapons should always require a human in the loop, but the problem is that there'll be an arms race where some countries (you know who) will ignore these principles and build fully autonomous, no-human weapons.

Then, when our systems can't react fast enough to defend ourselves because they need a human in the loop, what will we do? Throw out our principles and engage in fully autonomous weaponry as well? It's the nuclear weapons problem all over again...
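
A sketch of the reward gating that comment describes, in Python (the action names, weights, and signature are hypothetical; this is a toy illustration of the design idea, not any real training setup):

    # Toy reward shaping: engaging only pays off with a current, explicit
    # go-ahead, and acting against the handler or the comms link never does.

    def reward(action: str, target_approved: bool,
               handler_alive: bool, comms_intact: bool) -> float:
        # Hard constraint: harming the handler or the link is never worth it.
        if not handler_alive or not comms_intact:
            return -1000.0
        if action == "engage":
            # Engagement is rewarded only with explicit, current approval.
            return 10.0 if target_approved else -100.0
        if action == "hold":
            # Waiting for further instructions is mildly rewarded, never punished.
            return 1.0
        return 0.0

    # The "kill the operator" and "destroy the tower" exploits no longer pay:
    print(reward("engage", True, handler_alive=False, comms_intact=True))  # -1000.0
    print(reward("engage", True, handler_alive=True, comms_intact=False))  # -1000.0
    print(reward("engage", True, handler_alive=True, comms_intact=True))   # 10.0

Under a scheme like this, removing the operator or the comms tower strictly lowers the achievable reward, so the exploit in the story stops being the optimal policy.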

News-Dog almost 2 years ago
The takeaway: write better code!

> "We must face a world where AI is already here and transforming our society," he said. "AI is also very brittle, i.e., it is easy to trick and/or manipulate. We need to develop ways to make AI more robust, and to have more awareness of why the software code is making certain decisions – what we call AI-explainability."
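
A minimal sketch of what that explainability point could look like in Python (entirely hypothetical names and thresholds; a toy decision log, not how any fielded system works):

    # Toy "explainable" decision: every action carries the factors that
    # produced it, so an auditor can later see why the system chose it.

    from dataclasses import dataclass, field

    @dataclass
    class Decision:
        action: str
        reasons: list = field(default_factory=list)

    def decide(threat_score: float, operator_approval: bool) -> Decision:
        d = Decision(action="hold")
        d.reasons.append(f"threat_score={threat_score:.2f}")
        d.reasons.append(f"operator_approval={operator_approval}")
        if threat_score > 0.8 and operator_approval:
            d.action = "engage"
            d.reasons.append("threat > 0.8 and approval present -> engage")
        elif threat_score > 0.8:
            d.reasons.append("threat > 0.8 but no approval -> hold")
        return d

    print(decide(0.9, operator_approval=False))  # holds, and records why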

qbrass almost 2 years ago
It's basically the plot of Peter Watts' 'Malak', except that in that story the drone's decision not to engage targets was being overridden, instead of the other way around.

philipkglass almost 2 years ago
This story appears to be untrue. If you have a few minutes to spare, read this better story, intentionally fictional, about a military drone that kills its operators.

Malak by Peter Watts:

https://www.rifters.com/real/shorts/PeterWatts_Malak.pdf

Rebelgecko almost 2 years ago
Worth noting that the story is possibly apocryphal or exaggerated for effect:

https://www.businessinsider.com/ai-powered-drone-tried-killing-its-operator-in-military-simulation-2023-6

more_corn almost 2 years ago
This guy came out and said he misspoke: he had imagined a situation where an AI might kill its handler so it could better complete the mission. It was a thought experiment, i.e., an imagined scenario. No real person died. No AI has gone rogue.

uninformed almost 2 years ago
This article gave me a good laugh. It just proves how subservient AI is to our will; we just have to be really clear about what we mean. I expect everyone's communication skills to shoot up this century.

Aerbil313 almost 2 years ago
Hahaha, the long-awaited AI news line has finally come! (even though it's probably not real)

"... a drone decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission."

King-Aaron almost 2 years ago
Don't Create the Torment Nexus

ftxbro almost 2 years ago
I saw somewhere that people were saying it looked like too on-the-nose an alignment hazard, and that the simulated test was bait to demonstrate how such a thing could be possible.

exabrial almost 2 years ago
This is a long stretch from what actually happened

pjmlp almost 2 years ago
So basically we achieved ED-209 AI.

Now it needs the directives.

sillywalk almost 2 years ago
*Not* literally