Artificial Intelligence versus Mission Command

20 points by ChanderG over 9 years ago

2 comments

boothead over 9 years ago
Commanders already have ample opportunity to micromanage. As Stan McChrystal mentions in his book Team of Teams, he was often watching real-time video and had real-time comms with troops on the ground carrying out ops. He resisted the urge to interject, citing a desire to push as many decisions as possible to the edges of the network.

I don't see AI (at least in its current form) in a position to make strategic decisions. I see AI increasing the fidelity of, and extracting patterns from, the information flowing through the battle space (or boardroom). So the greatest contribution AI can make at the moment is in the OO (Observe, Orient - what do I see and what does it mean) of OODA, with Decide and Act still firmly the remit of humans.
denniskane over 9 years ago
> Let's suppose our AI must choose, in a flurry of combat, between sacrificing the life of a friendly soldier, or killing two other people, likely to be civilians.

This question of programmatically determining "friend" vs "foe" is highly problematic, to say the least. The only reason humans make such distinctions is that they rely on them to ensure their own physical survival, so they can successfully propagate the species.

For a lifeless machine to make these kinds of distinctions, there must exist some objectively verifiable calculation procedure that decides what exactly makes another human friendly or not. If this procedure is simply meant to mimic the subjective calculations made by human military strategists, then the technology could not properly be considered the kind of interesting problem that AI researchers would want to work on. But if it is indeed meant to be objectively valid, then it will surely need to initiate a deep learning process that can very easily reach the conclusion that the so-called "friend", as designated by the human military strategist, is actually a foe that needs to be eliminated.

So I think the entire concept of developing highly sophisticated autonomous agents is inextricably bound up in the interesting, objective question of what it truly means to be human, rather than the more prosaic, subjective question of what it means to be a certain type of human that happens to judge another type of human as friend or foe.