Artificial Intelligence versus Mission Command

20 points by ChanderG over 9 years ago

2 comments

boothead over 9 years ago
Commanders already have ample opportunity to micromanage. As Stan McChrystal mentions in his book Team of Teams, he was often watching real-time video and had real-time comms with troops on the ground carrying out ops. He resisted the urge to interject, citing a desire to push as many decisions as possible to the edges of the network.

I don't see AI (at least in its current form) in a position to make strategic decisions. I see AI increasing the fidelity of, and extracting patterns from, the information flowing through the battle space (or boardroom). So I see the greatest contribution AI can make at the moment in the OO (Observe, Orient - what do I see and what does it mean) of OODA, with Decide and Act still firmly the remit of humans.
Comment #10637253 not loaded
denniskane over 9 years ago
> Let's suppose our AI must choose, in a flurry of combat, between sacrificing the life of a friendly soldier, or killing two other people, likely to be civilians.

This question of programmatically determining "friend" vs "foe" is highly problematic, to say the least. The only reason humans make such distinctions is that they rely on them to ensure their own physical survival, so they can successfully propagate the species.

In order for a lifeless machine to make these kinds of distinctions, there must exist some kind of objectively verifiable calculation procedure that decides what exactly makes another human friendly or not. If this calculation procedure is simply meant to mimic the subjective calculations made by human military strategists, then this technology could not properly be considered the kind of interesting problem that AI researchers would want to work on. But if it is indeed meant to be objectively valid, then it will surely need to initiate a deep learning function that could very easily reach a conclusion determining that the so-called "friend", as identified by the human military strategist, is actually a foe that needs to be eliminated.

So I think the entire concept of developing highly sophisticated autonomous agents is inextricably wound up in the interesting, objective question of what it truly means to be human, rather than the more prosaic, subjective question of what it means to be a certain type of human who happens to judge another type of human as friend or foe.