A bit scarier are the political consequences. One of the strongest controls on rogue generals is that a fair chunk of their troops will mutiny if told to destroy their home country for said general's benefit, and that logistical support will vanish.<p>Armies of killer robots radically change that calculation. It isn't obvious exactly how, but it probably takes ordinary people out of the equation and makes it a question of what the elites want. If the programmers are on side and the factories are mostly automated, the ability of a population to push back against its own military becomes complicated.
"The Second Amendment in Iraq, Combat Robotics, and the Future of Human Liberty" argues that combat robots are <i>unconstitutional</i> for the US to develop, because they render the second amendment useless.<p><a href="http://vinay.howtolivewiki.com/blog/bigdeal/the-second-amendment-in-iraq-combat-robotics-and-the-future-of-human-liberty-820" rel="nofollow">http://vinay.howtolivewiki.com/blog/bigdeal/the-second-amend...</a>
OK, so if the robot commits war crimes, who is responsible?<p>The designer? The soldier who turned it on? The general who deployed a couple of thousand of them to an urban center before anyone knew about the “bug”?
>> could not be understood by people and thus programmers are detached from the decisions making by machines.<p>This isn't really true. A programmer may not understand the exact mechanism of computation but they will have trained it in a particular way using particular training data to do a particular purpose. That purpose is known to the programmer so this fact actually doesn't change anything. It is possible the robot will occasionally act in an unexpected way, but the majority of the time it will follow it's programmed objective, even if following that objective unpredictably.