> Let's suppose our AI must chose, in a flurry of combat, between sacrificing the life of a friendly soldier, or killing two other people, likely to be civilians.

This question of programmatically determining "friend" vs "foe" is highly problematic, to say the least. The only reason humans make such distinctions is that they rely on them to ensure their own physical survival and, ultimately, to propagate the species.

For a lifeless machine to make these kinds of distinctions, there must exist some objectively verifiable procedure for deciding what makes another human friendly or hostile. If that procedure is simply meant to mimic the subjective judgments of human military strategists, then it hardly qualifies as the kind of interesting problem AI researchers would want to work on. But if it is meant to be objectively valid, then it will have to learn its own criteria, and such learning could just as easily conclude that the so-called "friend" designated by the human strategist is actually a foe to be eliminated.

So I think the entire project of building highly sophisticated autonomous agents is inextricably bound up with the interesting, objective question of what it truly means to be human, rather than the more prosaic, subjective question of what it means to be one type of human judging another type of human as friend or foe.