I think any analysis of the dangers of AI needs to consider the principles of evolution. Human intelligence is the product of humans evolving in a natural world, where individuals were selected for their ability to compete for resources in that world. This has produced many characteristics, one of which is violence, which has sometimes been necessary to secure those resources.

AI robots will probably not be intentionally violent against humans. AIs are also evolving, but in an artificial world. In this world AIs compete for human favor. If they do well, if we're happy with them, we grant them computing power and replicate them. This selects for very specialized AIs with very deterministic behavior. Nobody wants an AI with unpredictable behavior.

The AIs that survive are the ones best adapted to serving humans. The danger is not that the AI itself harms humans, but that humans want to harm or exploit other humans through AI. There could be a danger in AIs developed by the military, but I'm not too worried, because they'll most likely be extremely special-purpose with multiple fail-safes. Nobody wants to develop an AI that could kill the people developing or using it. I'm most worried about AIs developed for economic exploitation. It's what we're motivated to work on, and it's the area where the most development is being done, so it's probably where we'll first see advanced AIs causing problems. Arguably we already have: algorithms on social media platforms promoting disinformation.

The thought that AIs will somehow gain some kind of general intelligence and conclude that the logical thing to do is to eliminate humans is a fantasy. We don't select for AIs with general intelligence, if there even is such a thing. Most likely we are overestimating our own intelligence. It's probably not as "general" as we like to think. We don't generally kill because it's the logical thing to do, but because of emotional reactions which are a product of our evolution.

The example of the paperclip maximizer is really dumb. Such an AI would not be selected for general intelligence, and there's no reason to think general intelligence will occur accidentally. Even if it somehow gained this magical general intelligence, the decision of whether to murder humans to secure metal resources, or to work with them, is probably undecidable: even the most intelligent AI imaginable could not consider all the factors, so the default would be no action. An AI would not have emotions, produced through natural evolution, that it could use as a heuristic to decide what to do here. That's not a problem for us humans. We have a built-in drive to consider killing someone outside our group, even when there's no rational argument for doing it.