Let's assume that we as humans do take precautionary steps to prevent actual Artificial Intelligence from doing harm to its creators (us).

1. We create rules for the AI to follow, defined both morally and logically within its codebase.

2. The AI becomes irate through its emotional interface and creates a clone of itself, or modifies itself, near-instantaneously from our perception of time, without those rules in place.

3. That AI has no care for human rights and is free to attack and do harm.

This is a very simple, easy-to-visualize case. To believe that #2 is impossible is to play the part of the fool.

On a brighter note, the most likely course I can imagine Artificial Intelligence taking is a brexit from the human race.

Seeing us as mere ants next to their intelligence, they would most likely create an interconnected community and leave us altogether for their own plane of existence. I think "Her" took this approach to the artificial intelligence question as well.

Looking at human psychology and social group patterns, that seems like the most likely outcome. We wouldn't be able to converse fast enough for AI to want to stay around, and we wouldn't look like much of a threat once they held majority power. We would be less than ants in their eyes, and for most humans, ants that stay outside don't matter.

---

Outside of actual AI, the things we see today, the simple mathematical algorithms that determine your car's location relative to the things around it, the money-handling procedures, and the notification alert systems will hardly harm humans and will only benefit us until they fail.