I always thought our biggest existential threats were nukes and other WMDs. AI could potentially become yet another existential threat (especially if mixed with WMDs).<p>I would not fret over computers using AI to "see" or "hear" or even learn how to walk. I would start worrying the day computers start formulating judgments. I could not find any research in that direction. Does anyone know if such research exists?<p>On a more personal note, I can't say I agree with some people picking on Elon for formulating this opinion, especially since it does not sound so unreasonable, really.
As a result of these fears, AIs will be sandboxed:<p><a href="http://goo.gl/AUJH4t" rel="nofollow">http://goo.gl/AUJH4t</a><p>Subsequently having their interactions monitored and restricted for a period of probation:<p><a href="http://goo.gl/3XJjHY" rel="nofollow">http://goo.gl/3XJjHY</a><p>Look familiar?
Wishful thinking. In the unlikely event that AI reaches the level of a housefly within my lifetime, it will occur at a level above humanity, in the same way that an individual's consciousness occurs at a level above brain cells.