I think the most important point is "Additionally, AI is not a single entity. ... AI is not a he or a she or even an it, AI is more like a 'they.'"

All the horrible "clippy" scenarios involve ONE AI that becomes superintelligent (and therefore powerful -- another fallacy) without any similarly intelligent and powerful entities around it. Instead we'll have incremental progress, and if we ever do get superintelligent (but probably not super-powerful) machines, they'll be embedded in an ecology of other machines nearly as intelligent and quite likely more powerful.

I'm not saying this doesn't pose risks, but they aren't the risks that the AGI threat folks are studying.
This article only confirms Elon Musk's fears. "Decades away" and "only need to worry about the people behind it".

Well, yeah.
Very well said. Even if we achieve a full-fledged AGI, it would be a mistake to anthropomorphize it and assign it human-like intentions, desires, and behaviors - unless somebody explicitly programmed it that way. The idea of an "evil" AI seems downright silly to me.

That's not to say an AI could never be dangerous in some scenario, but the "demon" comparisons and other recent hyperbole are, IMO, a bit misplaced.