'AI will not be a separate intelligent entity, it will be the extension/evolution of existing intelligent entity, that is us.'<p>With respect to an intellectual threat, I don't think you can say that. Your assumption is that any behaviour resulting from the creation of a separate intelligent entity (no one has the slightest idea how to do this) will be predictable, or constrained by us, because it 'evolved from us'. Why should that be?
People don't really understand AI. I wrote a post about why AI is not a threat; the arguments are more logical and fact-driven than philosophical. <a href="https://medium.com/@ankur_dhama/artificial-intelligence-a-threat-d525799f912b" rel="nofollow">https://medium.com/@ankur_dhama/artificial-intelligence-a-th...</a>
There are seven questions in this article, and topics like machine learning and program verification are in scare quotes.<p>I don't think a competent engineer was consulted on this article.<p>Perhaps as a community we should communicate with the press in a more formal fashion.
I've been giving this a lot of thought lately because of Musk and Hawking. I think they are wrong, for a couple of simple reasons.<p>1) A true AI won't care about us, or probably about anything. It may simply destroy itself.<p>2) It would be very simple to destroy: an EMP, etc.
In one respect, AI is already damaging the sense of self-worth of swathes of humanity by making them economically redundant. This will be an accelerating threat for some time.