Here's the problem with the "Let's just agree not to do this research!" plan that everyone seems to suggest when they start thinking about existential risks: when we're sitting around in 2030 with a million times more computing power at our fingertips than we have today, constructing a workable AI just isn't going to be that difficult of an engineering problem. We already know the equations we'd need to do general intelligence; the trouble is that they're not computable with finite computing power, so we'd have to settle for approximations, and right now that's not realistic because the approximation schemes we know of run far too slowly. Pump up our computing power a million times and those schemes start to look a lot more realistic, especially with some halfway decent pruning heuristics.

It's bad enough that (IMO) by 2040 or so, any reasonably smart asshole in his basement could probably do it on his laptop with access to only the reference materials available *today*; I have no idea how you avoid that risk by making some political agreement. Hell, ban the research altogether on pain of death, and there will still be some terrorist team working on it somewhere (and that's even if all the governments actually stop work on it, which they won't).

The only positive way out of this is to go to great pains to figure out how to design safe (friendly) AI, and to do so while it's still too difficult for random-dude-with-a-botnet to achieve (and preferably before the governments of the world see it as feasible enough to throw military research dollars at). We need to tackle the problem while it's still a difficult software problem, not a brute-force one that can be cracked by better hardware.
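
For the curious, the canonical example of such an equation (and presumably the kind of thing meant above) is Hutter's AIXI, which defines the optimal agent as an expectimax over all environment programs consistent with the agent's history so far, weighted by a Solomonoff-style simplicity prior. Roughly, in LaTeX:

    a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
        \big[\, r_k + \cdots + r_m \,\big]
        \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Here the a's are actions, the o's and r's are observations and rewards, U is a universal Turing machine, ℓ(q) is the length of program q, and m is the planning horizon. The inner sum ranges over *all* programs consistent with the history, which is what makes it incomputable; the known computable cousins (AIXI-tl, the Monte Carlo AIXI approximations) are exactly the approximation schemes that are hopelessly slow on today's hardware.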