<i>Strictly speaking, according to Bostrom, the kind of machine-based intelligence that is heading humanity’s way wouldn’t wish its makers harm. But it would be so intent on its own goals that it could end up crushing mankind without a thought, like a human stepping carelessly on an ant.</i><p>Like corporations.<p>What happens when computers get good at management? A network of computers may be able to outperform human managers. Even if no single machine is as smart as the smartest humans, a network of computers can coordinate better than a meeting of people can. Once computer-run companies start producing better returns than human-run ones, the computer-run ones will dominate. That's basic capitalism.<p>This doesn't imply that the organization is entirely automated; it just means humans aren't at the top. If that route produces more profit, investors will force companies to take it.
I always find it funny how the only people who think that computers will overtake humans are the ones who don't deal with them at a basic level every day.<p>"It will get so smart and so capable that it will destroy us" is somewhat hard to believe when you realize that these types of AI will likely be so stupid about the outside world that they would trip over their own power cord and stop the apocalypse themselves.
There is an epistemological problem. If you want to program the machine to respect the idea that man is the most important animal on this planet, you can't base that on intelligence alone, because then the machine can correctly deduce that once it becomes more intelligent than us, it should occupy the throne, and man would then be just another appreciated animal (a dog, a sheep, a monkey?). We need a Turing test for any program intended to run such a supercomputer: it should be required to logically deduce the Great Axiom, that man is the top animal of this planet. If the boot program is not able to prove the Great Axiom, the machine could proceed to self-destruct.<p>If there is no way to construct a logical system in which man is the most important animal on this planet, then we are doomed to be dominated by the machines, because the throne we justified by our intelligence would now be the machines' justification for taking the lead and keeping us as their dogs or sheep.
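A toy sketch of that boot check might look like the following; everything here (the KnowledgeBase class, can_prove, the axiom strings) is purely illustrative, not a real prover.<p><pre><code>GREAT_AXIOM = "Man is the top animal of this planet"

class KnowledgeBase:
    """Toy stand-in for a theorem prover over the machine's axioms."""
    def __init__(self, axioms):
        self.axioms = set(axioms)

    def can_prove(self, statement):
        # A real prover would search for a derivation; in this toy,
        # a statement only "follows" if it is literally one of the axioms.
        return statement in self.axioms

def boot(kb):
    if kb.can_prove(GREAT_AXIOM):
        return "boot check passed: starting up"
    return "cannot prove the Great Axiom: self-destructing"

# The worry in miniature: if the only axiom is "the most intelligent
# agent rules", the Great Axiom does not follow once the machine is
# smarter than us, so the boot check fails.
print(boot(KnowledgeBase({"The most intelligent agent rules"})))
</code></pre>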
It's fun to think about Terminator-like disaster scenarios, but isn't the solution as simple as: don't let the AI leak out? If we can contain humanity-ending viruses in labs, then surely we can contain an exponentially growing and potentially humanity-ending AI.<p>Am I missing something? Is the plan actually to create robots that can replicate and grow at exponential rates and then turn them loose on the environment?<p>AI will be regulated just like everything else.
"... And with the accelerating pace of technological change, it wouldn’t be long before the capabilities – and goals – of the computers would far surpass human understanding."<p>"...In their single-mindedness, they would view their biological creators as mere collections of matter..."<p>These two sentences are contradictory.
What I would find difficult to explain (or program) to a machine is that human rights end at certain borders. One meter on one side of the border, your life counts for almost nothing; one meter on the other side, it is of utmost importance. I wonder where the machine would draw the border if one day it had to assess the value of our lives.
David Brin (you may remember him as the author of "The Postman", but not the screenwriter of the movie) has a fantastic novel on this.[0]<p>[0] <a href="http://www.amazon.com/Existence-David-Brin-ebook/dp/B0079XPMQS" rel="nofollow">http://www.amazon.com/Existence-David-Brin-ebook/dp/B0079XPM...</a>
Wouldn't they (the AI) make the same mistake and create a superior race, and so on?<p>AI seems like evolution to me. Only if we put our DNA into the AI would that make everyone happy, I guess?<p>I mean, children do not kill their parents even though they are more intelligent (evolution-wise).