In 2009, yes, it looks like a sure bet that AI will pass human intelligence by 2020.<p>In 2019, yes, it will look like a sure bet that AI will pass human intelligence by 2030.<p>Meanwhile computers will continue doing more and more important things by the boring expedient of taking lots of data and crunching on it. See: credit scores, which nobody considers "artificial intelligence" despite being a sophisticated algorithm making evidence-based predictions of the future from what could easily be confused with a character judgment.
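That credit-score point can be made concrete with a toy scorecard. Everything below is illustrative: the factor names, weights, and score range are invented for the sketch and do not come from any real bureau's model.

```python
# Toy credit scorecard: "character judgment" reduced to a weighted sum
# over evidence. All weights and ranges are invented for illustration.

def credit_score(on_time_ratio, utilization, years_history):
    """Return a score in a FICO-like 300-850 range from three inputs."""
    score = 300                            # floor of the (assumed) range
    score += 350 * on_time_ratio           # payment history: 0.0 to 1.0
    score += 150 * (1 - utilization)       # lower revolving utilization is better
    score += 5 * min(years_history, 10)    # length of history, capped at 10 years
    return round(score)
```

A real model would fit its weights from repayment data (logistic regression is the classic choice), but the shape is the same: evidence in, prediction out, no "intelligence" anywhere in sight.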
For some definition of human intelligence, yes. For some, no. Same answer as every other year since the invention of the computer. This is not an interesting question (or article) without some reason why your particular definition of human intelligence (the article doesn't give any definition at all) is the right one to consider.
I would feel more comfortable with such predictions if research were clearly walking the wooded path to AI. Moore's law works because<p><pre><code> 1. We already have processors
2. We have a metric to measure processor speed
</code></pre>
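Those two preconditions are what make the compounding work; the resulting curve can be sketched as a simple formula. The Intel 4004 baseline (1971, roughly 2,300 transistors) is real; the two-year doubling period is an assumption (commonly quoted figures range from 18 to 24 months).

```python
# Moore's law as a formula: transistor count doubles every `period` years.
# Baseline: Intel 4004 (1971), roughly 2,300 transistors.

def transistors(year, base_year=1971, base_count=2300, period=2.0):
    """Projected transistor count for a given year under pure doubling."""
    return base_count * 2 ** ((year - base_year) / period)
```

No such clean formula exists for intelligence: we have neither the artifact nor an agreed metric, which is exactly the asymmetry being argued here.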
Then the path we're hiking is one <i>mostly</i> of incremental improvements, with the occasional boulder of innovation.<p>Whereas the 'intelligent' agents we currently have do not seem particularly similar in <i>quality</i> to the intelligence we already know; the path from contemporary AI to strong AI is hardly a trail at all --- it's all boulders; innovation all the way up.<p>[The claim I make about 'quality' is vague; in part necessarily so. If I could pinpoint my discomfort with current methods, I could propose a new course of action based on a new metric. Nevertheless I feel that the highly mathematical bent of current machine learning techniques (proto-value RL, statistical relational methods, etc.) will lead to excellent answers, but does not point towards the flexibility of general intelligence. From the other camp, low-level connectionist methods have not, to my mind, offered significant results in problem solving.]
I think it's worth separating two statements that seem similar: "Computers will surpass human intelligence" and "Artificial Intelligence will surpass human intelligence".<p>The two statements aren't necessarily equivalent if we say AI involves an explicit understanding of what constitutes intelligence. AI doesn't seem to be making great progress on the understanding-intelligence front. However, raw computing might exceed human intelligence if we are able to, say, directly simulate a brain but increase the clock speed and maybe the number of neurons. But if it turns out that we create something highly intelligent <i>that way</i>, we will be creating something potentially dangerous, since we really will have no understanding of how benevolent it will or won't be.
No. Real AI is like fusion power: it will arrive in 40 years, for any definition of now. We are barely scratching the surface of our knowledge of the brain. We are orders of magnitude away from its processing power. We don't even have an agreed-upon definition of consciousness. Will we ever get there? Sure. But first we will see prostheses, like artificial vision, and mind-augmenting computer interfaces.
Certainly not.<p>A natural intelligence could eventually coalesce from emergent properties of H. sapiens' networked information-processing systems, but not within 10 years.
There is no way in hell AI will "surpass human intelligence" by 2020, using any reasonable interpretation of the sentence. I guess it makes a good headline though.
Kurzweil on 2029: Although computers routinely pass the Turing Test, controversy still persists over whether machines are as intelligent as humans in all areas.