Strong AI (the creation of a general intelligence capable of learning new things in the way humans can, as opposed to weak AI, which means programming machines to do specific tasks usually considered to require intelligence, such as playing chess or flying a helicopter) is about 20 years away... and it has been about 20 years away ever since the 1950s.

The traditional approaches to building a strong AI all involved some form of explicit knowledge representation: facts about the world would be stored in memory and then pieced together to reach new (hopefully intelligent) conclusions and to plan actions. Most people quickly realized that this approach doesn't scale: most of the knowledge humans have about the world is context-dependent and fuzzy. The newer approaches built around machine learning and probabilistic modeling are more promising, but almost all of them involve building a mathematical model of a specific problem rather than building a general learning machine.

Most leading AI researchers in academia aren't even trying to build a general-purpose intelligence anymore, focusing instead on domain-specific problems, and the field of AI has been much more successful over the last fifteen years or so as a result.
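
For concreteness, here is a toy sketch of the kind of explicit knowledge representation system described above: facts stored as symbolic triples, a hand-written inference rule, and a forward-chaining loop that derives new facts until nothing changes. The specific facts and the single transitivity rule are invented for illustration; real systems of that era were vastly larger, which is exactly where the scaling and brittleness problems showed up.

```python
# Toy knowledge base: facts as (subject, relation, object) triples.
# These example triples are made up for illustration.
facts = {
    ("Socrates", "is_a", "human"),
    ("human", "is_a", "mortal"),
}

def apply_rules(facts):
    """Single hand-coded rule: 'is_a' is transitive.
    Returns only facts not already known."""
    derived = set()
    for (a, r1, b) in facts:
        for (c, r2, d) in facts:
            if r1 == "is_a" and r2 == "is_a" and b == c:
                derived.add((a, "is_a", d))
    return derived - facts

# Forward-chain: keep applying rules until no new facts are produced.
while True:
    new_facts = apply_rules(facts)
    if not new_facts:
        break
    facts |= new_facts

print(("Socrates", "is_a", "mortal") in facts)  # True
```

The trouble the comment points at shows up as soon as you try to encode fuzzy, context-dependent knowledge ("birds fly, except penguins, except injured birds, except...") as crisp rules like this: the rule set explodes and the exceptions never end.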