I would put it toward the high end myself: 2100 or later.<p>We have achieved decent success at specialized AI: pattern recognition, chess players, text-to-speech, speech-to-text, flying planes, even driving cars (which is harder for AI than flying planes). I strongly suspect self-driving cars will be on the consumer market by 2030, and that specialized AIs will be doing all kinds of complex tasks all over the economy by then.<p>But from what I know, we have achieved almost <i>zero</i> success at the general-purpose AI or "strong AI" problem.<p>A lot of early AI optimism was fueled by the idea that general and specialized AI were more or less the same problem: if a computer can play chess well, then it should also be able to drive a car. We've found that this is most emphatically not the case. They appear to be entirely different problems. A great chess-playing AI is utterly worthless at any domain other than chess. A great car-driving AI is utterly worthless for anything other than driving a car (or perhaps another vehicle with similar characteristics). Specialized AIs are <i>very</i> domain-specific.<p>The only form of AI I know of that can work on a very wide array of problems without highly specialized domain tuning is genetic programming, and it requires serious gobs of processing power to do even trivial things.<p>Hubris aside, we really don't know what the brain is doing. We have some ideas, but they're fairly vague and early. There's a lot going on inside neurons, glial cells, and their genetic regulatory networks that we can't see... maybe even stuff at the quantum scale.<p>Then there's the whole cluster of issues around "consciousness," autonomy, self-orientation, self-direction, and homeostasis. This area is even less well understood than neural data processing.<p>So I would be floored if we see it before 2100. Personally, I think we'll have colonies on Mars long before we have strong human-scale AI.
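To make the genetic-programming point concrete, here's a minimal toy sketch of the idea (all names and parameters are illustrative, not any particular GP system): evolve random arithmetic expression trees toward a target function by mutation and selection. Even this trivial symbolic-regression task burns thousands of tree evaluations per run, which is the compute-hunger being described.

```python
import random
import operator

# Toy genetic-programming sketch: evolve expression trees built from
# +, -, *, the variable 'x', and small integer constants, so that the
# tree approximates a target function over sample points.
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def random_tree(depth=3):
    """Return a random expression: ('op', left, right), 'x', or an int."""
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.randint(-2, 2)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Recursively evaluate an expression tree at a given x."""
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, target, xs):
    """Total absolute error against the target function (lower is better)."""
    return sum(abs(evaluate(tree, x) - target(x)) for x in xs)

def mutate(tree):
    """Occasionally replace a node with a fresh random subtree."""
    if random.random() < 0.2:
        return random_tree(2)
    if not isinstance(tree, tuple):
        return tree
    op, left, right = tree
    return (op, mutate(left), mutate(right))

def evolve(target, generations=60, pop_size=80):
    """Truncation selection + mutation; note how many evaluations this costs."""
    xs = range(-5, 6)
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, target, xs))
        survivors = pop[:pop_size // 4]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=lambda t: fitness(t, target, xs))

best = evolve(lambda x: x * x + x)  # target: x^2 + x
print(fitness(best, lambda x: x * x + x, range(-5, 6)))
```

The same loop works on any problem you can score with a fitness function, which is what makes the technique domain-general; the price is that it searches blindly, so the evaluation count explodes on anything non-trivial.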