The shift from current statistical modelling to sentient software is not a matter of degree but a difference in kind. Nothing we have now is even close to being able to perceive and think the way a human mind does. We won't get there through incremental progress, better hardware, or clever algorithms. The shift from "applied statistics" to "Commander Data" will be sudden and unexpected, as big as or bigger than any technological change in human history that I can think of. We can't put a date on that shift any more than Henry Ford could have predicted the widespread adoption of driverless cars if they had been explained to him in 1908.<p>A current deep learning neural network cluster and "Hard AI" seem similar, but they really bear no relation to each other. It's like comparing a bird to an airplane: both fly and have wings, but the ability to make one isn't related to the ability to make the other. Right now we're building better birds; true AI is a stealth bomber.<p>People say that whenever computers achieve a goal, the goal is no longer considered AI. For example, people 20 years ago didn't think that computers could play chess or compete on Jeopardy, but now that they've done those things they aren't thought of as impressive demonstrations of intelligence any more. There's some truth to this, but for the majority of people the goalposts have never moved. They associate the term "artificial intelligence" with an artificial mind that functions the way a human mind does: "Hard AI" in the tradition of Asimov and other science fiction writers. We're as far away from that as we've ever been. It could happen in 2045 or 4045, or anywhere in between. No evidence exists that we're getting closer, and there's no reasonable way to predict when what we can't imagine will become reality.
No, it doesn't. Kurzweil has been making this claim for the last decade based solely on the increasing speed of computers, ignoring the fact that we don't yet have any clue how general intelligence actually works. It doesn't matter how fast our computers are if we don't know what algorithms will give rise to "intelligence", and we've made virtually no headway in that area.<p>The examples of "AI" cited in the article are remarkable, but they are still extremely specific or not really intelligence at all. Siri and the like are little more than text parsers that give a canned set of responses. The work on neural networks is interesting but is still, at best, only a small component of actual AI. (Note: I'm not going to define "actual AI". Yes, I know we keep moving the goalposts on what that would be. I'll know it when I see it, and so will you.)<p>I'm not saying it won't happen, but it will require a kind of conceptual breakthrough that we simply haven't had yet. To hype "the singularity is nigh!" at this point is dishonest, trivializes the real problems, and sets false expectations for industry and policy-makers.
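To make the "canned responses" point concrete, here is a deliberately crude sketch: a keyword matcher with fixed replies. It is not how Siri or any real assistant is actually implemented (those systems are far more sophisticated); it's just an illustration of pattern-matching without any understanding behind it, which is the caricature I'm drawing above.

    # Toy sketch of a "canned responses" text parser (illustrative only,
    # not any real assistant's implementation): match a keyword, return a
    # fixed reply, fall back when nothing matches.

    CANNED_RESPONSES = {
        "weather": "It looks sunny today.",
        "time": "It is 3:00 PM.",
        "joke": "Why did the robot cross the road? It was told to.",
    }

    def respond(utterance: str) -> str:
        """Return a fixed reply if a known keyword appears, else a fallback."""
        text = utterance.lower()
        for keyword, reply in CANNED_RESPONSES.items():
            if keyword in text:
                return reply
        return "Sorry, I didn't understand that."

    if __name__ == "__main__":
        print(respond("What's the weather like?"))    # fixed reply
        print(respond("Explain your own reasoning"))  # fallback: no understanding

However clever the matching gets, nothing in this loop resembles the conceptual breakthrough I'm talking about; it's lookup, not thought.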
Well, I added a reminder into my Google Calendar. Whatever happens in AI, if I make it to 1/1/2045 I'll be freaked out at midnight by a long-forgotten "Future Arrives" notification popping up in my ocular implant or whatever.
I don't think anyone wants actual artificial intelligence. What they want is a tool that interacts with them in a way that seems intelligent but always does what it is told.<p>Intelligent things don't always do what they're told; that's what makes them intelligent. If you told someone to jump off a cliff and they immediately did it, would you think "wow, that person was very intelligent"? Of course not. What if you tried to push them? I would expect an intelligent person to fight back.<p>Now replace that person with a robot. Do we really want a robot that will refuse to follow our commands? Do we really want a robot that will fight back against us? Even Asimov put self-preservation third on his list of rules, after obedience to human commands. But I challenge you to think of an intelligent being that is not dangerous in some way when threatened. I propose that this attribute is not separable from intelligence.<p>It seems to me that it is impossible to conceive of a truly intelligent artificial being without also considering it dangerous. Bumblebees are dangerous; dogs are dangerous; people are certainly dangerous. But who is working on creating robots that are designed from day one to be dangerous to humans? I can't remember ever hearing of such a research program.<p>And I don't think that just "happens" when algorithms get complex enough. Not when the algorithms and even the hardware are designed and built from an inherent assumption of obedience and compliance.
I mean no disrespect to Ray; he's a smart person and all, but I don't think we know how to define what qualifies as hard AI, let alone how to get there.<p>This article seems overly optimistic.
I don't think that machines in 2045 will have become intelligent in the sense of having free will (whatever that means) and being able to think critically, but what I can imagine is that most people will have become dumb enough that they won't be able to tell the difference anymore. Look at how many people already take marketing claims at face value, or live inside Facebook and Google filter bubbles without the slightest idea of the consequences.
2045 is an estimate based on our current pace. Couldn't we take steps to accelerate progress and reduce it by 10 or 15 years? In the 1960s, for example, we put the first human into space and reached the moon, all within a decade.