No. Stop reading reddit.com/r/futurology or that _awful_ article by waitbutwhy.
Sure, it's a possibility, but we're still making baby steps and tiny tools, pastiches of intelligence as opposed to genuine intelligence or consciousness.<p>People who ask questions such as this often don't consider that it remains eminently possible that AGI is impossible for us to build. Also remember that anything an AI can do in the future, a human + an AI can probably do better. Right now, at least, they're just tools we use and will remain so for the foreseeable future.
We don't even have a general outline of a theoretical approach to designing a general purpose intelligence, let alone implementing one. Until we do, any speculation about a time horizon for implementation is a pure guess. How are those guesses working out so far?<p>1965 - Herbert Simon predicts "Machines will be capable, within 20 years, of doing any work a man can do."<p>1993 - Vernor Vinge predicts super-intelligent AIs 'within 30 years'.<p>2011 - Ray Kurzweil predicts the singularity (enabled by super-intelligent AIs) will occur by 2045, 34 years after the prediction was made.<p>So the distance into the future before we achieve strong AI and hence the singularity is, according to its most optimistic proponents, receding by more than 1 year per year.<p>I am not in any way denying the achievability of strong AI. I do believe it will happen. I just don't think we currently have any idea how or when. If pushed to it, I'd say probably more than another 100 years from now, but I don't know how much more.
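To make the "receding by more than 1 year per year" claim concrete, here's a quick back-of-envelope check using the three predictions above (the dates are the ones quoted; nothing else is assumed):

```python
# Quick check of the "receding by more than 1 year per year" claim.
# (name, year prediction was made, predicted horizon in years) from the quotes above.
predictions = [
    ("Simon", 1965, 20),     # "within 20 years"
    ("Vinge", 1993, 30),     # "within 30 years"
    ("Kurzweil", 2011, 34),  # singularity by 2045
]

for (n1, y1, h1), (n2, y2, h2) in zip(predictions, predictions[1:]):
    recede_rate = ((y2 + h2) - (y1 + h1)) / (y2 - y1)
    print(f"{n1} -> {n2}: predicted arrival slips ~{recede_rate:.2f} years per calendar year")

# Simon -> Vinge:    (2023 - 1985) / (1993 - 1965) ≈ 1.36
# Vinge -> Kurzweil: (2045 - 2023) / (2011 - 1993) ≈ 1.22
# Both ratios exceed 1, i.e. the goalposts move faster than time passes.
```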
The key point is self-learning, or the ability of an AI to build an AI that's better, if only a little.<p>This is different from, say, AlphaGo playing against itself to train its neural network - we want AI 1.0 to <i>write</i> AI 2.0, not just tweak some coefficients in 1.0.<p>At the moment, all automatically generated code is <i>less</i> complex than the source code of the code generator itself. There can be more of it in terms of lines of code, but it's usually pretty repetitive.
If you read the research, there is a lot of incremental progress being made. Mainly with pixels - classifying them into objects, matching object locations to text, attempting to predict future pixel values, etc. But this stuff is very 'surface level', not even close to the way our brains effortlessly interpret light - classify objects, detect depth, account for lighting, complete objects we can't see, invoke the feeling of the material we are looking at, invoke past memories, detect threats, and so on - every single millisecond.<p>This doesn't even begin to get into the core of AGI, which is the 'thinking' component. Given this amazing mass of data, how do we then make the machine work towards its goals? Is this just a neural network? Is it a billion neural networks? Too many variables to tell.<p>And even then, if every action it takes is a reaction to the environment, does it then not have free will? Do we have free will? Is 'consciousness' somehow the key to free will?<p>But anyway, if you listen to Musk or Hawking, doomsday AI is just around the corner.
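For a sense of what that 'surface level' pixel classification looks like in practice, here's a minimal sketch using a pretrained torchvision model (the model choice and the input filename are illustrative assumptions, not anything from the research mentioned above):

```python
# Minimal sketch: label the objects in one image with a pretrained CNN.
# This is the 'surface level' step described above - it says nothing about
# depth, lighting, occlusion, memories, threats, or anything else a brain does.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

img = Image.open("photo.jpg").convert("RGB")   # hypothetical input image
batch = preprocess(img).unsqueeze(0)           # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top5 = probs.topk(5)
print(top5.indices.tolist(), top5.values.tolist())  # ImageNet class ids + confidences
```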
If anyone who thinks yes wants to bet $1000 I'll do 1:10 odds.<p><a href="https://longbets.org" rel="nofollow">https://longbets.org</a>
No. This [0] is 4-5 years old and I don't think much progress has been made in getting a computer to classify that image as 'funny' and explain why.
And if/when it could, I doubt we'd call it intelligent. And this is just computer vision, to say nothing of other branches of AI.<p>[0] <a href="http://karpathy.github.io/2012/10/22/state-of-computer-vision/" rel="nofollow">http://karpathy.github.io/2012/10/22/state-of-computer-visio...</a>
My own personal pet theory (guaranteed right or your money back): We won't have AGI until we have something that can dream.<p>Will we get close in 2017? No. Not if my pet theory is right, and not if it's wrong.
No, I don't think so. We'll inch closer, but I doubt we're anywhere near AGI on the path of software and algorithms running on traditional networked computing architectures.<p>That isn't to say the resources don't exist to create AGI. It's possible they were available a long time ago. If you were to ask some omnipotent future superintelligence for a way humans could have bootstrapped AGI in the year 2005 using the available technology of the day, it could probably come up with an answer. Maybe even further back than that; or maybe even present day wouldn't suffice. Who knows.<p>Trying to emulate biological architectures on silicon can be grossly inefficient, and may actually be harder from a design perspective. It is the attempt to formalize and adapt something created by an optimization process that spanned millions of years, a process that had zero regard for how easy its creation would be to understand or otherwise reverse engineer.<p>At the same time, algorithms vastly more efficient than the human brain's remain a possibility. They need not include the large amounts of evolutionary baggage that humans have.<p>Approaching AGI as a raw optimization problem may yield better results. However, not formally specifying or understanding the underlying mechanisms is a massive safety issue in the long run.<p>By the same token, ditching silicon entirely may be a vastly quicker path. Throwing ethics out the window and experimenting with large quantities of lab-grown neural tissue might be one way. Creating a synthetic biological computing substrate might be another. It's not hard to imagine something like copying human neural tissue's design, but using materials capable of latencies an order of magnitude lower, or significantly higher degrees of interconnectivity.<p>Looking at the problem strictly from the perspective of space, it's funny to think that we're unable to recreate the functionality of some tissue contained within less than one cubic foot, even though we have seemingly endless <i>acres</i> of computing power to do it with (and that's excluding the brains of the thousands of scientists and engineers working on AI). Even if you stacked up <i>just</i> the microprocessors in question, they would occupy a cubic volume far, far greater than a single human brain, with each chip containing billions of transistors and operating at latencies far lower than the brain's. Despite all this, the human brain requires far less energy.<p>The reason we don't have AGI yet is that it simply takes a lot of time and effort to invent, regardless of whether it's ultimately possible with today's technology. Of course, as other commenters have suggested, ruling out the possibility that the human brain somehow has seemingly magical quantum properties that render its recreation an impossibility (on silicon at least) may be unwise.
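A crude back-of-envelope version of that energy comparison (every figure below is a ballpark assumption of mine, included only to make the gap concrete):

```python
# Back-of-envelope: power draw of a modest compute cluster vs. one human brain.
# All numbers are order-of-magnitude assumptions for illustration only.
brain_power_w = 20           # human brain, roughly 20 watts
chip_power_w = 300           # one server-class accelerator (assumed)
num_chips = 10_000           # a modest cluster (assumed)

cluster_power_w = num_chips * chip_power_w
print(f"Cluster: {cluster_power_w / 1e6:.1f} MW  vs  brain: {brain_power_w} W")
print(f"That's roughly {cluster_power_w // brain_power_w:,}x the brain's power budget")
```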
The term AGI suffers from a greatly exacerbated version of the same problem that AI suffers from. The problem, mind you, has NOTHING to do with science or technology - it is purely a naming problem.<p>The term "Artificial Intelligence" is a contradiction - intelligence can NOT be artificial. Intelligence is the ability of a being to get what it wants. It is always organic, as it originates in desire.<p>Just stop calling it "Artificial Intelligence" and enjoy the wonderful progress we are making towards getting our machines to help us achieve what we want.<p>(To be clear, I'm not saying stop calling it "artificial". I'm saying stop calling it "intelligence", because it is not, and never will be. Using the word "intelligence" in the context of machine automation sets entirely unreasonable expectations and inhibits progress.)