And no one should be surprised by this. The recent advances in neural networks do nothing to address human-style symbolic reasoning. All we have is a much more powerful function approximator with drastically increased capacity (very deep networks with billions of parameters) and a scalable training scheme (SGD and its variants).<p>Such architectures work great on differentiable data such as images and audio, but the improvements on natural language tasks have been only incremental.<p>I used to think DeepMind's RL+DL might be the path to AGI, since it offers an elegant and complete framework. But it seems even DeepMind has had trouble getting it to work in more realistic scenarios, so maybe our modelling of intelligence is still hopelessly romantic.
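To make the "powerful function approximator plus SGD" point concrete, here is a minimal sketch (plain NumPy; the architecture and every hyperparameter are arbitrary choices for illustration) that fits a tiny network to sin(x) by gradient descent. It is curve fitting all the way down, with no symbolic reasoning anywhere:

    import numpy as np

    # Minimal sketch: one hidden layer fit to sin(x) by full-batch
    # gradient descent. All hyperparameters are illustrative.
    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
    y = np.sin(x)

    W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
    W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
    lr = 0.1

    for step in range(5000):
        h = np.tanh(x @ W1 + b1)      # hidden activations
        pred = h @ W2 + b2            # network output
        err = pred - y                # gradient of MSE wrt pred (up to 2x)
        # Backpropagation is just the chain rule, layer by layer:
        dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)
        dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print("MSE:", float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2)))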
<a href="https://en.wikiquote.org/wiki/Incorrect_predictions" rel="nofollow">https://en.wikiquote.org/wiki/Incorrect_predictions</a><p>"Hence, if it requires, say, a thousand years to fit for easy flight a bird which started with rudimentary wings, or ten thousand for one which started with no wings at all and had to sprout them ab initio, it might be assumed that the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanicians in from one million to ten million years--provided, of course, we can meanwhile eliminate such little drawbacks and embarrassments as the existing relation between weight and strength in inorganic materials. [Emphasis added.]
The New York Times, Oct 9, 1903, p. 6."<p>-----<p>A couple of the leading minds in AI say AGI is a long way away... but precisely because the universe likes to give us the finger, maybe it's right on the horizon. Maybe we'll look back at this in 10 years and laugh (if we're still here).
Behind every successful neural network is a human brain. Neural networks are a tool, an advanced tool for sure, but still just a tool. If we are looking for AGI, and assuming the brain is an AGI, then there are still many differences to resolve. For example, backpropagation has never been observed in nature, nor has gradient descent. So the core mechanisms of learning in nature have yet to reveal their secrets.
If you want AGI you need to give it a world to live in. The ecological component of perception is missing. Without full senses, a machine doesn't have a world to think generally about. It just has the narrow subdomain of inputs that it is able to process.<p>You could bet that AGI won't manifest until AI and robotics are properly fused. Cognition does not happen in a void. This image of a purely rational mind floating in an abyss is an outdated paradigm to which many in the AI community still cling. Instead, the body and environment become incorporated into the computation.
Tangential: This title is weird. As if no one but the top minds in AI knew this? This isn't big news to anyone who has done even a modicum of AI research.
It bothers me that the quotes in this article are all cut up, in some cases ending where a sentence clearly wasn't finished. It makes it hard to judge what they are really saying, and I wish the full interview would be published.
I wonder to what extent the data being fed to these models is the issue, or rather the systems that generate these datasets and how representative of reality they are. If we build an app that involves humans and its data is used in a model, to what extent do user experience and other factors warp reality?<p>Maybe our existing methods, given enough compute, are good enough to reach AGI, but our datasets are too low-fidelity and too unrepresentative of the problem space to get the desired results?
Not sure how I feel about this; for one, the Kurzweilian singularity, which could largely be fueled by the advent of AGI, is both exciting and scary. The upside could forever change humanity as we know it: greatly increased longevity, the potential to create <i>anything</i> via a universal assembler[0], bringing everything feasible within the laws of physics to reality. Knowledge is the only limiting factor stopping us from doing anything that is physically possible in this universe, and in that light AGI could be an enlightenment.<p>On the other hand, the ubiquity of that knowledge, once available, could let any maniac use it for the wrong purpose and wipe out humanity from their basement.<p>My feelings on the potential of AGI are therefore mixed. I for one have just found my particular niche in the workforce and am finally reaping the dividends of decades of hard work. Having AGI displace me and millions (or billions) of others is frightening and definitely keeps me on my toes.<p>Technology changes the world; my parents both worked for newspapers and talk endlessly about how unfortunate the demise of their industry after the advent of the internet has been. Luckily for them, they are both at retirement age, so their livelihood was not upset by the displacement.<p>If AGI does become a thing, it will be interesting to see how millennials and gen Z react to becoming irrelevant at what would have been the peak of their careers.<p>[0] <a href="https://en.wikipedia.org/wiki/Molecular_assembler" rel="nofollow">https://en.wikipedia.org/wiki/Molecular_assembler</a>
I have a small experiment to discover if AGI is already a solved puzzle.<p><a href="https://news.ycombinator.com/item?id=18720482" rel="nofollow">https://news.ycombinator.com/item?id=18720482</a>
Not to mention that we don't even know if general intelligence exists. All we know is <i>that</i> mental abilities tend to correlate, but not <i>why</i> they tend to correlate. And if you think about designing machines, in general, the idea of general intelligence is utterly ridiculous. Does a fast car have general speediness? Of course not, it has dozens or hundreds of discrete optimizations that all contribute in some degree to the car being faster.
Great interview with Hassabis from the BBC. It's meanderingly biographical, with insights about his path through internships, curiosity, startups, commitment, burnout, trusted team mates and eventual successes ...<p><a href="https://www.bbc.co.uk/sounds/play/p06qvj98" rel="nofollow">https://www.bbc.co.uk/sounds/play/p06qvj98</a>
Demis Hassabis's (true) statements here would be much more credible if DeepMind weren't currently making a mint by promoting AlphaZero to the masses as a "general purpose artificial intelligence system".<p>Don't believe me? Check out this series of marketing videos on YouTube by GM Matthew Sadler.<p>1. “Hi, I’m GM Matthew Sadler, and in this series of videos we’re taking a look at new games between <i>AlphaZero, DeepMind’s general purpose artificial intelligence system</i>, and Stockfish” (1)<p>2. “Hi, I’m GM Matthew Sadler, and welcome to this review of the World Championship match between Magnus Carlsen and Fabiano Caruana. And it’s a review with a difference, because we are taking a look at the games together with <i>AlphaZero, DeepMind’s general purpose artificial intelligence system</i>...” (2)<p>3. “Hi, I’m GM Matthew Sadler, and in this video we’ll be taking a look at a game between <i>AlphaZero, DeepMind’s general purpose artificial intelligence system</i>, and Stockfish” (3)<p>I could go on, but you get my point. Search YouTube for "Sadler DeepMind" and you'll see the rest. This is a script.<p>But wait, you say, that's just some random unaffiliated independent grandmaster who happens to be using an inaccurate script on his own, with no DeepMind connection at all! To which I would say: check out this same random GM being quoted directly on DeepMind's blog, waxing eloquent and rapturous about AlphaZero's incredible qualities. (4)<p>Let's be clear. I am in no way dismissing AlphaZero's truly remarkable abilities in chess and in other games like go and shogi. Nor do I have a problem with Demis Hassabis making headlines for stating the obvious about deep learning (that it's good at solving certain limited types of puzzles, but that we are a long way from AGI; why is this controversial?).<p>My problem is that Hassabis is speaking out of both sides of his mouth: increasing DeepMind/Google's value by many millions with his marketing message while acting like he's not doing that. It feels intellectually dishonest.<p>To fix this, all DeepMind needs to do is stop instructing its grandmaster mouthpieces to refer to AlphaZero as a "general purpose artificial intelligence system". Let's see how long that takes.<p>(1) <a href="https://www.youtube.com/watch?v=2-wFUdvKTVQ&t=0m10s" rel="nofollow">https://www.youtube.com/watch?v=2-wFUdvKTVQ&t=0m10s</a>
(2) <a href="https://www.youtube.com/watch?v=X4T0_IoGQCE&t=0m05s" rel="nofollow">https://www.youtube.com/watch?v=X4T0_IoGQCE&t=0m05s</a>
(3) <a href="https://www.youtube.com/watch?v=jS26Ct34YrQ&t=0m05s" rel="nofollow">https://www.youtube.com/watch?v=jS26Ct34YrQ&t=0m05s</a>
(4) <a href="https://deepmind.com/blog/alphazero-shedding-new-light-grand-games-chess-shogi-and-go/" rel="nofollow">https://deepmind.com/blog/alphazero-shedding-new-light-grand...</a>
If AGI (an artificial human mind with direct access to the computational power of classical computers and the whole Internet's worth of information) were possible, then we would probably already be living in the Travelers TV show.
As I always ask regarding this sort of story: why do we believe human intelligence is computable? The only answer I've heard is the materialist presupposition, plus sneers at any other metaphysic as "magic," which is not exactly a valid form of argument.<p>As an alternative, the human mind could be some sort of halting oracle. That's a well-defined entity in computer science which cannot be reduced to Turing computation, and thus cannot be any sort of AI, since we cannot create any form of computation more powerful than a Turing machine. How have we ruled out that possibility? As far as I can tell, we have not ruled it out, nor even tried.
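For anyone unfamiliar with the term: a halting oracle is a hypothetical device that correctly answers "does this program halt?" for every program. Turing's diagonal argument shows why no ordinary program can do this. A minimal sketch of the contradiction (illustrative Python, with `halts` standing in for the assumed oracle):

    # Assume, for contradiction, that halts(prog, arg) correctly
    # decides whether prog(arg) halts. No such function can be
    # written; that impossibility is the point of the sketch.
    def halts(prog, arg):
        raise NotImplementedError("assumed oracle")

    def diag(prog):
        # Do the opposite of whatever the oracle predicts for prog(prog).
        if halts(prog, prog):
            while True:   # loop forever if the oracle says "halts"
                pass
        return            # halt if the oracle says "loops"

    # diag(diag) halts if and only if the oracle says it doesn't:
    # a contradiction. So a halting oracle sits strictly above every
    # Turing machine, which is the sense in which such a mind could
    # never be reduced to any form of AI we can build.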
I'm not even convinced that a real AI is possible with conventional computer hardware or anything remotely similar to it. Even before considering software, I get the impression there is a fundamental limitation in the hardware.
I don't believe in the idea of AGI, for Dreyfusard reasons, but it's possible that it could emerge from something completely different from deep learning.<p>For all we know, Isabelle and Coq could be speeding down the road to consciousness while we're busy having a blast doing Computer Vision and pretending it's AI.
The computational power of the hardware is getting really close to what a human brain is capable of (on an exponential scale, anyway). If "nowhere close" means not in the next 5 years, then sure.<p>Over the medium term, I'm not sure AI researchers are the best people to ask. They are completely dependent on how much power the electrical engineers give them, and I doubt they have any deeper understanding of what a doubling or quadrupling of computing power will do than any programmer learning about neural networks.
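For scale, the usual back-of-envelope comparison runs something like the sketch below. Every figure in it is a rough, commonly cited order-of-magnitude estimate (synapse counts and effective firing rates especially are contested), not a measurement:

    # Back-of-envelope only; all numbers are order-of-magnitude
    # estimates from the literature, not measurements.
    synapses = 1e15    # high-end estimate of human synapse count
    rate_hz = 10       # assumed average signalling rate per synapse
    brain_ops = synapses * rate_hz   # ~1e16 "synaptic ops" per second

    gpu_flops = 1e14   # rough throughput of a top current accelerator
    print(f"brain/GPU ratio: ~{brain_ops / gpu_flops:.0f}x")  # ~100x

On an exponential scale, a factor of ~100 is only six or seven doublings away, which is roughly the sense of "really close" above.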
I take huge offense at this article. They claim that when it comes to AGI, Hinton and Hassabis “know what they are talking about.” Nothing could be further from the truth. These are people who have narrow expertise in <i>one</i> framework of AI. AGI does not yet exist, so they are not experts in it, in how long it will take, or in how it will work. A layman is just as qualified to speculate about AGI as these people, so I find it infinitely frustrating when condescending journalists talk down to the concerned layman. This irritates me because AI is a death sentence for humanity; it's an incredibly serious problem.<p>As I have stated before, AI is the end for us. To put it simply, AI brings the world into a highly unstable configuration where the only likely outcome is the relegation of humans and their way of life. This is because of the fundamental changes imposed on the economics of life by the existence of AI.<p>Many people say that automation leads to new jobs, not a loss of jobs. But automation has never encroached on the sacred territory of sentience. It is a totally different ball game. It is foolish to compare the automation of a traffic light to that of the brain itself. It is a completely new phenomenon and requires a new, from-the-ground-up assessment. Reaching for the cookie-cutter “automation creates new jobs” simply doesn't cut it.<p>The fact of the matter is that even if most of the world is able to harness AI to benefit our current way of life, at least one country won't. And the country that increases efficiency by displacing human input will win every encounter of every kind that it has with any other country. And the pattern of human displacement will ratchet forward uncontrollably, spreading across the whole face of the earth like a virus. And when humans are no longer necessary, they will no longer exist. Not in the way they do now. It's so important to remember that this is a watershed moment; humans have never dealt with anything like this.<p>AI could come about tomorrow. The core algorithm for intelligence is probably a lot simpler than is thought. The computing power needed to develop and run AI is probably much lower than it is thought to be. Just because DNNs are not good at this does not mean that something else won't come out of left field, whether from neurological research or pure AI research.<p>And as I have said before, the only way to ensure that human life continues as we know it is for AI to be banned, for all research and inquiries to be made illegal. Some point out that this is difficult to do, but like I said, there is no other way. I implore everyone who reads this to become involved in popular efforts to address the problem of AI.
If AGI is possible, it has already happened. If even AI experts put it 100-1000 years out, where some human monkeys banging on digital typewriters could eventually create it, then, in the vastness of space, time, military contracts, alien intelligences, and random Boltzmann brains, it must already have become reality multiple times.<p>If AGI is impossible, it will never happen. We already know that perfectly intelligent AGIs are not physically possible: per DeepMind's foundational theoretical framework, optimal compression is non-computable, and besides that, it is not possible for an inference machine to know all of its universe (unless it is bigger than the universe by at least 1 bit, AKA it <i>is</i> the universe).<p>That leaves being more intelligent than all of humanity. To accomplish that, by Shannon's own estimates, there is currently not enough information available in datasets and on the internet. Chinese efforts to artificially increase the intelligence of babies are still in their infancy too (the substrate of AGI is irrelevant for computationalism, unless it absolutely needs to run on the IBM 5100).<p>So until that time travels, we will have to make do with being smarter than/indistinguishable from a human on all economic tasks. We're already there for some subset of humanity; you may even be part of that subset, if you believed this post was written by a human.
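The non-computability claim refers to Kolmogorov complexity: K(s), the length of the shortest program that outputs s, which is what "optimal compression" means formally. Here is a minimal sketch of the standard Berry-paradox argument for why no program can compute it (illustrative Python; `K` stands in for the assumed function):

    from itertools import product

    # Assume, for contradiction, that K(s) computably returns the
    # length of the shortest program printing s. No such K exists.
    def K(s):
        raise NotImplementedError("assumed computable")

    def first_complex_string(n):
        # Enumerate binary strings in order; return the first one
        # whose shortest description is longer than n bits.
        length = 1
        while True:
            for bits in product("01", repeat=length):
                s = "".join(bits)
                if K(s) > n:
                    return s
            length += 1

    # first_complex_string plus the value of n describes its output
    # in only O(log n) bits, yet that output supposedly needs more
    # than n bits to describe. For large n that's a contradiction,
    # so K, and hence optimal compression, is not computable.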