This is a common sentiment, and pundits have been making similar remarks for decades. This author writes "Sixty years later, however, high-level reasoning and thought remain elusive."<p>That's the wrong problem with AI. The trouble with AI is that it still sucks at manipulation in unstructured situations and at "common sense". Common sense can usefully be defined as getting through the next 30 seconds of life without a major screwup, at or above the competence level of the average squirrel. This is why robots are so limited.<p>If we could build a decent squirrel brain, something "higher level" could give it tasks to do. That would be enough to handle many basic jobs in unstructured spaces, such as store stocking, janitorial work, and the like. It's not the "high level reasoning" that's the problem. It's the low-level stuff.<p>A squirrel has around 10 million neurons. Even if neurons are complicated [1], somebody ought to be able to build something with 10 million of them. Current hardware is easily up to the task.<p>The AI field is fundamentally missing something. I don't know what it is. I took a few shots at this problem back in the 1990s and got nowhere. Others have beaten their heads against the wall on it. The Rethink Robotics failure is a notable example.<p>The real surprise to me is how much progress has been made on vision without manipulation improving much. I'd expected that real-world object recognition would lead to much better manipulation, but it didn't. Even Amazon warehouse bin-picking isn't fully automated yet. Nor is phone manufacturing. Google had a big collection of robots trying to machine-learn basic manual tasks, and they failed at that.<p>That's the real problem.<p>[1] <a href="https://www.sciencedirect.com/science/article/pii/S0896627321005018" rel="nofollow">https://www.sciencedirect.com/science/article/pii/S089662732...</a>
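A rough sanity check on the "current hardware is easily up to the task" claim. Every number below (synapse count, update rate, ops per synaptic event) is an assumed round figure, not a measurement:
<pre><code># Back-of-envelope: is raw compute the obstacle to a squirrel-scale network?
# Every number below is an assumed round figure, not a measurement.
neurons = 10_000_000          # ~1e7 neurons, roughly squirrel-scale
synapses_per_neuron = 1_000   # assumed average fan-in
updates_per_second = 1_000    # assumed 1 ms update step
flops_per_synapse = 5         # assumed ops per synaptic update

flops = neurons * synapses_per_neuron * updates_per_second * flops_per_synapse
print(f"{flops:.0e} FLOP/s")  # 5e+13, i.e. ~50 TFLOP/s
</code></pre>
Even if these assumptions are off by an order of magnitude or two, the arithmetic lands within reach of a single modern GPU, which is consistent with the point that the missing ingredient is not raw compute.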
<i>>"However, the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems"</i><p>I'd go as far as saying that ML is now at a point where it's basically a mirror image of GOFAI with the exact same issues. The old stumbling block was that symbolic solutions worked well until you ran into an edge case, everyone recognized that having to program every edge case in makes no sense.<p>The modern ML problem is that reasoning based on data works fine, unless you run into an edge case, then the solution is to provide a training example to fix that edge case. Unlike with GOFAI apparently though people haven't noticed yet that this is the same old issue with one more level of indirection. When you get attacked in the forest by a guy in a clown costume with an axe you don't need to add that as a training input first before you make a run for it.<p>There's no agency, liveliness, autonomy or learning in a dynamic real-time way to any of the systems we have, they're for the most part just static, 'flat', machines. Honestly rather than thinking of the current systems as intelligent agents they're more like databases who happen to have natural language as a way to query them.
I randomly watched this yesterday <a href="https://www.youtube.com/watch?v=hXgqik6HXc0&ab_channel=LexFridman" rel="nofollow">https://www.youtube.com/watch?v=hXgqik6HXc0&ab_channel=LexFr...</a>, where Roger Penrose argues that we're missing something fundamental about consciousness and that his best bet is a structure called the microtubules. The talk reminded me of my own research into "AI" back in the 00s, and of how it's almost impossible to talk about AI since everybody has a different idea of what AI is. Yes, I know there's a pretty good classification (ANI, AGI, ASI), but most people don't know about it and think of AI as a machine that thinks like a conscious human.<p>I'd argue that we've solved, or at least partly solved, the part of AI that has to do with neural nets. We're still some way off utilizing the full potential of neural nets, since our hardware hasn't quite reached the capability of emulating even the simplest of complex animals. The thing is that neural nets are probably only part of intelligence, and creating bigger and more complex neural nets probably won't result in what most people consider AI, though I guess there's still a chance it might. We might have to wait several years to find out, since Moore's law is plateauing and neural chips are still in their infancy.<p>My best guess is that we'll solve "intelligence" long before we solve consciousness, and I think we're actually quite far along here. The best theory of intelligence I've read so far is Jeff Hawkins' Thousand Brains Theory, and I'm really looking forward to seeing how far it can go. The problem with this theory is that it's still missing the most critical component, the elusive mechanism that binds all the "intelligent" stuff together. I guess that might be hidden in the quantum nature of the microtubules, but to solve that we kind of need a new component in our theory of quantum mechanics and quantum effects.<p>Sorry if I went a bit off topic, but I just needed to get my thoughts since yesterday out of my head.
I like how the author emphasizes IA (Intelligence Augmentation) as a counterpoint to GOFAI. I'm less inspired by his vision of II (Intelligent Infrastructure), probably because I'm concerned about the degree of surveillance we already have to live with.
The question to ask is whether any algorithmic system is capable of exceeding the programming on which it is based. This question applies to every kind of system we have developed over the years.<p>The other point to make is that we already build systems that can exceed their programming: they are called children.
This is one of my favorites. So much of industrial AI is about replacing labor (usually cheaper, but lower quality). In a way, AGI is only slightly more ambitious. We should be setting higher goals for AI, including helping individuals be superhuman and helping organizations coordinate better.
I suspect biological brains have a pretty groundbreaking hack for the long-term/short-term learning problem, maybe involving sleep.<p>What I mean is that AIs, the way they are currently built, need to learn very slowly from short-term inputs or they overfit, whereas humans can learn something from a brief explanation and don't have overfitting problems.<p>I suspect this is solved by sleep, and I haven't seen AI with a similar mechanism.
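A small illustration of that tension, not a claim about what sleep actually does: repeatedly updating a network on one new example tends to disturb what it already knows, and the usual mitigation is to interleave old data (a rehearsal/replay trick). All model sizes, update counts, and hyperparameters below are arbitrary.
<pre><code># Fine-tuning on a single new example vs. interleaving a replay buffer.
# Illustrative sketch only; hyperparameters are arbitrary.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
classes = np.unique(y)

def fresh_net():
    # small MLP trained on the "old" task
    return MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                         random_state=0).fit(X, y)

x_new, y_new = X[:1] + 3.0, y[:1]   # one shifted, "explained" example

# Naive update: repeat the single new example on its own.
net = fresh_net()
for _ in range(200):
    net.partial_fit(x_new, y_new, classes=classes)
print("old-task accuracy after naive update:   ", net.score(X, y))

# Replayed update: mix the new example with a small buffer of old samples.
net = fresh_net()
buf = np.random.RandomState(0).choice(len(X), size=32, replace=False)
for _ in range(200):
    net.partial_fit(np.vstack([x_new, X[buf]]),
                    np.concatenate([y_new, y[buf]]),
                    classes=classes)
print("old-task accuracy after replayed update:", net.score(X, y))
</code></pre>
Whether the brain's fix looks anything like replay is exactly the open question; this only shows that current nets need some such mechanism bolted on from the outside.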
As a theory person who usually explains O notation with concrete numbers: the average degree of the neural network in our brain is approximately 7,000. Taking roughly 86 to 100 billion neurons, that alone is a graph with about 6x10^14 edges. Do AGI proponents really hope to be able to handle this? I am genuinely curious: is there some simplifying assumption which makes things faster?
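For reference, the arithmetic behind that edge count, plus a naive storage estimate; both inputs are the usual round figures, not measurements:
<pre><code># The edge count from the comment above, spelled out.
neurons = 86e9              # ~86 billion neurons (round figure)
synapses_per_neuron = 7e3   # ~7,000 connections per neuron (average degree)

edges = neurons * synapses_per_neuron
print(f"edges ~ {edges:.1e}")                  # ~6.0e+14

# Storing one 32-bit weight per edge, with zero structural overhead:
print(f"storage ~ {edges * 4 / 1e15:.1f} PB")  # ~2.4 petabytes
</code></pre>
Whether some simplification (sparsity, weight sharing, coarser units than individual synapses) lets you get away with far fewer parameters is precisely the question being asked.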
The hardware is now here but the algorithms are not. A crow knows not to land on sharp nails without ever having stepped on one. Current architectures lack this basic intuition. Something is missing, probably an internal world model or simulation.
Remember, growth is exponential: we won't recognize the next revolution because we'll still be dealing with the fallout of the previous one. Or the previous dozen.
I think this essay includes a specific prediction, that human-level AI is far away, which might be disproved this decade. If human-level AI is close, focusing on some other kind of AI is more likely to be a waste of time.