
Artificial Intelligence: The Revolution Hasn’t Happened Yet (2018)

74 points by okfine over 2 years ago

13 comments

Animats over 2 years ago
This is a common sentiment, and pundits have been making similar remarks for decades. The author writes "Sixty years later, however, high-level reasoning and thought remain elusive."

That's the wrong problem with AI. The trouble with AI is that it still sucks at manipulation in unstructured situations and at "common sense". Common sense can usefully be defined as getting through the next 30 seconds of life without a major screwup, at least at the competence level of the average squirrel. This is why robots are so limited.

If we could build a decent squirrel brain, something "higher level" could give it tasks to do. That would be enough to handle many basic jobs in unstructured spaces, such as store stocking, janitorial work, and the like. It's not the "high level reasoning" that's the problem. It's the low-level stuff.

A squirrel has around 10 million neurons. Even if neurons are complicated [1], somebody ought to be able to build something with 10 million of them. Current hardware is easily up to the task.

The AI field is fundamentally missing something. I don't know what it is. I took a few shots at this problem back in the 1990s and got nowhere. Others have beaten their heads against the wall on this. The Rethink Robotics failure is a notable example.

The real surprise to me is how much progress has been made on vision without manipulation improving much. I'd expected that real-world object recognition would lead to much better manipulation, but it didn't. Even Amazon warehouse bin-picking isn't fully automated yet. Nor is phone manufacturing. Google had a big collection of robots trying to machine-learn basic manual tasks, and they failed at that.

That's the real problem.

[1] https://www.sciencedirect.com/science/article/pii/S0896627321005018
Barrin92 over 2 years ago
> "However, the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems"

I'd go as far as saying that ML is now at a point where it's basically a mirror image of GOFAI with the exact same issues. The old stumbling block was that symbolic solutions worked well until you ran into an edge case, and everyone recognized that having to program every edge case in makes no sense.

The modern ML problem is that reasoning based on data works fine until you run into an edge case, and then the solution is to provide a training example to fix that edge case. Unlike with GOFAI, though, people apparently haven't noticed yet that this is the same old issue with one more level of indirection. When you get attacked in the forest by a guy in a clown costume with an axe, you don't need to add that as a training input first before you make a run for it.

There's no agency, liveliness, autonomy, or learning in a dynamic real-time way in any of the systems we have; they're for the most part just static, 'flat' machines. Honestly, rather than thinking of the current systems as intelligent agents, they're more like databases that happen to have natural language as a way to query them.
Galaxeblaffer over 2 years ago
Randomly watched this yesterday: https://www.youtube.com/watch?v=hXgqik6HXc0&ab_channel=LexFridman where Roger Penrose argues that we're missing something fundamental about consciousness, and his best bet is a structure called the microtubules. This talk reminded me of my own research into "AI" back in the 00's, and of how it's almost impossible to talk about AI since everybody has a different idea as to what AI is. Yes, I know there's a pretty good classification (ANI, AGI, ASI), but most people don't know about it and think of AI as a machine that thinks like a conscious human.

I'd argue that we've solved, or at least partly solved, the part of AI that has to do with neural nets. We're still some way off utilizing the full potential of neural nets, since our hardware hasn't quite reached the capability of emulating even the simplest of complex animals. The thing is that neural nets are probably only part of intelligence, and creating bigger and more complex neural nets probably won't result in what most people consider AI, though I guess there's still a chance it might. We might have to wait several years to find out, since Moore's law is plateauing and neural chips are still in their infancy.

My best guess is that we'll solve "intelligence" long before we solve consciousness, and I think we're actually quite far along here. The best theory of intelligence I've read so far is Jeff Hawkins' Thousand Brains Theory, and I'm really looking forward to seeing how far it can go. The problem with this theory is that it's still missing the most critical component, the elusive mechanism that binds all the "intelligent" stuff together. I guess that might be hidden in the quantum nature of the microtubules, but to solve that we kind of need a new component in our theory of quantum mechanics and quantum effects.

Sorry if I went a bit off topic, but I just needed to get my thoughts since yesterday out of my head.
jeffhwang over 2 years ago
I like how the author emphasizes IA (Intelligence Augmentation) as a counterpoint to GOFAI. I'm less inspired by his vision of II (Intelligent Infrastructure), probably because I'm concerned about the degree of surveillance we already have to live with.
oldandtired over 2 years ago
The question to ask is whether or not any algorithmic system is capable of exceeding the programming on which it is based. This question applies to every kind of system we have developed over the years.

The other point to make is that we already build systems that can exceed their programming, and they are called children.
evrydayhustling over 2 years ago
This is one of my favorites. So much of industrial AI is about replacing labor (usually cheaper, but lower quality). In a way, AGI is only slightly more ambitious. We should be setting higher goals for AI, including helping individuals be superhuman and helping organizations coordinate better.
machina_ex_deus over 2 years ago
I suspect biological brains have a pretty groundbreaking hack to solve the long-term/short-term learning problem, maybe involving sleep.

What I mean by that is that AIs, the way they are currently built, need to learn very slowly on short-term inputs or they overfit, whereas humans can learn something short term just from an explanation and don't have overfitting problems.

I suspect this is solved by sleep, and I haven't seen an AI with a similar mechanism.
sn41 over 2 years ago
As a theory person who usually explains O notation using concrete numbers: the average degree of the neural network in our brain is approximately 7,000. Taking approximately 86 billion to 100 billion neurons, this is itself a graph with roughly 6x10^14 edges. Do AGI proponents really hope to be able to do this? I am genuinely curious to know: is there some simplifying assumption which makes things faster?
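For reference, a quick back-of-the-envelope check of that arithmetic (a rough Python sketch; the 86 billion neuron and 7,000 average-degree figures are just the estimates quoted in the comment above, not measured data):

    # Rough edge count for a brain-scale connection graph, using the figures above
    neurons = 86e9       # assumed: ~86 billion neurons
    avg_degree = 7_000   # assumed: ~7,000 connections (synapses) per neuron

    directed_edges = neurons * avg_degree    # each synapse as a directed edge: ~6.0e14
    undirected_edges = directed_edges / 2    # counting each connection once:  ~3.0e14

    print(f"directed edges:   {directed_edges:.1e}")
    print(f"undirected edges: {undirected_edges:.1e}")

Either way the count lands in the 10^14 range, which is the scale the comment is pointing at.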
LarsDu88 over 2 years ago
The hardware is now here, but the algorithms are not. A crow knows not to land on sharp nails without ever having any experience of stepping on one. Current architectures lack this basic intuition. Something is missing, probably an internal world model or simulation.
rbanffy over 2 years ago
Remember growth is exponential - we won't recognize the next revolution because we'll still be dealing with the fallout of the previous one. Or previous dozen.
ryemigie over 2 years ago
From reading these comments, I will say that people should try out GitHub Copilot. A.I. is a bit further along than people might think.
guestbest over 2 years ago
We already have artificial intelligence. It's called children.
Ilverin over 2 years ago
I think this essay includes a specific prediction, that human-level AI is far away, which might be disproved this decade. If human-level AI is close, focusing on some other kind of AI is more likely to be a waste of time.