Escaping the Local Minimum: Where AI Has Been and Where It Needs to Go

96 points by kennethfriedman about 9 years ago

6 comments

Animats about 9 years ago
The big difference this time is that AI makes money. This matters. The first two AI booms never made it to profitability or produced much usable technology. This time, there are applications. As a result, far more people are involved. This AI boom is at least three orders of magnitude bigger than the first two.

I've had criticisms similar to the author's for years, but I thought of it as a hubris problem. In each AI boom, there was a good idea, which promoter types then blew up into Strong AI Real Soon Now. The arrogance level of the first two AI booms was way out of line with the results achieved. This time, it's more about making money, and much of the stuff actually works. Machine learning may hit a wall too, but it's useful.

The field isn't going to get trapped in a local minimum with neural nets, because the field is too big now. When AI was 20 people each at Stanford, MIT, and CMU, that could happen. With 50,000 people taking machine learning courses, there are enough people for some to focus on optimizing existing technologies without taking away from new ideas.

We're going to get automatic driving pretty soon. That's working now, with cars on the road from about a half dozen groups. Not much question about that.

The author rehashes symbolic systems and natural language understanding as areas of recommended work. This may or may not be correct; time will tell. He omits, though, the "common sense" problem. There's been work on common sense, but mostly as a symbolic or linguistic problem. Yet the systems that really need common sense are the ones that operate in the real, physical world. What happens next? What could go wrong? What if this is tried? That's what Google's self-driving car project is trying to deal with. Unfortunately, Google doesn't say much about how they do this, but that project is really working on common sense.

Incidentally, Danny Hillis did not found Symbolics. He founded Thinking Machines, which built the Connection Machine, a big SIMD (single instruction, multiple data) computer with 65,536 simple processors, each executing the same instruction on different data.
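As an aside on that architecture, here is a minimal NumPy sketch of the SIMD idea: a single instruction applied to many data elements at once, with the array standing in for the machine's bank of simple processors.

```python
import numpy as np

# SIMD in miniature: one instruction ("multiply by 2") is applied to
# every element of the array at once, rather than looping over the
# elements one at a time as a scalar processor would.
data = np.arange(8)    # eight values, one per notional processor
result = data * 2      # one vectorized instruction, many data elements
print(result)          # [ 0  2  4  6  8 10 12 14]
```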
visarga about 9 years ago
What I knew was that local minima are not that problematic when the state space is high-dimensional; in high-dimensional spaces it is mostly saddle points that appear. The 2D example is not realistic.

Here is a quote from a paper I sampled at random on arXiv:

> For an expected loss function of a deep nonlinear neural network, we prove the following statements under the independence assumption adopted from recent work: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) the property of saddle points differs for shallow networks (with three layers) and deeper networks (with more than three layers).

https://arxiv.org/abs/1605.07110v1

Also, if your critique is about the perceived shortcomings of backpropagation, keep in mind that reinforcement learning is also a kind of backpropagation of a reward, but the reward is much sparser and lower-dimensional. RL agents are thus somewhere in between supervised and unsupervised learning: they don't enjoy the full supervision of backpropagating on every example, but they still learn from an external critic.

The way forward is to implement reinforcement learning agents with memory and attention. These systems are neural Turing machines; they can compute in a sequence of steps.
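The distinction between a saddle point and a minimum comes down to the signs of the Hessian's eigenvalues at a critical point. A minimal sketch of that classification on the toy surface f(x, y) = x^2 - y^2 (an illustration only, not the paper's setting):

```python
import numpy as np

# f(x, y) = x**2 - y**2 has a critical point at the origin (the
# gradient vanishes there), but its Hessian eigenvalues have mixed
# signs, so the point is a saddle rather than a local minimum.
hessian = np.array([[2.0,  0.0],    # [d2f/dx2,  d2f/dxdy]
                    [0.0, -2.0]])   # [d2f/dydx, d2f/dy2 ]
eigenvalues = np.linalg.eigvalsh(hessian)
print(eigenvalues)  # [-2.  2.] -> mixed signs => saddle point
```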
nl about 9 years ago
Have you read Pedro Domingos's[1] "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World"?

You should. It directly addresses the idea of blending different fields of AI.

[1] http://homes.cs.washington.edu/~pedrod/
PaulHoule about 9 years ago
Rule-based systems have to get ergonomic; intelligent systems are by definition not stupid -- i.e., if the behavior of the system is unacceptable, you need to be able to patch it quickly, not add another 150 million training examples.

The #1 threat to AI right now is Kaggleism: the idea that training data is more valuable than talent, algorithms, and all that.
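A toy sketch of that "patch it quickly" property: an ordered rule list where prepending one override rule changes behavior immediately, with no retraining. All names here (route_to_billing and friends) are hypothetical.

```python
# Hypothetical rule engine: an ordered list of (predicate, action)
# pairs where the first matching rule wins.
rules = [
    (lambda msg: "refund" in msg, "route_to_billing"),
    (lambda msg: True,            "route_to_general"),  # fallback
]

def decide(msg):
    # Scan in order; earlier rules override later ones.
    for predicate, action in rules:
        if predicate(msg):
            return action

# The system misroutes "chargeback" messages; fix it by prepending
# one override rule instead of gathering more training data.
rules.insert(0, (lambda msg: "chargeback" in msg, "route_to_fraud_team"))

print(decide("customer disputes a chargeback"))  # route_to_fraud_team
print(decide("refund request"))                  # route_to_billing
```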
munawwar about 9 years ago
I'd say that speeding up learning times for image-related "AI" technology is important as well. Think of it: my nephew (1.5 years old) sees a cat just two times and is able to identify cats, whereas these neural nets need huge training sets plus performant machines and GPUs.
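One hedged sketch of what that "see a cat twice" regime could look like on top of a pretrained encoder: a nearest-centroid classifier built from two labeled examples per class. The embed function below is a random stand-in, not a real model.

```python
import numpy as np

# Few-shot classification sketch: given a pretrained embedding function
# (faked here with random vectors), two examples per class suffice to
# build a nearest-centroid classifier -- no training from scratch.
rng = np.random.default_rng(0)
embed = lambda image: rng.standard_normal(64)  # stand-in for a real encoder

support = {  # two labeled examples per class
    "cat": [embed("cat_1.jpg"), embed("cat_2.jpg")],
    "dog": [embed("dog_1.jpg"), embed("dog_2.jpg")],
}
centroids = {label: np.mean(examples, axis=0)
             for label, examples in support.items()}

def classify(image):
    # Assign the query to the class with the nearest centroid.
    query = embed(image)
    return min(centroids, key=lambda label: np.linalg.norm(query - centroids[label]))

print(classify("mystery.jpg"))
```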
sushirain about 9 years ago
Local minimum? In the next 10 years many deep learning applications will materialize: speech recognition reaching human level in production, autonomous trucks on highways, autonomous cars for consumers, AR, personal robot cleaners. The following 10 years will not fail us either.