Greedy, Brittle, Opaque, and Shallow: The Downsides to Deep Learning

312 points by monsieurpng, over 7 years ago

24 comments

Animats, over 7 years ago
Brittle and opaque are real problems. The brittleness seems to be associated with systems which put decision surfaces too close to points in some dimension. That's what makes strange classifier errors possible.[1] (This is also why using raw machine learning for automatic driving is a terrible idea. It will do well, until it does something totally wrong for no clear reason.)

Opacity comes from what you get after training - a big matrix of weights. Now what? "Deep Dream" was an attempt to visualize what a neural net used for image classification was doing, by generating high-scoring images from the net. That helped some. Not enough.

The ceiling for machine learning may be in sight, though. Each generation of AI goes through this. There's a new big idea, it works on some problems, enthusiasts are saying "strong AI real soon now", and then it hits a ceiling. We've been through that with search, the General Problem Solver, perceptrons, hill-climbing, and expert systems. Each hit a ceiling after a few years. (I went through Stanford just as the expert system boom hit its rather low ceiling. Not a happy time there.)

The difference this time is that machine learning works well enough to power major industries. So AI now has products, people, and money. The previous generations of AI never got beyond a few small research groups in and around major universities. With more people working on it, the time to the next big idea should be shorter.

[1] https://blog.openai.com/adversarial-example-research/
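
A rough sketch of the brittleness being described, in the spirit of the adversarial-example work linked in [1]. This is a hand-rolled toy, not the classifier from that research: a linear "model" with made-up weights, attacked with a fast-gradient-sign style perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier" in 50 dimensions: p(y=1 | x) = sigmoid(w . x).
# The weights are random illustration values, not a trained network.
d = 50
w = rng.normal(scale=0.5, size=d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pick a point the model scores confidently as class 1 (logit exactly 2).
x = rng.normal(size=d)
x += (2.0 - w @ x) / (w @ w) * w
print("original score:", sigmoid(w @ x))        # ~0.88

# Fast-gradient-sign style perturbation: nudge every coordinate by a small
# epsilon in the direction that lowers the score. For a linear model the
# gradient of the logit with respect to x is just w, so that direction is -sign(w).
epsilon = 0.15
x_adv = x - epsilon * np.sign(w)
print("perturbed score:", sigmoid(w @ x_adv))   # typically drops below 0.5

# Each coordinate moves by only 0.15, but the logit shifts by roughly
# epsilon * sum(|w_i|), enough to cross a nearby decision surface.
print("logit shift:", epsilon * np.abs(w).sum())
```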
ssivark, over 7 years ago
This article is a little too glib in my opinion, preferring citations and statements to substance and explanations.

For a more cutting and insightful critique, watch Ali Rahimi's short talk at NIPS 2017 (where he was presenting a paper that won the "Test of Time" award, for standing out in value a decade after publication). The standing ovation he received at the end indicates that his comments resonated with a significant fraction of the attendees.

https://www.youtube.com/watch?v=Qi1Yry33TQE&feature=youtu.be&t=660

Here's a teaser from the talk:

"How many of you have devised a deep neural net from scratch, architecture and all, and trained it from the ground up, and when it didn't work, felt bad about yourself, like you did something wrong? This happens to me about every three months, and let me tell you, I don't think it's you [...] I think it's gradient descent's fault. I'll illustrate..."

[Addendum]

Ben Recht and Ali Rahimi published an addendum to the talk, elaborating on the direction they envision: http://www.argmin.net/2017/12/11/alchemy-addendum/

Ali also has a post taking a stab at organizing some puzzling basic observations about deep learning, and motivating that with analogous historical progress in optics: http://www.argmin.net/2018/01/25/optics/

PS: The first 11 minutes, on the idea of using random features (the main idea in the research he presented), are also interesting.
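
For readers who haven't seen the random-features idea the talk opens with, a minimal sketch of the recipe (toy data and dimensions invented for illustration): project the input through a fixed random nonlinear map, then fit only a linear model on top, so training reduces to a convex least-squares problem instead of gradient descent on a deep net.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem (made-up data).
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=200)

# Random Fourier features: a fixed random projection followed by a cosine.
# Nothing below this point is learned except the linear weights on top.
n_features = 300
W = rng.normal(scale=2.0, size=(X.shape[1], n_features))   # random frequencies
b = rng.uniform(0, 2 * np.pi, size=n_features)              # random phases

def features(X):
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# "Training" is ridge regression on the random features: a convex problem
# with a closed-form solution, no backpropagation involved.
Z = features(X)
lam = 1e-3
weights = np.linalg.solve(Z.T @ Z + lam * np.eye(n_features), Z.T @ y)

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print("predictions:", features(X_test) @ weights)   # should roughly track the targets
print("targets:    ", np.sin(2 * X_test[:, 0]))
```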
YeGoblynQueenne, over 7 years ago
>> Google Translate is often almost as accurate as a human translator.

This is the kind of overhyped reporting of results highlighted by Douglas Hofstadter in his recent article about Google Translate:

"I've recently seen bar graphs made by technophiles that claim to represent the 'quality' of translations done by humans and by computers, and these graphs depict the latest translation engines as being within striking distance of human-level translation. To me, however, such quantification of the unquantifiable reeks of pseudoscience, or, if you prefer, of nerds trying to mathematize things whose intangible, subtle, artistic nature eludes them. To my mind, Google Translate's output today ranges all the way from excellent to grotesque, but I can't quantify my feelings about it."

https://www.theatlantic.com/technology/archive/2018/01/the-shallowness-of-google-translate/551570/

It's funny how the article above is claiming to speak of "the downsides" to deep learning, yet it spends a few paragraphs repeating the marketing pitch of Google, Amazon and Facebook, that their AI is now as good as humans in some tasks (limited as they may be) and all thanks to deep learning. To me that goes exactly counter to the article's main point and makes me wonder: what the hell is the author trying to say, and do they even know what they're talking about?
andbberger, over 7 years ago
I've said it before and I'll say it again. Machine learning is specifically *not* magic. It only works to the extent that we can build our own priors into the model.

A typical media story... deep learning really is great. It represents the first time we've really figured out how to do large-scale nonlinear regression. But it is certainly not a magic bullet. However, moderate headlines don't get as many hits as overhyped ones, so every day we get another ridiculous article spouting nonsense...

Very tiresome. The truth is pretty interesting, can we talk about that instead?

For instance, how and why deep learning works at ALL is very much an open question. Consider: we're taking an incredibly nonlinear, nonconvex optimization problem and optimizing in just about the dumbest way imaginable, first-order gradient descent. It is really amazing that this works as well as it does.

... Why does deeper work better than wider? It has been known for many years that a shallow net has equivalent expressivity to a deep one. So what gives? (There has actually been some interesting work towards answering this question in recent years by Sohl-Dickstein et al.)
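
To make the "first-order gradient descent on a nonconvex problem" point concrete, here is a minimal sketch on a toy one-dimensional loss with two minima (the function and step size are invented for illustration). The same update rule, started from different points, settles into different minima and has no way to tell a good basin from a bad one.

```python
# Toy nonconvex "loss" with a deeper minimum near x = -1.3 and a shallower
# one near x = +1.1 (made up for illustration).
def loss(x):
    return x**4 - 3 * x**2 + x

def grad(x):
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x0, lr=0.01, steps=500):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)   # plain first-order update, nothing else
    return x

for x0 in (-2.0, 0.5, 2.0):
    x_final = gradient_descent(x0)
    print(f"start {x0:+.1f} -> ends at {x_final:+.3f}, loss {loss(x_final):+.3f}")
```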
visarga, over 7 years ago
Humans, as opposed to deep learning, have embodiment. We can move about, push and prod, formulate ideas and test them in the world. A deep net can't do any of that in the supervised learning setting. The only way to do that is inside an RL agent. The problem is that any of our RL agents so far need to run inside a simulated environment, which is orders of magnitude less complex than reality. So they can't learn because they can't explore like us.

The solution would be to improve embodiment for neural nets and to equip RL agents with internal world simulators (a world model) they could use to plan ahead. So we need simulation both outside and inside agents. Neural nets by themselves are not even the complete answer. But what is missing is not necessarily a new algorithm or data representation, it's the whole world-agent complex.

Not to mention that a human alone is not much use - we need society and culture to unlock our potential. Before we knew the cause, we believed disease was caused by gods, and it took many deaths to unlock the mystery. We're not perfect either, we just sit on top of the previous generations. Another advantage we have: a built-in reward system that guides learning, created by evolution. We have to create this reward system for RL agents from scratch.

In some special cases, like board games, the board is a perfect simulation in itself (it happens to be trivial: just observe the rules and play against a replica of yourself). In that case RL agents can reach superhuman intelligence, but that is mostly on account of having a perfect playground to test ideas in.

In the future, simulation and RL will form the next step in AI. The current limiting factor is not the net, but the simulator. I think everyone here has noticed the blooming of game environments used for training RL agents from DeepMind, OpenAI, Atari, StarCraft, Dota 2, GTA, MuJoCo and others. It's a race to build the playground for the future intelligences.

Latest paper from DeepMind?

> IMPALA: Scalable Distributed DeepRL in DMLab-30. DMLab-30 is a collection of new levels designed using our open source RL environment DeepMind Lab. These environments enable any DeepRL researcher to test systems on a large spectrum of interesting tasks either individually or in a multi-task setting.

Before we build an AI, we need to build a world for that AI to be in.
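
A minimal sketch of the "internal world simulator" idea: an agent that chooses actions by rolling candidate action sequences forward in its own model of the environment and keeping the best one. The environment, dynamics, and reward here are toy inventions, and the internal model is just a copy of the true dynamics; in the proposal above it would be learned from experience.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy environment: the agent lives on a line and wants to reach position 5.
# Actions are velocity commands in [-1, 1]; reward is closeness to the goal.
GOAL = 5.0

def env_step(pos, action):
    new_pos = pos + float(np.clip(action, -1.0, 1.0))
    return new_pos, -abs(GOAL - new_pos)

# The agent's internal "world model". Here it simply mirrors the true
# dynamics; a real agent would have to learn this model and plan inside it.
def model_step(pos, action):
    return env_step(pos, action)

def plan(pos, horizon=5, n_candidates=200):
    """Random-shooting planner: imagine candidate action sequences in the
    internal model and return the first action of the best sequence."""
    best_action, best_return = 0.0, -np.inf
    for _ in range(n_candidates):
        actions = rng.uniform(-1, 1, size=horizon)
        p, total = pos, 0.0
        for a in actions:
            p, r = model_step(p, a)
            total += r
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action

pos = 0.0
for t in range(8):
    a = plan(pos)
    pos, r = env_step(pos, a)
    print(f"step {t}: action {a:+.2f} -> position {pos:+.2f}")
```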
inthewoods, over 7 years ago
There are groups and companies exploring probabilistic programming as an alternative to CNNs and other deep learning techniques. Gamalon (www.gamalon.com) combines human and machine learning to provide more accurate results while requiring much, much less training data. The models it generates are also auditable and human readable/editable, solving the "opaque" issue with deep learning techniques. Uber is exploring some of the same techniques with their Pyro framework.

Having said all of this, we're not arguing that CNNs have no place; in fact, you can view CNNs as just a different type of program as part of an overall probabilistic programming framework.

What we're seeing is that the requirement of large labeled training sets becomes a huge barrier as complexity scales, making understanding complex, multi-intent language challenging.

Disclosure: I work for Gamalon
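
For readers who haven't seen probabilistic programming, a hand-rolled sketch of the flavor (this is generic grid-approximation inference, not Gamalon's system or Pyro's actual API): the model is a couple of explicit, human-readable assumptions, and inference returns a full posterior from very little data.

```python
import numpy as np

# Model, stated as readable assumptions rather than a weight matrix:
#   conversion_rate ~ Uniform(0, 1)          (prior)
#   each trial      ~ Bernoulli(conversion_rate)
# Observed data: 9 conversions out of 30 trials (made-up numbers).
successes, trials = 9, 30

# Grid approximation of the posterior: crude, but exact enough to read off.
grid = np.linspace(0.001, 0.999, 999)
prior = np.ones_like(grid)
likelihood = grid**successes * (1 - grid)**(trials - successes)
posterior = prior * likelihood
posterior /= posterior.sum()

mean = (grid * posterior).sum()
cdf = np.cumsum(posterior)
lo, hi = grid[np.searchsorted(cdf, 0.05)], grid[np.searchsorted(cdf, 0.95)]
print(f"posterior mean conversion rate: {mean:.3f}")
print(f"90% credible interval: [{lo:.3f}, {hi:.3f}]")
# Every quantity above maps back to an assumption a human can audit or edit,
# which is the "not opaque" property being claimed for the approach.
```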
henrik_w, over 7 years ago
Another good article in a similar vein: "Asking the Right Questions About AI"

https://medium.com/@yonatanzunger/asking-the-right-questions-about-ai-7ed2d9820c48

HN discussion: https://news.ycombinator.com/item?id=16286676
polotics, over 7 years ago
I love the way the article trots out the line that Google Translate is almost as good as a human translator, in view of Hofstadter's recent article.
gaius, over 7 years ago
Calling it "deep learning" was the first mistake. It makes it sound a lot more profound than it really is. "Machine intuition" is the term I prefer.
csours, over 7 years ago
We had a presentation at work about AI and Deep Learning a while ago, and I asked what the test approach or test plan for deep learning is... the answer I got was a strange look.

If you have a self-driving car crash and the cause is "the algorithm", that's not going to be satisfying to customers, insurance agencies, or regulators. [I should be clear, the team giving the presentation does not work on SDCs.]
John_KZ, over 7 years ago
This article presents zero evidence or indications for its claims. One argument is persistence. Quoting: "We are born knowing there are causal relationships in the world, that wholes can be made of parts, and that the world consists of places and objects that persist in space and time." This is unsubstantiated and irrelevant, because AI that understands 3D is only now being developed. Also, children up to 3 years of age or so cannot understand the perspective of third parties. There might be some hard-wired rules in our brain, but that's not intelligence anyway.

The article has a point about something: conventional feed-forward convolutional neural networks can only model a very limited space in the grand scheme of things. Backpropagation is not perfect. There are other learning methods. Hell, sometimes there isn't a global minimum anyway. But saying that deep learning will stall in the near future is just wrong, and in my opinion the reasons why are evident to all those who follow the latest developments.
dsign, over 7 years ago
It's the combined enthusiasm of academics and entrepreneurs that is driving the current revolution in artificial intelligence. Said enthusiasm is punctuated by high-profile CEOs and investors making fiery remarks from time to time. But alas, we people are not enthusiastic forever about anything, and sooner or later our collective psyche will move on.

That doesn't mean that the technology revolution, and its AI component, will stop. We have had machine learning for a long time, doing its thing, as best we managed to make it work. Its impact on our collective discourse was more subdued, that's for sure, but it was there. Research and development never stopped. Even if some big-name universities funded it less and shut up about building GAI, there were still thousands of less shiny institutions and companies working on more tractable problems and building a foundation.

And, to be clear, I wish the current collective enthusiasm would last a little bit longer, because we still have a long way to go, and research grants and investment money flow better when the media is abuzz with the subject. In particular, we need to either move beyond or build upon matrix crunchers like deep learning. Better forms of AI, and eventually GAI, will need a little bit of innovation in chip-making and in computing architectures in general.
dmix, over 7 years ago
> Marcus believes that deep learning is not "a universal solvent, but one tool among many." And without new approaches, Marcus worries that AI is rushing toward a wall, beyond which lie all the problems that pattern recognition cannot solve.

The thing that fascinates me is that we've only just scratched the surface of what today's ML tech *can* solve. Let's worry about that wall when we get there...

In the meantime, let's not lose sight of today's potential in some misguided idealistic pursuit of perfectionism or "general artificial intelligence".

There are countless problems which current deep learning research, combined with some well-thought-out UI/UX, could solve today in a myriad of industries.

The 1990s software 'revolution' in industry and business was largely just the formalization/automation of paper-based processes into spreadsheets and simple databases, which then evolved into glorified CRUD/CMS software interfaces on desktops, then web/SaaS, and then got another massive boost with smartphones.

If such a simple translation of human processes into machines can achieve trillions of dollars in value, then there is no doubt machine learning can do the same for hundreds of thousands of other simple problem-sets which we haven't even considered. Plus, the desktop/smartphone/internet/etc. infrastructure is already in place for it to be plugged into.

This can only be negatively judged in the context of all significant steps forward in technology being oversold and misunderstood. But in practical real-world utility we're very far from fully utilizing what has been researched and accomplished today in a small set of markets. And the proliferation of this tech should be encouraged, promoted, and accurately communicated to the tech and business talent who can potentially use it, rather than downplayed because it fails to live up to some media hyperbole or sci-fi fantasies of where we *should* be in 2018.

The article mentions that AI/ML/data science has just become the "hottest new field" for young smart kids to join. That means we're just on the cusp of taking advantage of that technological evolution, and it's FAR too early to look at what's been accomplished today and be pessimistic about deep learning's potential.
mcguire, over 7 years ago
I would argue that 'greedy' applies to neural networks in a different sense: they seize on their first solution; they have no mechanism to re-evaluate a decision that is proving false.
sixtypoundhound, over 7 years ago
Meta thought: Isn't this basically describing the difference between human children and human adults? How effectively they can bridge known and unknown context?
fellellor, over 7 years ago
The article seems a bit outdated in that Hinton himself has argued for the need to go beyond backpropagation.
bobthechef, over 7 years ago
> “We are born knowing there are causal relationships in the world, that wholes can be made of parts, and that the world consists of places and objects that persist in space and time,” he says.

This is a little too overconfident in the claim that these are innate ideas or something a priori.
jokoon, over 7 years ago
I am still wondering: why is it not possible, or maybe just too hard, to take a trained neural network and reduce its number of neurons to get a simplified solution to a problem?

Is there some theoretical or mathematical analysis of neural networks?
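
This is in fact an active research direction, usually called network pruning or model compression (magnitude pruning, knowledge distillation, and related techniques). A minimal sketch of magnitude-based neuron pruning on a made-up weight matrix, just to show the mechanics and why it is not free:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this is one trained hidden layer: 256 inputs -> 64 hidden units.
# (Random values stand in for trained weights here.)
W = rng.normal(size=(256, 64))

# Score each hidden unit by the norm of its incoming weights and keep the
# strongest half, on the assumption that small-norm units contribute little.
scores = np.linalg.norm(W, axis=0)
keep = np.argsort(scores)[len(scores) // 2:]
W_pruned = W[:, keep]

print("before:", W.shape, "after:", W_pruned.shape)
# In practice the pruned network is then fine-tuned briefly to recover
# accuracy; pruning alone usually costs some performance, which is part of
# why "just shrink it" is harder than it sounds.
```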
vladislav, over 7 years ago
If the hypothesis this article is aiming to strike down is that there is currently a clear path to solving AGI, then it's simply fighting a potential misconception of the masses. The reality is that recent advances in AI, while not providing the full picture, are quite strong. Object recognition ten years ago, even at the level of 2011's ImageNet results, seemed impossible. The downsides to deep learning mentioned in the article are both clear side effects of the formulation and simultaneously being addressed by the community in various ways. For what it's worth, I'm with LeCun and Hinton on this one. The abstract human reasoning required for even high-level perceptual tasks is difficult for humans to consciously dissect, but it could easily be that it's actually relatively mundane when subjected to appropriate representations.
anonytrary, over 7 years ago
Keep in mind, folks, the following is also true:

> Greedy, Brittle, Opaque, and Shallow: The Downsides to Evolved Human Intelligence

I wonder if we can have our cake and eat it too, in the realm of machine learning and artificial intelligence.
skybrian, over 7 years ago
The problem I see with this article is that it starts with problems with deep learning today and extrapolates to say that researchers won't get past them:

"None of these shortcomings is likely to be solved soon."

There's no evidence for this, and I think it underestimates the large and well-funded machine learning community. Maybe they'll run into a brick wall, but who's to say a bunch of smart researchers won't figure out ways around today's limitations?

Without physical constraints or impossibility results, I think the only rational outside view is that this is unpredictable. For any of today's pressing problems with machine learning, maybe someone will post a new research paper tomorrow with a different approach that solves it. Or maybe not?
zby, over 7 years ago
If it is shallow, then maybe it should not be called deep? :)
marmaduke, over 7 years ago
Those four words convey potential problems in any model or method, which an engineer would want to address.
tanilama, over 7 years ago
Nothing new to see here. Just a recycling of Gary Marcus's recent criticism of deep learning.