
Why Deep Learning surprises me

69 points by thevivekpandey over 7 years ago

16 comments

AndrewKemendo over 7 years ago
Most comments here are to the tune of "Well DL is just a bunch of correlations and statistics, it's not really understanding anything."

Ok, well I can also say "humans are just a bunch of chemical reactions and electrical signals."

The beauty of DL is in its simplicity, and really we're at the very starting point of seeing it work with extremely sparse networks (compared to biological intelligence). The fact that it works so well with such limited data in narrow domains should be energizing.
jeremynixon over 7 years ago
The mysticism around ‘Emergence’ is just a modeling error where people only abstract in one way (say, down to cells) and don’t include something important like the interaction between cells in their reductionist model of the system. It’s like creating a graph without the edges. And so when those effects have manifest consequences at a higher level, it feels like they appeared as if by magic.
bitL over 7 years ago
I think the author is stretching arguments here a bit - DL is just partitioning space according to some pre-baked associations given to it during training; in this case it's more like a non-linear optimization where we want to end up with N-million-dimensional objects of a certain shape, obtained by optimizing some objective function that allows predicting similar associations. It doesn't have much to do with the actual innate quality of understanding. Maybe reinforcement learning together with deep learning (DRL) can move us towards such a quality, at least in a mechanical sense.
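The "non-linear optimization of an objective function" framing above can be sketched in miniature. The toy below (invented data and a plain logistic model; none of it comes from the thread or the article) shows the whole mechanism bitL describes: a decision surface is bent to match pre-labeled associations by following the gradient of a loss, with no notion of what the labels mean.

```python
import math
import random

random.seed(0)

# Two invented clusters in 2D, labeled 0 and 1 - the "pre-baked associations".
data = [((random.gauss(-1, 0.3), random.gauss(-1, 0.3)), 0) for _ in range(50)] + \
       [((random.gauss(1, 0.3), random.gauss(1, 0.3)), 1) for _ in range(50)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w = [0.0, 0.0]
b = 0.0
lr = 0.1

# Stochastic gradient descent on the cross-entropy objective:
# "training" is nothing but reshaping a boundary to fit the labels.
for epoch in range(200):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        g = p - y                      # gradient of the loss w.r.t. the logit
        w[0] -= lr * g * x1
        w[1] -= lr * g * x2
        b    -= lr * g

accuracy = sum((sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5) == (y == 1)
               for (x1, x2), y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The fit is near-perfect on this easy data, yet nothing in the loop refers to what the clusters *are* - which is exactly the gap between partitioning a space and understanding it.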
mathgenius over 7 years ago
I don't see why "understanding" should be equated with mere pattern recognition. Even using this word "recognition", what does that mean? It's another word like "understand". These algorithms are just pattern patterning. They don't even know they are patterning; that is a meta-property assigned in (or by) a context.
iamleppert over 7 years ago
He's making the age-old mistake of conflating the mapping of inputs to outputs with intelligence.

Intelligence is not defined by the ability to recognize letters, or play a game of Go.

Deep learning is a powerful tool for creating systems that can map inputs to outputs with very noisy, non-linear or complex data.

The mapping itself may be complex, but it's not going about solving problems like a person would. It has no idea what letters are, and how they fit into its world. It has no concept of self, cannot contemplate its own existence -- and perhaps most important of all, has no free will.

The moment we have some kind of deep learning or AI that has free will and can express *interest* in something other than what it has been trained on, I would say we are closer to unraveling the mystery of consciousness and human intellect.

Even babies and animals exhibit many forms of free will, decision making, and novel behavior that cannot be explained by our current observations of rote deep learning techniques.
deafcalculus over 7 years ago
Consciousness is likely just a whole bunch of computation.

I suspect "What is consciousness?" will go the way of "What is life?". We more or less understand the things that make up a bacterium. Those components aren't alive, although the bacterium is. So, it's just a matter of definition.
komaromy over 7 years ago
> Computers understand things as well as us, perhaps better.

If this were limited to chess, I would unquestionably agree.

If it were limited to image recognition, I would tentatively agree, although things like [0] make me cautious (admittedly, that was from March, and I'm not familiar with progress since then).

However, the author seems to be generalizing beyond those two domains, to the limits of human understanding. That seems like a couple-orders-of-magnitude leap too far to me. For example, I don't know of any autonomous system capable of understanding a short novel with simple language and writing a one-page summary of it, as might be expected of a human ten-year-old.

[0] https://twitter.com/Meaningness/status/846478348947668992
jcoffland over 7 years ago
> Now I find it hard to hold on to the belief that I understand what is "A" and what is "B", while computer can only compute.

Humans being surprised by the computer should not be the yardstick for AI. A trained neural net can recognize the letter "A" and differentiate it from things that are not "A", but it does not know that "A" is part of the Latin alphabet, or that there are other alphabets that form written human languages.

The day the computer spontaneously invents a new and usable alphabet without having been specifically designed to do so is the day I will concede we have hard AI. We have a long way to go. Until then it's just a bunch of hotdog/not-hotdog classifiers.
mannykannot over 7 years ago
I have always believed that understanding is an emergent property of physical processes that could be modeled computationally, but I do not think deep learning has yet demonstrated that it has achieved it. Some of the evidence comes from the ways it fails, such as 'recognizing' images that humans would understand are not what the systems think they are, and being confident in decisions that make no sense. These situations occur precisely because of a lack of understanding. I am open to the possibility that deep learning alone might achieve understanding, but I think it is more likely to succumb to the law of diminishing returns before it gets there.
inventtheday over 7 years ago
Actually, computers are conscious as well. Consciousness is simply a system of information that operates on a continuous sense/plan/act loop. You could argue that they are "less" conscious, but to say that they are unconscious is to make the same mistake people have made for years by saying that computers cannot "understand" anything.

Some people push back on this by saying computers have no sense of self. That's not true. Most computers do have internal state representations about themselves. Take a driverless car, for example. When it does localization, it's constantly referencing its own shape and speed and comparing them to the environment. That's a sense of self.

Whatever philosophical barriers we place between ourselves and machines (and animals/nature, for that matter), one thing is for certain: they will eventually be debunked.
AndrewOMartin over 7 years ago
Searle's Chinese Room argument was specifically aimed at people claiming an algorithm could understand something because of its behaviour.

It applies to deep learning as much as it does to Schank and Abelson's script-understanding system.
tomxor over 7 years ago
Perhaps I'm arguing semantics and this is what the author means, but... in your primitive mind, you are able to recognise something even if you have no idea what it is; you can learn to recognise.

The ability to introspect and analyse what makes that thing unique, or to understand what its purpose or origin is, has everything to do with being sentient.

We might not know exactly what being sentient is, but recognising an image is like lobotomising the brain down to just a visual cortex: it can match, but the other networks that work in the abstract are not there.
freech over 7 years ago
http://lesswrong.com/lw/iv/the_futility_of_emergence/
kumartanmay over 7 years ago
Isn't humanity's greatest power the ability to think and imagine? Even animals are conscious and understand their surroundings.
dna_polymerase over 7 years ago
> Given enough examples, computers can understand what is letter "A" and what is letter "B".

Meh.

Given enough examples, computers can now distinguish letter A from letter B, but distinguishing is not understanding. You could argue that after learning, the network just uses an instruction set, and from the outside that may leave the impression of understanding, but it really does not. Isn't that basically the Chinese Room thing?
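The "given enough examples" point is mechanically very simple. The sketch below (hypothetical 5x5 glyph bitmaps and a single perceptron, invented purely for illustration) learns to separate noisy As from Bs, while nothing in it resembles knowledge of what a letter is:

```python
import random

random.seed(1)

# Hypothetical 5x5 bitmaps of "A" and "B" (1 = ink, 0 = blank).
A = [0,1,1,1,0,
     1,0,0,0,1,
     1,1,1,1,1,
     1,0,0,0,1,
     1,0,0,0,1]
B = [1,1,1,1,0,
     1,0,0,0,1,
     1,1,1,1,0,
     1,0,0,0,1,
     1,1,1,1,0]

def noisy(glyph, flips=2):
    """Return a copy of the glyph with a few random pixels flipped."""
    g = glyph[:]
    for i in random.sample(range(25), flips):
        g[i] ^= 1
    return g

# "Enough examples": noisy renderings of each letter, labeled 0 (A) and 1 (B).
examples = [(noisy(A), 0) for _ in range(40)] + [(noisy(B), 1) for _ in range(40)]

# A single perceptron: one weight per pixel, updated on every mistake.
w = [0.0] * 25
bias = 0.0
for _ in range(20):
    for x, y in examples:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0
        err = y - pred
        w = [wi + err * xi for wi, xi in zip(w, x)]
        bias += err

correct = sum((1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0) == y
              for x, y in examples)
print(f"{correct} of {len(examples)} classified correctly")
```

The weights end up concentrated on the handful of pixels where the two glyphs differ - a learned instruction set for telling the shapes apart, with no concept of alphabets attached.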
singham over 7 years ago
Daniel Dennett has been saying this for quite a while.