
The Difference Between AI, Machine Learning, and Deep Learning

288 points by kercker almost 9 years ago

14 comments

ktta almost 9 years ago

Also, an interesting read related to this would be 'the AI effect' [0]. A lot of the stuff deep learning/machine learning is able to do today would be looked at as something that only 'true' AI (whatever the consensus on what that means is; I think of it as AGI) would be able to do.

But as soon as we are able to solve a problem that we think (feel?) only true AI (AGI) would be able to solve, as soon as we know *how* it was solved, it is no longer a mystery that warrants amazement, and we argue that it is not *real* intelligence, just like the link below states.

But just a decade ago, if you saw that a computer was able to recognize pictures *better* than humans, you would think there was something fishy going on, or that we had achieved true AI.

My opinion is that true AI is that which is comparable to human intelligence, in that it is sentient and/or capable of abstract thought, not necessarily able to hold a conversation, solve concrete mathematics problems, or dump out a story from neural networks.

[0] https://en.wikipedia.org/wiki/AI_effect
oliwawa almost 9 years ago

Deep learning is just a rebranding of "neural networks". When neural nets became unpopular in the 90s and early 2000s, people talked about "multilayer networks" (dropping the "neural"), since it wasn't really useful to think about this approach from the neuro perspective (it's such a cartoonish model of real neural networks anyway).

Now that very deep networks have become possible, and various graphical models and Bayesian approaches have also been folded under "deep learning" (for example, using back-propagation to learn complicated posterior distributions in variational Bayes), deep learning is not just about vanilla feedforward nets.
thallukrish almost 9 years ago

If I need to show 1 million cat images to train a neural net to see cats, I wonder how a human brain can figure out cats of any kind just by seeing one or two.

Is there something fundamental we are missing in how we go about building this deep learning stuff?
harlowja almost 9 years ago

Has anyone figured out how to debug and/or analyze deep ANNs?

That's the problem I always had: you may get them into a trained state, but good luck figuring out any reason *why* they ended up in that state (or even what that state really is).
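One starting point, then as now, is simply dumping a network's intermediate state. A minimal numpy sketch (the weights here are random placeholders, not a trained model), illustrating why this answers *what* the state is but not *why* training arrived at it:

```python
import numpy as np

# Toy two-layer network; "analyzing" its trained state mostly means
# inspecting weight matrices and per-layer activations. That shows
# what the parameters are, not the reason they took those values.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))          # input -> hidden weights
W2 = rng.normal(size=(3, 2))          # hidden -> output weights

def forward(x):
    h = np.tanh(x @ W1)               # hidden activations
    y = h @ W2                        # raw output
    return h, y

x = np.array([0.5, -1.0, 2.0, 0.1])
h, y = forward(x)
print("hidden activations:", h)       # one window into the net's state
print("weight norms:", [np.linalg.norm(W) for W in (W1, W2)])
```

Activation and weight-norm dumps like this are the raw material that later visualization and interpretability tooling builds on.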
paulsutter almost 9 years ago

I disagree. Machine learning is more general than AI and therefore should be the outer circle (unless you believe the meme "did a regression, called it AI").

Deep learning is, yes, a subcase of machine learning, and AI may be a circle within machine learning that encloses deep learning.

But truth be told, we will all regret the way we use the term AI now. Eventually the term AI will refer only to general intelligence (aka AGI).
threepipeproblm almost 9 years ago

I recently learned that Alexander Stepanov, designer of the C++ STL, has stated: "I think that object orientedness is almost as much of a hoax as Artificial Intelligence."
bpesquet almost 9 years ago

Great writeup about how deep learning came to be: http://www.andreykurenkov.com/writing/a-brief-history-of-neural-nets-and-deep-learning/ (posted here a few months ago).
stefanv almost 9 years ago

There is also machine intelligence: http://numenta.com/blog/machine-intelligence-machine-learning-deep-learning-artificial-intelligence.html
danvayn almost 9 years ago

Why the hell is the text not black? It's awful to read on mobile.
ragebol almost 9 years ago

For machine learning and deep learning based assistive functions like Google Now, Cortana, and Siri, we let the companies behind them gather a lot of our personal data into "big data", because then you can do all the statistics.

I think a (more) true AI would not need all this data and could just be a _personal_ assistant, not needing the big data of other people too. Maybe initially, once, and then it would be a good personal assistant, learning to know you like a human personal assistant would.

It would need some common sense, which is mostly lacking now.
bogomipz almost 9 years ago

I was confused by this sentence in the last paragraph:

"Deep Learning has enabled many practical applications of machine learning and by extension the overall field of AI."

Is it not the reverse: machine learning has enabled deep learning?

Can someone comment on how the two, machine learning and deep learning, relate? Is the relationship sequential, i.e. a data set from machine learning is the input for a neural network? The diagram had the effect of confusing me.
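The relationship is containment, not a pipeline: a deep net *is* a machine-learning model, trained by the same loop (predict, measure error, adjust parameters) as any other. A hedged numpy sketch, fitting XOR (a task no linear model can learn) with a single hidden layer; all sizes and learning rates are arbitrary choices for illustration:

```python
import numpy as np

# XOR labels are not linearly separable, so a plain linear model fails.
# Adding one hidden layer, i.e. going "deeper", makes the same generic
# machine-learning loop succeed.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8,));   b2 = 0.0

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                 # hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid output
    g = p - y                                # cross-entropy gradient at output
    # backpropagate the error through each layer
    gW2 = h.T @ g; gb2 = g.sum()
    gh = np.outer(g, W2) * (1 - h ** 2)      # tanh derivative
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
    W1 -= 0.1 * gW1; b1 -= 0.1 * gb1
    W2 -= 0.1 * gW2; b2 -= 0.1 * gb2

print((p > 0.5).astype(int))  # typically recovers [0 1 1 0] after training
```

Nothing here is a separate "machine learning stage" feeding the network; the diagram's nesting just says the network is one member of the machine-learning family.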
unabst almost 9 years ago

There's that one iconic image of neurons suspended in space with bolts of electricity flashing between them. We are told that's how our brains work.

We are then shown a diagram by a computer scientist. Instead of cells and thunder, we see circles and arrows. Then we are told there is an algorithm that simulates what the brain does. Voila, we have our artificial neural network. Not only do they look similar, they have two words in common: neural and network!

And so for most of us, there is only one logical conclusion: it does what our brain does, so once our computers have the power our brains do, we'll have the singularity!

Of course, now we know this is complete bullshit.

Basically, computer scientists just took the names and those initial abstractions and ran with them. They never looked back at the biology or how brains actually work. The result is a ton of great research, but they've strayed further and further from neuroscience and from humans. Which is obvious, because they're staring at code and computers all day, not brain meat. If there is one thing AlphaGo proved, it is that we've made a ton of progress in computation, but in a different direction. Just the fact that average people generally suck at Go should be enough to show that AlphaGo is not human (in many ways it's beyond human).

In the meantime, our neuroscientists have made progress also, except they've done it staring at the actual brain. And now it's to the point where our brains look nothing like that original image our computer scientists were inspired by.

Now there is this (Harvard research): https://www.youtube.com/watch?v=8YM7-Od9Wr8

And this (MIT research): https://www.ted.com/talks/sebastian_seung?language=en

With advancement comes new vocabulary, and the new word this time is connectome.

Some incredibly smart computer scientists will, again, take the term and all the diagrams, and start programming based on them. The result will be artificial connectomes, and they will blow our socks off. Now, don't get me wrong, I am not trying to be sarcastic here. This is what _should_ happen. And with every iteration, we will get closer to AGI.

It's just that whenever I see articles about machine learning and neural networks, I can't help but think of that classic artist's rendition of neurons firing, and how it's basically complete bullshit. Like Bohr's atom, it's an illustration based on a theory, not reality. Now we have wave function diagrams and connectomes. But as a physicist would tell you, anyone caught with a Bohr's atom is stuck in the 20th century.
vonnik almost 9 years ago

This intro is pretty good, but may be a bit high level for some readers of HN. It seems like it's written for non-technical readers. The idea is that AI, ML and DL are nested like Russian dolls, each subsequent one a subset of the other.

It might be better to explain *why* deep learning is so effective, in clear language.

Deep artificial neural networks are old, relatively simple combinations of math and code that are now able to produce accurate models through brute force because we have:

1) vastly more computational power, thanks to NVIDIA and distributed run-times;
2) much more data, and much larger labeled datasets, thanks to people like Fei-Fei Li at Stanford;
3) better algorithms, thanks to the work of Hinton, LeCun, Bengio, Ng, Schmidhuber and a raft of others.

*Deep* is a technical term. It refers to the number of layers through which data passes in a neural net; that is, the number of mathematical operations it is subjected to, and the number of times it is recombined with other inputs.

This recombination of inputs, moving deeper into the net, is the basis of feature hierarchy, which is another way of saying: we can cluster and classify data using more complex and abstract representations.

That clustering and classification is at the heart of what deep learning does. Another way to think about it is as *machine perception*. So the overarching narrative in AI is that we've moved from the symbolic rules engines of the chess victors to the interpretation of complex sensory information. For a long time, people would say AI could beat a 30-year-old at chess but couldn't beat a 3-year-old at basic tasks. That's no longer true. We can go around beating 3-year-olds at name games all day. AI mind, beginner's mind.

But it's important to note that deep learning actually refers to other algorithms besides artificial neural networks. Deep reinforcement learning is one example. RL is also an old set of algorithms, which are goal-oriented. RL helps *agents* choose the right *action* in a given *state* to maximize *rewards* from the environment. Basically, they learn the function that converts actions to rewards given certain conditions, and that function is non-differentiable; that is, you can't learn it simply by backpropagating error, the way neural nets do.

Deep RL is important because the most amazing algorithms, like AlphaGo, combine deep neural nets (recognize the state of the Go board) with RL (pick the move most likely to succeed) and other components like Monte Carlo decision trees (limit the state space we explore).

So we're moving beyond perception to algorithms that can make strategic decisions in increasingly complex environments.

We've written more about this, and implemented many of these algorithms:

http://deeplearning4j.org/ai-machinelearning-deeplearning.html
http://deeplearning4j.org/reinforcementlearning.html
http://github.com/deeplearning4j/rl4j
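The trial-and-error, gradient-free flavor of RL described above shows up most plainly in tabular Q-learning, the pre-deep ancestor of the value estimation that deep RL scales up. A hedged sketch on a hypothetical five-state corridor (the environment, rewards, and constants are invented for illustration):

```python
import numpy as np

# Hypothetical corridor: states 0..4, agent starts at 0, reward 1 on
# reaching state 4. Q-learning estimates the value of each
# (state, action) pair from sampled rewards alone; no gradient of the
# environment is ever taken, only repeated tabular updates.
n_states, n_actions = 5, 2                  # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)
alpha, gamma, eps = 0.5, 0.9, 0.2           # step size, discount, exploration

for _ in range(300):                        # episodes
    s = 0
    while s != 4:
        # epsilon-greedy: mostly exploit the current Q estimates
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == 4 else 0.0
        # temporal-difference update toward reward + discounted lookahead
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1)[:4])                 # greedy policy for states 0..3
```

Deep RL swaps the table for a neural net that generalizes across states too numerous to enumerate, which is the combination AlphaGo relies on.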
Azuolas almost 9 years ago

This is a very interesting read. Thank you.