>For more than 30 years, Geoffrey Hinton hovered at the edges of artificial intelligence research, an outsider clinging to a simple proposition: that computers could think like humans do—using intuition rather than rules.<p>This is so disrespectful to the thousands of researchers who have been studying machine learning since well before 2012. It was well established in the 80s/90s that the future of teaching computers lay in statistics, not rules, by researchers like Michael Jordan (<a href="https://en.wikipedia.org/wiki/Michael_I._Jordan" rel="nofollow">https://en.wikipedia.org/wiki/Michael_I._Jordan</a>) and his students.<p>It was even ingrained in popular culture: neural networks are how the AI brain worked in Terminator 2, in 1991! <a href="https://www.youtube.com/watch?v=xcgVztdMrX4" rel="nofollow">https://www.youtube.com/watch?v=xcgVztdMrX4</a><p>edit: I don't want to downplay Hinton's accomplishments; I've been lucky to have been surrounded by and motivated by his work since I started learning machine learning. I did my master's research on neural networks that were partly inspired by his work, and it was a deep networks paper he presented at a NIPS 2006 workshop that got me really excited to stay in machine learning as I was starting my career.
The article is misleading if not false. Neural nets were hot in academic AI research 30 years ago (1988). The original Perceptron had fallen out of favor in part because of arguments that it could not implement an exclusive or (XOR), made in Minsky and Papert's book Perceptrons.<p><a href="https://en.wikipedia.org/wiki/Perceptrons_(book)" rel="nofollow">https://en.wikipedia.org/wiki/Perceptrons_(book)</a><p>Neural nets fell out of favor in the 1970s but came back and became hot in the early 1980s with work by John Hopfield and others that addressed the objections.<p><a href="https://en.wikipedia.org/wiki/John_Hopfield" rel="nofollow">https://en.wikipedia.org/wiki/John_Hopfield</a><p>Practical and commercial successes were limited in the 1980s and 1990s, which led to a reasonable decline in interest in the method. There were some commercial successes, such as HNC Software, which used neural nets for credit scoring and was acquired by Fair Isaac Corporation (FICO).<p><a href="https://en.wikipedia.org/wiki/Robert_Hecht-Nielsen" rel="nofollow">https://en.wikipedia.org/wiki/Robert_Hecht-Nielsen</a><p>I turned down a job offer from HNC in late 1992, and neural nets were still clearly hot at that time.<p>Some people continued to use neural nets with limited success in the late 1990s and 2000s. I saw some successes using neural nets to locate faces in images, for example. Mostly they failed.<p>AI research is very faddish, with periods of extreme optimism about a technique followed by disillusionment. One may wonder how much of the current Machine Learning/Deep Learning hype will prove exaggerated.<p>Also, traditional Hidden Markov Model (HMM) speech recognition is not rule-based at all. It uses an extremely complex, maximum-likelihood-based statistical model of speech.
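The XOR objection above is easy to demonstrate numerically. The sketch below (my own toy illustration, not code from any of the historical work; all weights and sizes are made up) shows that the perceptron learning rule never classifies all four XOR cases correctly, while a hand-built network with one hidden layer, XOR = AND(OR, NAND), does:

```python
import numpy as np

step = lambda z: (z > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR truth table

# --- Single-layer perceptron: the Minsky/Papert objection ---
w, b = np.zeros(2), 0.0
best = 0
for epoch in range(100):
    for xi, yi in zip(X, y):
        pred = step(w @ xi + b)
        w += (yi - pred) * xi   # perceptron learning rule
        b += (yi - pred)
    best = max(best, int((step(X @ w + b) == y).sum()))
# XOR is not linearly separable: at most 3 of 4 cases ever correct.
assert best <= 3

# --- One hidden layer fixes it: XOR = AND(OR, NAND) ---
W1 = np.array([[1.0, 1.0], [-1.0, -1.0]])   # row 0: OR unit, row 1: NAND unit
b1 = np.array([-0.5, 1.5])
W2 = np.array([1.0, 1.0])                   # output unit computes AND
b2 = -1.5
hidden = step(X @ W1.T + b1)
out = step(hidden @ W2 + b2)
print(out)  # [0 1 1 0]
```

The first half is exactly the Perceptrons argument; the second half is why the objection dissolved once multi-layer training (backpropagation) became practical.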
Most other scientists dismissed neural networks?
Is there some history I am unaware of? That doesn't seem true. Did the article want to push the idea of the lone rogue thinker a bit too much?
I wonder when they'll write this about Michael Jordan. "History doesn't repeat itself, but it often rhymes"<p>Probably they'll never mention Friedman and Breiman, which seems pretty unfair considering their gizmos have arguably had a bigger impact in "actual machine learning gizmos deployed..."
I see this article as an opportunity to know a little bit more about Hinton's personal life and personality. It's not a neural nets article, so we shouldn't dig too deep into the controversy about who invented what and whether they were alone or not.
This is an odd story that seems to gloss over the downsides of neural networks. The computational power needed to build some of these models is enough to explain the slow uptake, at least in large part. I would be interested to see just how many multiplications go into a typical model nowadays; in particular, into training one.<p>But that still skirts the big issue, which is generalization. We are moving, it seems, to transfer learning. The danger is that we don't seem to have a good theory of why it works. At a practitioner level, I don't think this is as much of a problem. For research, though, it is pretty shaky.<p>I think there is a strong chance this remains the future for a while. And I am a layman in this field, at best. But this story presupposes that the past was wrong for not being like the present. That is a tough bar.
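The "how many multiplications" question has an easy back-of-envelope answer for a plain MLP: each dense layer does in_features × out_features multiplies per example. A rough sketch (the layer sizes, dataset size, and the "training ≈ 3× forward" rule of thumb are my own illustrative assumptions, not figures from the article):

```python
# Rough multiply count for one forward pass of a toy MLP.
# Layer sizes here are invented for illustration only.
layers = [(784, 512), (512, 512), (512, 10)]  # (in_features, out_features)

mults_per_example = sum(n_in * n_out for n_in, n_out in layers)
print(mults_per_example)  # 668672 multiplies per example

# Training roughly triples the per-example cost (forward pass plus
# two matrix products in the backward pass), times examples, times epochs.
examples, epochs = 60_000, 10
training_mults = 3 * mults_per_example * examples * epochs
print(f"{training_mults:.2e}")  # 1.20e+12
```

Even this tiny network lands in the trillions of multiplies for a modest training run, which supports the commenter's point about compute being the bottleneck for uptake.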
“I cannot imagine how a woman with children can have an academic career. ...” This is the real truth and reality for anyone actively parenting while trying to deeply understand and research anything. Grateful that this article chose to include the quote.
There weren't many, but there was a strong contingent of neural network researchers going strong since the time I was in high school in the early 80s. Jerome Feldman (University of Rochester) was a neighbor. James McClelland (Department of Psychology) was a mentor of mine in the mid-80s. This field was far from ignored. We used different names (connectionism, backpropagation) and most importantly we had computers that were tens of thousands of times less capable than what is available today.
His model for contrastive divergence learning, pre-2000 IIRC, was what really set the base for his breakthrough in the mid-2000s. I think it took him some time to make the jump from contrastive divergence learning to RBMs that learnt good priors for deeper layers...
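For context, contrastive divergence (CD-1) trains an RBM by contrasting hidden-unit statistics on the data with statistics after a single Gibbs step. A minimal sketch of one CD-1 update (sizes, learning rate, and the single training pattern are arbitrary choices of mine, not from Hinton's papers):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_visible, n_hidden, lr = 6, 4, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
a = np.zeros(n_visible)   # visible biases
b = np.zeros(n_hidden)    # hidden biases

def cd1_step(v0):
    """One CD-1 weight update on a single binary visible vector."""
    global W, a, b
    ph0 = sigmoid(v0 @ W + b)                        # positive phase
    h0 = (rng.random(n_hidden) < ph0).astype(float)  # sample hiddens
    pv1 = sigmoid(h0 @ W.T + a)                      # reconstruct visibles
    ph1 = sigmoid(pv1 @ W + b)                       # hiddens after one Gibbs step
    # Contrast data-driven statistics with one-step reconstruction statistics
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    a += lr * (v0 - pv1)
    b += lr * (ph0 - ph1)
    return np.mean((v0 - pv1) ** 2)                  # reconstruction error

# Train on a single repeated pattern; reconstruction error should fall.
v = np.array([1., 1., 0., 0., 1., 0.])
errs = [cd1_step(v) for _ in range(500)]
print(errs[0] > errs[-1])  # True: the RBM learns to reconstruct the data
```

Stacking RBMs trained this way, each learning features of the layer below, is the greedy layer-wise pretraining idea the comment refers to.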
He was a well-known name in the 80s and came back again with RBMs in the 00s: <a href="https://www.youtube.com/watch?v=AyzOUbkUf3M" rel="nofollow">https://www.youtube.com/watch?v=AyzOUbkUf3M</a>. He and Sejnowski are some of the few names I remember from when I took an NN class a long time ago. He was insistent on working on it when many others saw it as a peripheral curiosity to their career.<p>What's with everyone here?
There is something in Canada, because the best book (that I tried to understand) about neural networks back in the 90s was "Neural Networks: A Comprehensive Foundation" by Simon Haykin [1]<p>[1] <a href="https://en.wikipedia.org/wiki/Simon_Haykin" rel="nofollow">https://en.wikipedia.org/wiki/Simon_Haykin</a>
This was a very interesting article, but as a juggler, the most interesting thing to me was how he learned to juggle grapes with his mouth. I need to run to the store to pick up some grapes now!
<p><pre><code> "an outsider clinging to a simple proposition: that computers could think like humans do—using intuition rather than rules. "
</code></pre>
I stopped reading right there.
Despite the hype and success, Machine/Deep Learning has its own limitations, which is a generally admitted fact.<p>At a fundamental level: are our brains actually comparable to how ML works (beyond some basic analogies)? Do we have a statistical engine running inside our heads, needing tremendous "CPU power" to do something remotely useful/accurate?<p>I'd say no, and that conceptual mismatch indicates that the next big iteration on AI will be something more like what D. Hofstadter advocates/researched.<p>(Using ML as a sidekick, why not. No need to throw out the current progress.)