
Coding the History of Deep Learning

256 points by saip over 7 years ago

12 comments

iluvmylife over 7 years ago
If you want more nuanced research on the history of deep learning in neural networks, here is an excellent historical survey paper: https://arxiv.org/abs/1404.7828
sun_n_surf over 7 years ago
Least squares, gradient descent, and linear regression separately? I get that he wants to point out the profundity and universality of the ideas encompassed in those techniques (and models; least squares and gradient descent are rightly thought of as numerical techniques, whereas linear regression is a, well, model), but that is like saying that arithmetic is fundamental to deep learning. Essentially, this "history" only takes you to 1947 and Minsky.
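To make the technique-vs-model distinction concrete, here is a minimal sketch (not from the article; the toy data and learning rate are my own) that fits the same linear model two ways: once with the closed-form least-squares solution and once with plain gradient descent.

    import numpy as np

    # Toy data: y = 2x + 1 plus a little noise
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    y = 2 * x + 1 + rng.normal(scale=0.5, size=x.shape)

    # The model is linear regression; the design matrix adds an intercept column
    X = np.column_stack([np.ones_like(x), x])

    # Technique 1: closed-form least squares
    theta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Technique 2: gradient descent on the same squared-error objective
    theta_gd = np.zeros(2)
    lr = 0.01
    for _ in range(5000):
        grad = 2 * X.T @ (X @ theta_gd - y) / len(y)
        theta_gd -= lr * grad

    print(theta_ls, theta_gd)  # both land near [1, 2]

Same model, two different numerical techniques for fitting it, which is roughly the distinction the comment is drawing.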
bluetwo over 7 years ago
I know it is popular to say that these techniques are based on how the brain works, but when I read about them, I have my doubts.

Can anyone take a real world example of human behavior and show me how it relates to how these techniques predict humans will behave?

I love the field but feel like there is a temptation to take giant leaps not supported by other observations.
houqp over 7 years ago
Would love to see future posts mention several of the main contributors to deep learning, such as Geoffrey Hinton, often called the "father" of deep learning, Andrew Ng, and Demis Hassabis.
narenst over 7 years ago
The article mentions that GPUs are on average 50-200 times faster for deep learning; I'm curious how he arrived at that number. It has a lot to do with the code and the frameworks used. I haven't come across a good comparison, and most figures seem to be pulled out of thin air.
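For what it's worth, here is a minimal sketch of how one might measure such a figure; it assumes PyTorch and a CUDA GPU are available and only times a single matrix multiply, so the resulting ratio is illustrative, not a general deep-learning speedup.

    import time
    import torch

    def time_matmul(device, n=4096, repeats=10):
        a = torch.randn(n, n, device=device)
        b = torch.randn(n, n, device=device)
        torch.matmul(a, b)            # warm-up run
        if device == "cuda":
            torch.cuda.synchronize()  # wait for queued GPU work before timing
        start = time.perf_counter()
        for _ in range(repeats):
            torch.matmul(a, b)
        if device == "cuda":
            torch.cuda.synchronize()
        return (time.perf_counter() - start) / repeats

    cpu_t = time_matmul("cpu")
    if torch.cuda.is_available():
        gpu_t = time_matmul("cuda")
        print(f"CPU {cpu_t:.3f}s  GPU {gpu_t:.3f}s  ~{cpu_t / gpu_t:.0f}x")

Precision, matrix size, and the framework's kernels all shift the measured ratio, which is part of why published speedup figures vary so widely.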
madhadron over 7 years ago
I find this history confusing. Legendre guessing by hand? Spaghetti on the wall? No mention of the massive work of Laplace and others that led up to Legendre and Gauss, or Gauss's connection of the notion to probability? This is truly a bizarre view.

And then the idea that numerical optimization accounting for the slope was novel. How does he think that mathematicians calculated for the preceding centuries?

Linear regression springs fully formed in the 1950s and '60s? What happened to Fisher and Student and Pearson and all the rest?

Where's Hopfield? Where's Potts? Where's an awareness of the history of mathematics in general?
dpcx over 7 years ago
This seems like a great introduction to the history. I have a problem with it, though.

In the first example, the method compute_error_for_line_given_points is called with values 1, 2, [[3,6],[6,9],[12,18]]. Where did those values come from?

Later in that same example, there is an "Error = 4^2 + (-1)^2 + 6^2". Where did *those* values come from?

Later, there's another form: "Error = x^5 - 2x^3 - 2". What about these?

There seem to be magic formulae everywhere, with no real explanation in the article about where they came from. Without that, I have no way of actually understanding this.

Am I missing something fundamental here?
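For anyone else puzzled by that first example: a function with that name usually computes the sum of squared errors of a line y = m*x + b over a set of points. The sketch below is my reconstruction, not the article's code, and the argument order (b, m, points) is an assumption.

    def compute_error_for_line_given_points(b, m, points):
        """Sum of squared residuals of the line y = m*x + b over (x, y) points."""
        total_error = 0.0
        for x, y in points:
            residual = y - (m * x + b)  # how far the line misses this point
            total_error += residual ** 2
        return total_error

    # The call from the comment: b=1, m=2, and three (x, y) points.
    print(compute_error_for_line_given_points(1, 2, [[3, 6], [6, 9], [12, 18]]))

Each squared residual is one term of the error, which is where expressions like "4^2 + (-1)^2 + 6^2" come from; the exact numbers depend on which slope and intercept the article plugs in at that step.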
terrabytes over 7 years ago
Spot on. I struggled with the mainstream deep learning/machine learning MOOCs; I felt like they were too math heavy. However, I'm struggling with how to learn deep learning. I get polarized advice on it. Some argue that you need a degree or certificates from established MOOCs; others keep recommending that I do Kaggle challenges.

Has anyone managed to land a decent deep learning job without formal CS/machine learning training? How did you approach it?
melling over 7 years ago
"It's been used in Andrew Karpathy's deep learning course at Stanford."

Andrej Karpathy is now working at Tesla. I believe this is his course:

http://cs231n.stanford.edu/syllabus.html
hamilyon2 over 7 years ago
All of the scientists I love (even Heaviside!) in one article. I am so pleased!
amelius over 7 years ago
No mention of who invented convnets?
fnl over 7 years ago
Which is a long-winded way to show that deep learning isn't much more than a concatenation of glorified regression functions... :-)

<ducks because="had to get that one out there"/>

Edit: There we go with the downvotes. I knew deep learning guys can't stand this claim (but it's true, as the post itself goes to show at great length... :-))