
Geoff Hinton AMA – Deep Learning's Biological Inspiration

55 points by zackchase over 10 years ago

4 comments

timClicks over 10 years ago
The AMA highlights one of the deficiencies of Markdown. Prof Hinton has attempted to number many of his responses to the highest rated thread, but they're all coming up as 1. 1. 1. 1.
msaroufim over 10 years ago
I'm having trouble understanding why the success of pooling would be deemed unfortunate. Max-pooling and average-pooling essentially summarize a group of contiguous features: max-pooling keeps only the most prominent/largest value, while average-pooling compresses the group into its mean. Saying this success is unfortunate from Hinton's standpoint amounts to saying that it is very unlikely that any sort of pooling behavior occurs at the level of neuronal populations. At a physiological and psychological level, what would pooling equate to?
Comment #8726466 not loaded.
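For concreteness, here is a minimal NumPy sketch of the two pooling operations described in the comment above, applied over non-overlapping windows of a 1-D feature vector. The window size, example values, and function names are illustrative assumptions, not anything from the AMA itself.

```python
import numpy as np

def max_pool_1d(x, window=2):
    """Keep only the largest value in each non-overlapping window."""
    x = x[: len(x) - len(x) % window]      # drop any ragged tail
    return x.reshape(-1, window).max(axis=1)

def avg_pool_1d(x, window=2):
    """Compress each non-overlapping window into its mean."""
    x = x[: len(x) - len(x) % window]
    return x.reshape(-1, window).mean(axis=1)

features = np.array([0.1, 0.9, 0.4, 0.3, 0.8, 0.2])
print(max_pool_1d(features))   # [0.9 0.4 0.8]
print(avg_pool_1d(features))   # [0.5  0.35 0.5 ]
```

In both cases each window collapses to a single summary value, so the exact position of the activity inside the window is discarded.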
bglazer over 10 years ago
> I guess we should just train an RNN to output a caption so that it can tell us what it thinks is there. Then maybe the philosophers and cognitive scientists will stop telling us what our nets cannot do.

I wonder if he knew about the Stanford paper that demonstrates this? Or if he just guessed this would happen.

http://cs.stanford.edu/people/karpathy/deepimagesent/
Comment #8727762 not loaded.
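As a rough illustration of the idea quoted above (this is not Hinton's proposal or the Stanford model, just a hypothetical PyTorch sketch): a small CNN summarizes the image into a feature vector, which initializes an LSTM that emits caption tokens. All layer sizes, the vocabulary size, and the name CaptionNet are made up for the example.

```python
import torch
import torch.nn as nn

class CaptionNet(nn.Module):
    """Tiny encoder-decoder: CNN features condition an LSTM that emits caption tokens."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        # toy CNN encoder: 3x64x64 image -> hidden_dim feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, hidden_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # the image summary initializes the LSTM hidden state
        h0 = self.encoder(images).unsqueeze(0)   # (1, B, hidden)
        c0 = torch.zeros_like(h0)
        emb = self.embed(captions)               # (B, T, embed)
        hidden, _ = self.lstm(emb, (h0, c0))     # (B, T, hidden)
        return self.out(hidden)                  # (B, T, vocab) token logits

model = CaptionNet(vocab_size=1000)
images = torch.randn(4, 3, 64, 64)
captions = torch.randint(0, 1000, (4, 12))
logits = model(images, captions)
print(logits.shape)                              # torch.Size([4, 12, 1000])
```

Training would then minimize cross-entropy between these logits and the next token of a reference caption.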
dang over 10 years ago
Url changed from http://www.kdnuggets.com/2014/12/geoff-hinton-ama-neural-networks-brain-machine-learning.html, which points to this.