The AMA highlights one of Markdown's deficiencies. Prof. Hinton has attempted to number many of his responses in the highest-rated thread, but because each numbered point is separated by paragraphs of prose, every one starts a fresh list, so they're all coming up as 1. 1. 1. 1.
I'm having trouble understanding why the success of pooling would be deemed unfortunate. Max-pooling and average-pooling both summarize a window of contiguous features: max-pooling keeps only the most prominent (largest) activation, while average-pooling compresses the window into its mean. Saying this is unfortunate, from Hinton's standpoint, amounts to saying that pooling-like behavior is very unlikely to occur at the level of neuronal populations. At a physiological and psychological level, what would pooling equate to?
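For concreteness, here is a minimal NumPy sketch (not from the AMA; the function name and the toy feature map are mine) of what non-overlapping 2x2 pooling does to a feature map. Note that both variants discard exactly where within each window the strongest activation occurred, which is, as I understand it, the positional information Hinton's criticism centers on.

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Non-overlapping 2D pooling over size-by-size windows.

    x: 2D feature map whose side lengths are multiples of `size`.
    mode: "max" keeps the largest activation per window;
          "avg" compresses each window into its mean.
    """
    h, w = x.shape
    # Group the map into (row_block, row_within, col_block, col_within)
    windows = x.reshape(h // size, size, w // size, size)
    if mode == "max":
        return windows.max(axis=(1, 3))
    return windows.mean(axis=(1, 3))

feature_map = np.array([
    [1.0, 2.0, 0.0, 1.0],
    [3.0, 4.0, 1.0, 0.0],
    [0.0, 1.0, 5.0, 6.0],
    [1.0, 0.0, 7.0, 8.0],
])

print(pool2d(feature_map, mode="max"))  # [[4. 1.] [1. 8.]]
print(pool2d(feature_map, mode="avg"))  # [[2.5 0.5] [0.5 6.5]]
```

Either way, the pooled output says a strong feature was present somewhere in each window, but not where, so the precise spatial arrangement of features is lost.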
>I guess we should just train an RNN to output a caption so that it can tell us what it thinks is there. Then maybe the philosophers and cognitive scientists will stop telling us what our nets cannot do.<p>I wonder if he knew about the Stanford paper that demonstrates this? Or if he just guessed this would happen.<p><a href="http://cs.stanford.edu/people/karpathy/deepimagesent/" rel="nofollow">http://cs.stanford.edu/people/karpathy/deepimagesent/</a>
Url changed from http://www.kdnuggets.com/2014/12/geoff-hinton-ama-neural-networks-brain-machine-learning.html, which points to this.