The end result is a sparse network that more closely "mimics the brain". Is this as significant as it sounds, or is there something I am missing here?<p>Original paper here - <a href="https://numenta.com/assets/pdf/research-publications/papers/Sparsity-Enables-50x-Performance-Acceleration-Deep-Learning-Networks.pdf" rel="nofollow">https://numenta.com/assets/pdf/research-publications/papers/...</a>
Seems like Numenta is out of ideas about how the brain works. What happened to them?<p>In this work they simply took a standard neural network, sparsified its weights and activations, trained it with standard gradient descent and backpropagation, and called it a day.<p>This would have been unthinkable 10 years ago, when they insisted on the biological plausibility of their algorithms.
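<p>If I'm reading the paper right, the recipe really is that simple: a static binary mask on the weight matrix plus a k-winners-take-all activation, then ordinary backprop. A rough sketch of the forward pass (sizes, names, and the 50% mask density are mine for illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes -- not taken from the paper.
n_in, n_out, k = 16, 8, 2           # keep the top-2 of 8 units active

# A fixed binary mask makes the weight matrix sparse (~50% zeros here).
W = rng.standard_normal((n_out, n_in))
mask = (rng.random(W.shape) < 0.5).astype(W.dtype)
W_sparse = W * mask                  # sparsified weights

def k_winners(x, k):
    """Keep the k largest pre-activations, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(x)[-k:]         # indices of the k winning units
    out[idx] = x[idx]
    return out

x = rng.standard_normal(n_in)
a = k_winners(W_sparse @ x, k)       # sparse activations
```

During training you'd just backprop through this as usual; the mask stays fixed, so masked weights never receive updates through the zeroed entries.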