I am curious why the network isn't always in an "interference" state, but sometimes collapses into a "restricted" ONB.<p>> <i>the neural networks we observe in practice are in some sense noisily simulating larger, highly sparse networks</i><p>This seems somewhat related to a point made by Ilya Sutskever here [1]: NNs can be thought of as an approximation to the Kolmogorov compressor. Speculating, one could say any network is a projection of the ideal compressor (which arguably perfectly represents all n features in an n-dimensional ONB) into a lower-dimensional space, hence the interference. But why is there not always such interference?<p>[1] <a href="https://www.youtube.com/watch?v=AKMuA_TVz3A">https://www.youtube.com/watch?v=AKMuA_TVz3A</a>
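<p>For what it's worth, here's a minimal numpy sketch of the superposition picture in that quote (all names and sizes are illustrative, not from the article): embed n sparse features into d < n dimensions along nearly orthogonal random directions, and the interference shows up as small cross-talk when you read the features back out.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, d_model = 200, 32          # many features, few dimensions
W = rng.standard_normal((n_features, d_model))
W /= np.linalg.norm(W, axis=1, keepdims=True)  # unit feature directions

# A sparse input: only a few features active at once.
x = np.zeros(n_features)
active = rng.choice(n_features, size=4, replace=False)
x[active] = 1.0

h = x @ W            # compress: superpose active features in d_model dims
x_hat = h @ W.T      # read out: dot h with every feature direction

# Active features come back near 1; inactive ones pick up small
# interference terms from the off-diagonal overlaps in W @ W.T.
print("active readouts:  ", np.round(x_hat[active], 2))
print("max interference: ", np.round(np.abs(np.delete(x_hat, active)).max(), 2))
```

If instead d_model >= n_features and the rows of W were orthonormal, the readout would be exact with zero cross-talk, which is (as I understand it) the "restricted ONB" regime the question is about.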
This is one of the best presentations of such a complex topic, or <i>any</i> topic for that matter, that I've seen in a long time!<p>To the authors, if they happen to find themselves here, I say: bravo!