I have always absolutely hated this diagram and think it should go away. I have also never seen anyone who actually understands its content share it as a useful pedagogical tool.<p>For example, with CNNs, you build up feature activation volumes based on the entire previous layer. In the diagram, each node in layer N connects to only two nodes in layer N-1. What are the nodes even supposed to represent? This is not how CNNs work. It explains nothing and is actually just confusing.<p>This entire diagram should be rewritten using the block diagrams from the actual papers.
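To make that point concrete, here's a quick PyTorch sketch (the layer sizes are made up for illustration) showing that every output channel of a conv layer is computed from all input channels, not two of them:

    import torch
    import torch.nn as nn

    # Hypothetical sizes: 64 input channels, 128 output channels, 3x3 kernel.
    conv = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3)

    # Each of the 128 filters spans ALL 64 input channels:
    # the weight tensor has shape (out_channels, in_channels, kH, kW).
    print(conv.weight.shape)  # torch.Size([128, 64, 3, 3])

    # One forward pass: the entire 64-channel activation volume
    # feeds every single output channel.
    x = torch.randn(1, 64, 32, 32)
    y = conv(x)
    print(y.shape)  # torch.Size([1, 128, 30, 30])

There's no sensible way to draw that as two edges between pairs of dots.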
Wow, this article came up both at the time and again in 2017, and I even see a little comment I wrote back then in the helpful link dang provided.<p>It looks very different to me now than it did then, mostly because for various reasons I actually know what all those networks are. A fair percentage aren't normally considered neural networks at all (belief networks, Markov chains...). Other models are quite old (Kohonen networks, so old I studied them at school in the 90s), and others are very broad categories that other classes may or may not fit into (feed-forward networks, autoencoders).<p>So the categories are essentially an incoherent mess, or a useful cheat sheet for going through the literature, take your pick.<p>I see this now, where back then I just saw an impressive/incoherent mess, and that makes me feel like maybe I'm learning something from my personal research project.
Discussed at the time: <a href="https://news.ycombinator.com/item?id=12751585" rel="nofollow">https://news.ycombinator.com/item?id=12751585</a><p>2017: <a href="https://news.ycombinator.com/item?id=15965159" rel="nofollow">https://news.ycombinator.com/item?id=15965159</a>
Note that while the article is originally from 2016, it has since been updated:<p>> [Update 22 April 2019] Included Capsule Networks, Differentiable Neural Computers and Attention Networks to the Neural Network Zoo; Support Vector Machines are removed; ...<p>The poster image was also updated.
Thanks for sharing. I remember designing my first neural network; my notepad was full of these dots everywhere. The dots-and-lines representation helped me a lot in visualizing what a layer is, what its inputs are, and what its output is.
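For anyone who hasn't made that mapping yet, here's a minimal NumPy sketch (sizes made up) of how the dots and lines translate to math for one dense layer: the dots are activations, and each line is one entry of the weight matrix:

    import numpy as np

    x = np.random.randn(3)     # 3 input dots
    W = np.random.randn(4, 3)  # one line per (output dot, input dot) pair: 12 lines
    b = np.zeros(4)            # one bias per output dot

    y = W @ x + b              # the 4 output dots of this layer
    print(y.shape)             # (4,)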