Cool, with caveats. Although this is interesting for people who already know how neural networks are structured and roughly how backpropagation and its successor training algorithms work, it isn't particularly _informative_ as a visualization. It does show how easy it is to encode information visually, compared with how difficult it can be for the viewer to _decode_ that same information. This is a common problem with "information" visualizations, as opposed to "scientific data" visualizations (such as volumetric scan data or vector maps): there's no obvious physical correlate that we, as viewers, can use to help decode the information.