1.) For the umpteenth time, they're not black boxes. We can inspect <i>everything</i> in the structure.<p>2.) "a team of computer-vision researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL)" may have "described a method for peering into" the not-black box of a convnet two years ago, but Oxford researchers published on it in 2013 (a sketch of that earlier approach appears below).<p>3.) Gushing about how understanding convolutional networks can help confirm the grandmother cell hypothesis in real brains is embarrassing under any circumstances, but it is especially so when a thorough examination of real brains was just published to the considerable detriment of said hypothesis.
<a href="http://www.cell.com/cell/fulltext/S0092-8674(17)30538-X" rel="nofollow">http://www.cell.com/cell/fulltext/S0092-8674(17)30538-X</a><p>Nothing wrong with making visualizations of your nets, but I'm less than impressed by the reporting.
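Regarding point 2: the 2013 Oxford work (presumably Simonyan, Vedaldi, and Zisserman's "Deep Inside Convolutional Networks") peered into the network with gradient-based saliency maps: backpropagate a class score to the input pixels and see which pixels the score is most sensitive to. A minimal PyTorch sketch of that idea, with a random tensor standing in for a real image and VGG-16 as an arbitrary choice of pretrained model:

    # Gradient-based saliency: which input pixels most affect the top class score?
    import torch
    import torchvision.models as models

    model = models.vgg16(pretrained=True).eval()
    img = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real photo

    score = model(img)[0].max()   # score of the highest-scoring class
    score.backward()              # d(score)/d(pixel) for every input pixel
    saliency = img.grad.abs().max(dim=1)[0]  # collapse RGB channels to one 224x224 map

Bright spots in the resulting map are the pixels the top class score depends on most, which is one straightforward way of "peering into" a convnet.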
There are two things that ML/AI developers are going to have to deal with once the technologies become widespread in things like self-driving cars, hiring/firing decisions, and the criminal justice system:<p>"Why did it do that?"<p>and<p>"Make it stop doing that!"<p>The first time a self-driving car accident results in a court case, these things are going to come up. I very much doubt that people are going to be satisfied without clear explanations, and they shouldn't be. When these systems take on roles of increasing importance to society, some level of accountability is going to be necessary.
If I'm reading this correctly, it's old news. They're just tracing the activation of kernels. You can see examples in this Wikipedia article: <a href="https://en.wikipedia.org/wiki/Kernel_(image_processing)" rel="nofollow">https://en.wikipedia.org/wiki/Kernel_(image_processing)</a><p>This one's cool too: <a href="http://scs.ryerson.ca/~aharley/vis/conv/" rel="nofollow">http://scs.ryerson.ca/~aharley/vis/conv/</a>
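For anyone who wants to try that kind of kernel-activation tracing themselves, here is a minimal PyTorch sketch using a forward hook. The choice of VGG-16, the layer index, and the image path are all placeholders, not anything from the article:

    # Capture the per-kernel activation maps of one conv layer with a forward hook.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    model = models.vgg16(pretrained=True).eval()

    activations = {}
    model.features[0].register_forward_hook(
        lambda module, inp, out: activations.update(conv1=out.detach()))

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    img = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder path

    with torch.no_grad():
        model(img)

    # activations["conv1"] has shape (1, 64, 224, 224): one 2-D map per kernel,
    # which is what visualizations like the linked demo render as images.
    first_kernel_map = activations["conv1"][0, 0]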
I like it. I've seen experiments that break out eigenvectors of a neural network, which is like being given a dictionary in a foreign language. It's precise, but you still have to figure out what each eigenvector means. This technique is like having a translating dictionary. It's less precise, but it lets you reason about the network with a familiar visual vocabulary.
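To make the dictionary analogy concrete, here is a sketch of one common way such "dictionary entries" get broken out: the singular vectors of a layer's weight matrix (equivalently, eigenvectors of WᵀW). The random matrix stands in for real trained weights, since the parent doesn't say which experiments it means:

    # Singular vectors of a layer's weights as "dictionary entries" in input space.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((256, 784))  # stand-in for a trained 784->256 layer

    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Each row of Vt is a unit direction in input space; for 28x28 inputs it can
    # be reshaped and rendered as an image. Precise, but not self-explanatory:
    # you still have to stare at it to decide what the direction "means".
    entry = Vt[0].reshape(28, 28)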