Good to see someone testing the limits of neural nets, rather than just squeezing a few more percent of performance out of an artificial benchmark.<p>That said, is this result really all that surprising? Especially given the results in that 2015 paper on fooling DNNs, and visualization experiments à la Deep Dream.<p>Unless you believe the networks are "painting" stuff from scratch, Deep Dream demonstrated that neural networks capture and store certain chunks of their training data, and you can get those back out if you're clever enough.<p>That other paper[1] demonstrated that a trained DNN can classify noise as a particular label with very high confidence, as long as you construct that noise carefully enough. This hints that DNNs may do matching by applying some complex transformation that <i>usually</i> yields the correct answer, but does not necessarily capture the underlying patterns. (Kind of like predicting the weather from telltale signs, without knowing anything about air pressure, currents and so on.)<p>[1] - <a href="http://www.evolvingai.org/fooling" rel="nofollow">http://www.evolvingai.org/fooling</a>
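The core idea behind that fooling result can be sketched in a few lines. This is not the evolutionary method from the actual paper, just a hypothetical stand-in: a fixed random linear classifier and plain gradient ascent on the input, which already shows how "carefully constructed noise" can drive one class's confidence arbitrarily high:

```python
import numpy as np

# Toy stand-in model: a fixed linear "classifier" over 100-dim inputs.
# (The real fooling paper evolves images against trained deep nets;
# this only illustrates the confidence-maximization idea.)
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 100))  # 10 classes, 100 input features

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def confidence(x, target):
    return softmax(W @ x)[target]

target = 3
x = rng.normal(size=100) * 0.01       # start from faint random noise
for _ in range(200):
    p = softmax(W @ x)
    # Gradient of log P(target | x) w.r.t. the input
    grad = W[target] - p @ W
    x += 0.05 * grad                  # ascend toward high target confidence

# To a human the input is still structured noise, but the model
# is now highly confident it belongs to the target class.
print(confidence(x, target))
```

Nothing about this process requires the input to look like anything from the training set, which is exactly the worrying part.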