The concept is very cool, but it's not surprising that dimensionality reduction through a non-linear process will produce regions of the input space that yield incorrect (and weird) results. Our visual system, while not the same as these systems, is extremely well developed and robust, yet the list of optical illusions that can fool us is quite long. In this study, the "optical illusions" are surprising mainly because they aren't like anything that would fool a human.

This isn't to take away from the research; the most interesting result is just how close these misclassified images are to valid inputs.

But again, this isn't some fatal flaw. This summary completely neglects that the paper *also* recommends that -- just as distorted images are added to training sets today (you wouldn't want something common like optical aberration from the camera lens screwing up your classifier) -- these adversarial examples should be added to training sets in the future to mitigate their effect.

> *In some sense, what we describe is a way to traverse the manifold represented by the network in an efficient way (by optimization) and finding adversarial examples in the input space. The adversarial examples represent low-probability (high-dimensional) “pockets” in the manifold, which are hard to efficiently find by simply randomly sampling the input around a given example. Already, a variety of recent state of the art computer vision models employ input deformations during training for increasing the robustness and convergence speed of the models [9, 13]. These deformations are, however, statistically inefficient, for a given example: they are highly correlated and are drawn from the same distribution throughout the entire training of the model. We propose a scheme to make this process adaptive in a way that exploits the model and its deficiencies in modeling the local space around the training data.*[1]

[1] http://cs.nyu.edu/~zaremba/docs/understanding.pdf
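
For concreteness, a minimal sketch of the idea in that passage: search near a training example for a small perturbation that degrades the model's prediction, then fold the perturbed example back into the training set. The paper's actual method is a box-constrained L-BFGS that minimizes the perturbation norm; the sketch below substitutes a simpler iterative gradient step, and the toy linear classifier and names like `adversarial_example` are placeholders, not anything from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear softmax classifier standing in for a trained network
    # (hypothetical weights: 10 classes, 28x28 flattened inputs).
    W = rng.normal(size=(10, 784))
    b = np.zeros(10)

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def loss_and_grad(x, y):
        """Cross-entropy loss at input x, and its gradient w.r.t. x."""
        p = softmax(W @ x + b)
        loss = -np.log(p[y] + 1e-12)
        grad_x = W.T @ (p - np.eye(10)[y])  # d(loss)/dx for a linear softmax model
        return loss, grad_x

    def adversarial_example(x, y, eps=0.1, steps=20, lr=0.01):
        """Ascend the loss w.r.t. the input while keeping the perturbation
        inside a small box around x and the pixels in [0, 1] -- a crude
        stand-in for the paper's norm-minimizing search."""
        x_adv = x.copy()
        for _ in range(steps):
            _, g = loss_and_grad(x_adv, y)
            x_adv = np.clip(x_adv + lr * np.sign(g), x - eps, x + eps)
            x_adv = np.clip(x_adv, 0.0, 1.0)
        return x_adv

    # Usage: perturb one training image and append it, with its original
    # label, so the next training pass sees the "pocket" it came from.
    x = rng.random(784)
    y = 3
    x_adv = adversarial_example(x, y)
    augmented_X = np.vstack([x[None, :], x_adv[None, :]])
    augmented_y = np.array([y, y])

The clip to a small box around x is just there to keep the perturbation tiny, mirroring the paper's point that these pockets sit extremely close to correctly classified inputs; making this loop part of training is the "adaptive" scheme the quoted passage proposes.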