
Deep Neural Networks Are Easily Fooled

183 points by sinwave, over 10 years ago

18 comments

zackchase, over 10 years ago
These arguments were introduced by Szegedy et al. earlier this year in this paper: http://cs.nyu.edu/~zaremba/docs/understanding.pdf. Geoff Hinton addressed this matter in his Reddit AMA last month.

The results are not specific to neural networks (similar techniques could be used to fool logistic regression). The problem is that, ultimately, a trained network relies heavily on certain activation pathways, which can be precisely targeted (given full knowledge of the network) to fool it into misclassifying data points that, to a human, seem imperceptibly changed from ones it classifies correctly. It is important to understand adversarial cases, but unreasonable to get carried away with sweeping pronouncements about what this does or doesn't say about all neural networks, let alone intelligence generally, or the entire enterprise of AI research, as seems to happen after a splashy headline.
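For readers who want the idea concrete, here is a minimal sketch of that gradient trick applied to logistic regression, which the comment notes is also vulnerable. Every weight and input below is a made-up stand-in, not anything from the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=784)          # stand-in for trained weights
    b = 0.0
    x = rng.normal(size=784)          # an input the model classifies correctly

    def predict(x):
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # P(class = 1)

    # The gradient of the score w.r.t. the input is just w, so a tiny
    # per-pixel nudge along sign(w) moves the prediction as fast as
    # possible while changing no pixel by more than eps.
    eps = 0.05
    x_adv = x + eps * np.sign(w)

    print(predict(x), predict(x_adv))  # small input change, large score change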
akiselev, over 10 years ago
We humans are as brilliant at pattern matching as we are at finding patterns that aren't really there, not just with our vision but with our understanding of probability, randomness, and even cause and effect. Thankfully, our brains are very complicated machines that can recognize a stucco wall or a cloud and invalidate the false identification of a face or unicorn or whatever based on that context.

With that in mind, is it really surprising that [m]any of our attempts at emulating intelligence can be easily fooled? An untold number of species have evolved to do exactly the same thing: exploit the pattern-matching errors of predators to disguise themselves as leaves or tree branches or venomous animals that the predator avoids like the plague. DNNs are relatively new and we've got a long way to go, so is this a fundamental problem with the theoretical underpinnings, or do we just need to train them with far more contextualized data (for lack of a better phrase)?

Is there any chance of us having accurate DNNs if we can, as if gods during the course of natural selection, peek into the brain of predators (algorithms) and reverse engineer failures (disguises for prey) like this?
monochr, over 10 years ago
I have nothing intelligent to say without reading the full paper...

...But how different is this from the various optical illusions humans fall for? I mean, we can't exactly tell the difference between a rabbit and a duck ourselves [1], so isn't it just a universal property of all neural-network-like systems that there will be huge areas of misclassification for which there hasn't been specific selection?

[1] http://mathworld.wolfram.com/Rabbit-DuckIllusion.html
cLeEOGPw, over 10 years ago
The vulnerability exploits imperfections in the NN weights. To avoid this kind of mismatch, all you need to do is shift the same image by one pixel (assuming recognition is done per pixel), and then cross-check the results to see whether an error occurred.

The human brain recognizes better because it can sample the image many times from many slightly different angles. There's a reason saccades exist: http://en.wikipedia.org/wiki/Saccade
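A rough sketch of the proposed cross-check, with `classify` standing in for any image classifier that returns a label (nothing here comes from the paper):

    import numpy as np

    def shifted(img, dy, dx):
        # Whole-pixel shift; np.roll wraps at the borders, fine for a sketch.
        return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

    def cross_checked_label(img, classify):
        labels = {classify(shifted(img, dy, dx))
                  for dy, dx in [(0, 0), (0, 1), (1, 0)]}
        # Agreement across shifts -> trust the label; disagreement -> flag it.
        return labels.pop() if len(labels) == 1 else None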
Animats, over 10 years ago
There was a similar result a few months ago for another type of machine learning. (That's note 26 in this paper.) The problem seemed to be that the training process produces results which sit too near boundaries in some dimension and are thus very sensitive to small changes. Such models are subject to a sort of "fuzzing attack", where the input is changed slightly and the output changes drastically.

There are two parts of this process that are kind of flaky. The problem above is one of them. The other is feature extraction, where the feature set is learned from the training set. The features thus selected are chosen somewhat randomly and are very dependent on the training set. It's amazing to me that it works at all. Earlier thinking was to have some canonical set of features (vertical lines, horizontal lines, various kinds of curves, etc.), the idea being to mimic early vision, the processing that happens in the retina. Automatic feature choice apparently outperforms that, but may not really be working as well as previously believed.

It's great seeing all this progress being made.
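One way to make that "fuzzing attack" framing concrete, as a sketch: perturb an input slightly many times and count how often the model's answer flips; a high rate suggests the input sits near a decision boundary. `classify` and the noise scale below are illustrative stand-ins:

    import numpy as np

    def flip_rate(img, classify, eps=0.01, trials=100, seed=0):
        rng = np.random.default_rng(seed)
        base = classify(img)
        # Count how often small random perturbations change the answer.
        flips = sum(classify(img + eps * rng.normal(size=img.shape)) != base
                    for _ in range(trials))
        return flips / trials   # high rate -> input sits near a boundary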
benanne, over 10 years ago
The discussion about this paper on r/MachineLearning is quite insightful and worth reading: http://www.reddit.com/r/MachineLearning/comments/2onzmd/deep_neural_networks_are_easily_fooled_high/
MrQuincle, over 10 years ago
I'm starting to like this way of looking for false positives and false negatives more and more.

It would be interesting to introduce some aspects known from the human brain and see if the misclassified items "move" in some conceptually understandable direction.

* Introduce time. Humans are not just image classifiers; humans are able to recognize objects in visual streams of images. Such streams can be seen as latent variables that introduce correlations over time as well as space. What constitutes spatial noise might very well be influenced in our brains by the temporal correlations we see as well.

* Introduce saccades. A computer only sees a picture from one viewpoint. Our eyes undergo saccades and microsaccades. That's an unfair advantage for us, being able to see a picture multiple times from slightly different directions! (A rough sketch of this one follows after the list.)

* Introduce the body. We can move around an object. This again introduces correlations that 1) are available to us, and 2) might define priors even when we are not able to move around the picture. In other words, we can (unconsciously) rotate things in our head.
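A hedged sketch of the saccade item above: average a stand-in model's probabilities over a few pixel-jittered views instead of trusting a single glance. `predict_proba`, the jitter size, and the view count are all illustrative:

    import numpy as np

    def saccade_predict(img, predict_proba, views=8, eps=1, seed=0):
        rng = np.random.default_rng(seed)
        # Random whole-pixel jitters in [-eps, eps] along each axis.
        shifts = rng.integers(-eps, eps + 1, size=(views, 2))
        probs = [predict_proba(np.roll(np.roll(img, dy, 0), dx, 1))
                 for dy, dx in shifts]
        return np.mean(probs, axis=0)   # consensus over several "fixations"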
larrydag, over 10 years ago
Another journal paper covering the same thing:

http://arxiv.org/abs/1312.6199

And the article that references it:

http://www.i-programmer.info/news/105-artificial-intelligence/7352-the-flaw-lurking-in-every-deep-neural-net.html
comex, over 10 years ago
One might say that Picasso's Bull is a human equivalent of this: he "evolved" a sequence of images and ended up with something that has very few features of a bull, but nevertheless gets recognized by humans as such.

Then again, unlike the neural networks in the paper, humans would be capable of classifying abstract images into a separate category if asked.
ifdefdebug, over 10 years ago
I know literally nothing about this science, so the paper had me concerned about the following question:

Given a visual face-recognition door lock or similar system: if I want to break such a door lock, can I install the same system at home, train it with secretly taken pictures of an authorized person, and evolve some kind of key picture with my home system until I can show it to the target door lock and fool it into giving me access?

OK, this is a very simplified way to put the question, but is that something this paper would imply to be possible (in a more sophisticated way)?
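For what it's worth, the adversarial-examples literature suggests roughly that shape of attack, with transfer to the real lock being the open question. A toy hill-climbing sketch, assuming a hypothetical `surrogate_score` (your home-trained system's confidence that an image shows the authorized person):

    import numpy as np

    def evolve_key_image(surrogate_score, shape=(32, 32), steps=2000, seed=0):
        rng = np.random.default_rng(seed)
        img = rng.random(shape)               # start from random pixels
        best = surrogate_score(img)
        for _ in range(steps):
            candidate = np.clip(img + 0.05 * rng.normal(size=shape), 0.0, 1.0)
            score = surrogate_score(candidate)
            if score > best:                  # keep mutations that fool the copy more
                img, best = candidate, score
        return img   # whether it transfers to the target lock is the open question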
jacobsimon, over 10 years ago
I don't have too much of a problem with this, actually, because a lot of the "nonsense" images bear a strong resemblance to the objects. The gorilla images clearly look like a gorilla, and the Windsor tie images clearly show a collar and a tie. The image coloring is way off, of course, but the gradients seem about right.
crimsonalucard, over 10 years ago
If we could find out the selection criteria behind each layer of the human visual cortex, we could possibly build something more accurate.

Although I doubt the visual cortex is a simple feed-forward network like the one used in the paper. It likely has a nonlinear structure that is significantly more complex.
bitL, over 10 years ago
So deep neural networks are like artists, able to see structure in chaos? Just as Michelangelo could look at a large stone and immediately see David in it, DNNs recognize lions in white noise? We should applaud the introduction of fantasy and imagination into science ;-)
fallenpegasus, over 10 years ago
What this tells me is that there probably exist deeply weird images that would be recognized as something by one person, or by very few people, but would be just an unrecognizable mash of colors and lines to everyone else.
yummyfajitas, over 10 years ago
I wish they had explained why evolutionary algorithms were used. They seem to suggest gradient ascent also works; I wonder what the key criteria are for constructing good adversarial images.
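In miniature, the gradient-ascent variant looks like this: start from noise and repeatedly step the input (not the weights) up the gradient of a class score. The quadratic `score` below is a toy stand-in for a network's class logit, so this only illustrates the mechanics:

    import numpy as np

    rng = np.random.default_rng(1)
    target = rng.normal(size=256)

    def score(x):                  # toy differentiable "class confidence"
        return -np.sum((x - target) ** 2)

    def grad_score(x):             # its gradient with respect to the input
        return -2.0 * (x - target)

    x = rng.normal(size=256)       # start from random noise
    for _ in range(200):
        x += 0.01 * grad_score(x)  # ascend the input; the weights stay fixed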
hippich, over 10 years ago
This raises an interesting question: is it possible to hack the human brain? Will a specific set of stimuli make the brain react in a certain way?
SeanDav, over 10 years ago
If the shoe were on the other foot, I can imagine a race of supercomputer AIs administering a similar test to humans and saying: look at the puny human vision system. It is easily fooled by simple optical illusions that wouldn't fool even a 2-year-old AI. Clearly there are questions about the generality of the human vision system, and perhaps it is not fit for purpose...
robg, over 10 years ago
So are human brains.