Optical illusions that flummox computers

137 points by pixelcort about 8 years ago

15 comments

mholt about 8 years ago

I'm finishing a survey paper that discusses research on adversarial examples, plus about 9 or 10 other attacks on, or weaknesses of, neural networks (and machine learning models in general). The overall conclusion: much like the early Internet, we're rapidly advancing towards machine learning tech that works but isn't secure. And 20 years later, we're still trying to make the Internet secure...

If neural networks are here to stay, maybe we should slow down their public deployment for a moment and understand them better first. It would be ideal to find fundamental structural/algorithmic changes that can harden them, rather than relying on heuristics or other "wrappers" to make input/output safe to use in autonomous environments. The more of that security is "extra", the less it will be implemented. (We see this rampantly on the web today with HTTPS.)
candiodari about 8 years ago

TL;DR: high-energy overlays (sudden changes from one pixel to the next) "fool" AIs. If you think about it you can see why: they change the statistical properties of an image a lot "without" disturbing it.

Less so, obviously, if you first downsample the image or otherwise soften (or ...) it with filters. Nor do they fool neural networks with attention (at some point those simply decide the patch isn't worth looking at and identify the picture by something else).

And the ridiculous example given (the misidentified panda) does not work without being able to read the mind of the neural network.

Most neural network classification mistakes are "understandable" (e.g. look at the misidentified carousel); those are really 99.99% or more of the total mistakes made. Also, that network probably needs more Indian elephants in its training set (kids likewise make silly mistakes classifying animals they've never seen, or have only seen a very few times [1]).

Given how a lot of animals look, I wonder if this doesn't work on "real" brains as well. I for one have trouble seeing zebras in pictures, and it's of course not for lack of contrast. Counting them or accurately judging distance is just out of the question. But many animals look way more colorful and contrast-rich than seems advisable, from chickens to ladybugs, and of course peacocks.

A number of optical illusions seem based on high-contrast patterns being included in images, especially when, as in the examples here, the high-contrast patterns don't line up with the objects in the image (e.g. move a vertical-and-horizontal slit filter over an image and you will not be able to see through it, even though no single freeze frame has that problem).

[1] https://www.youtube.com/watch?v=bnJ8UpvdTQY
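A minimal sketch of the softening defense this comment describes, assuming a PyTorch image classifier. The model, kernel size, and sigma are hypothetical stand-ins, and blurring trades some clean accuracy for robustness:

```python
import torch
import torchvision.transforms.functional as TF

def classify_with_smoothing(model: torch.nn.Module,
                            image: torch.Tensor,
                            kernel_size: int = 5,
                            sigma: float = 1.5) -> torch.Tensor:
    """Low-pass filter the input before classifying, damping the
    high-energy pixel-to-pixel changes adversarial overlays rely on.

    image: (N, C, H, W) tensor with values in [0, 1].
    Returns per-class probabilities.
    """
    smoothed = TF.gaussian_blur(image,
                                kernel_size=[kernel_size, kernel_size],
                                sigma=[sigma, sigma])
    with torch.no_grad():
        probs = model(smoothed).softmax(dim=-1)
    return probs
```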
zeteo about 8 years ago

Burying the lede:

> The fact that the same fooling images can scramble the "minds" of AI systems developed independently by Google, Mobileye, or Facebook, reveals weaknesses that are apparently endemic to contemporary AI as a whole. [...] "All these networks are agreeing that these crazy and non-natural images are actually of the same type. That level of convergence is really surprising people."
itchyjunk about 8 years ago

In nature, the natural neural network called the brain seems to weigh information from multiple sources instead of relying on just one. Not just across different senses, but even within the same sense: vision, for example, gets input from two eyes, and if the data from the two is too inconsistent, the brain sometimes drops it.

Machine learning systems will probably use similar tricks at some point. You might need to double (or more) the resources to process the data twice, but you end up with a harder-to-fool system, at least against the current adversarial attacks.

------------------

[0] https://www.scientificamerican.com/article/two-eyes-two-views/

[1] http://www.bbc.com/future/bespoke/story/20150130-how-your-eyes-trick-your-mind/
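A sketch of that "two eyes" idea, assuming two independently trained PyTorch classifiers (the models here are hypothetical): accept a prediction only when both agree, and flag the input otherwise:

```python
import torch

def fused_predict(model_a: torch.nn.Module,
                  model_b: torch.nn.Module,
                  image: torch.Tensor,
                  threshold: float = 0.5):
    """Accept a prediction only when two independent models agree.

    Returns (labels, accepted); accepted is False for inputs the two
    "eyes" disagree on, mimicking the brain dropping inconsistent data.
    """
    with torch.no_grad():
        probs_a = model_a(image).softmax(dim=-1)
        probs_b = model_b(image).softmax(dim=-1)
    agree = probs_a.argmax(dim=-1) == probs_b.argmax(dim=-1)
    combined = (probs_a + probs_b) / 2
    accepted = agree & (combined.max(dim=-1).values >= threshold)
    return combined.argmax(dim=-1), accepted
```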
amelius about 8 years ago

What happens if you train the system with these optical illusions in place, as well as the original images? Will it become harder to find new illusions? Or will illusions always be able to trick the system, no matter how many illusions you trained with?

Remark: I noticed that even a watermark in the lower left of the image (as you see on TV) can totally mess up a deep learning model's predictions.
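One concrete version of "training with the illusions in place" is adversarial training: regenerate perturbed copies of each batch as the model changes and train on both. A minimal FGSM-based sketch, assuming a PyTorch model, optimizer, and inputs in [0, 1]; whether this also stops *new* illusions is exactly the open question the comment asks:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """One-step FGSM: nudge each pixel in the direction that raises the loss."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y):
    """Train on the clean batch plus its freshly generated adversarial copy."""
    x_adv = fgsm_perturb(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(torch.cat([x, x_adv])), torch.cat([y, y]))
    loss.backward()
    optimizer.step()
    return loss.item()
```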
paulsutter about 8 years ago

The real issue here is that ImageNet doesn't have enough funny glasses. For the same reason detection networks will think that gravel is broccoli: a network learns just enough to distinguish between the categories it's presented with.

Improvements in datasets, transfer learning, and online learning will help. Unfortunately this underscores the issue that giants such as Google have more pictures of funny glasses than anyone else...
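A sketch of the transfer-learning route, assuming torchvision: start from a backbone pretrained on a large dataset and retrain only a fresh head on data that actually covers the missing cases (funny glasses included). The class count is a placeholder:

```python
import torch.nn as nn
from torchvision import models

def build_finetune_model(num_classes: int) -> nn.Module:
    """Freeze pretrained ImageNet features; train only a new classifier head."""
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    for param in backbone.parameters():
        param.requires_grad = False   # keep the general-purpose visual features
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone                   # only the new head has gradients enabled
```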
mcguire about 8 years ago

> To add to the difficulty, it's not always clear why certain attacks work or fail. One explanation is that adversarial images take advantage of a feature found in many AI systems known as "decision boundaries." These boundaries are the invisible rules that dictate how a system can tell the difference between, say, a lion and a leopard. A very simple AI program that spends all its time identifying just these two animals would eventually create a mental map. Think of it as an X-Y plane: in the top right it puts all the leopards it's ever seen, and in the bottom left, the lions. The line dividing these two sectors — the border at which lion becomes leopard or leopard a lion — is known as the decision boundary.

I am not sure I buy this theory. Moving an image across a nearby boundary shouldn't result in the image producing a *higher* confidence value, should it?

I'm thinking of the panda/gibbon example in the article.
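For what it's worth, a toy two-class linear model shows how confidence can indeed go *up* after crossing: softmax confidence grows with distance past the boundary, so a small step along the weight direction can land the input deep in the other class's region. All numbers below are made up for illustration:

```python
import numpy as np

w = np.array([2.0, -1.0])   # decision boundary: w . x = 0 (lion vs. leopard)
x = np.array([0.1, 0.3])    # barely on the lion side: w . x = -0.1

def leopard_confidence(point):
    """Softmax probability of 'leopard' for the logit pair (0, w . point)."""
    return 1.0 / (1.0 + np.exp(-(w @ point)))

print(leopard_confidence(x))               # ~0.48: near the boundary, uncertain
x_adv = x + 0.8 * w / np.linalg.norm(w)    # a small step along w
print(leopard_confidence(x_adv))           # ~0.84: crossed and now confident
```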
JulianMorrison about 8 years ago

Uhhh, this has me a little worried that they are digging into territory that could contain basilisks. Humans are neural networks. Please do not generalise a means of hacking me? Thank you and much obliged.
rhaps0dy about 8 years ago

"My, what a shiny red motorbike."

Totally what happens inside of a NN :D
adangert about 8 years ago

You could probably extend this to AI magic tricks, where you make certain objects appear to do things they're not supposed to, perhaps with an AI viewing the video.
verroq about 8 years ago

Note that to generate these perturbations you need access to the underlying model. The attack is just optimising towards a specific class, with an added cost for visual differences.
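A sketch of that optimisation, assuming white-box access to a PyTorch model: gradient-descend toward a chosen target class while an L2 penalty keeps the perturbation visually small. The step count and weights are arbitrary:

```python
import torch
import torch.nn.functional as F

def targeted_attack(model, image, target_class, steps=100,
                    lr=0.01, visual_cost=1.0):
    """image: (1, C, H, W) in [0, 1]. Returns a visually similar image
    optimised to be classified as target_class."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        adv = (image + delta).clamp(0.0, 1.0)
        loss = (F.cross_entropy(model(adv), target)
                + visual_cost * delta.pow(2).mean())  # penalise visible change
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (image + delta).clamp(0.0, 1.0).detach()
```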
McKayDavis about 8 years ago

Clever two-level pun with "Hi" hidden in the Magic Eye <-> Magic-AI.
mrkgnao about 8 years ago

More blatant "SEO walks into a headline" jokes, anyone?
rochellle about 8 years ago

But these aren't optical illusions!
spyckie2 about 8 years ago

Actually a pretty interesting article.

The ease with which you can 'fool' machine learning right now adds an additional layer to practical machine learning in the wild: the risk of malicious attacks.

Imagine someone putting up a lawn sign that tricks self-driving cars into seeing something that isn't there and applying the wrong behavioral pattern because of it. Or, even simpler, someone taping a sticker over a self-driving car's cameras that causes erratic behavior. That could have really bad consequences and seems really simple to do.