
Adversarial.io – Fighting mass image recognition

249 points by petecooper over 4 years ago

22 comments

colincooke over 4 years ago
It's interesting that they only tackle a single model architecture (a pretty common one). It makes me think that this is likely an attack technique that uses knowledge of the model weights to mess up image recognition (if you know the weights, there are some really nice techniques that can find the minimum change necessary to mess up the classifier).

Pretty cool stuff, but if my assumption is correct, it means that if you _didn't_ use the widely available ImageNet weights for Inception v3, then this attack would be less effective (or not work at all). Given that most actors you don't want recognizing your images don't open-source their weights, this may not scale or be very helpful...
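For illustration, here is a minimal sketch of the kind of white-box, gradient-based attack described above (FGSM against a torchvision Inception v3 with ImageNet weights); the model choice, epsilon, and helper name are illustrative assumptions, not how adversarial.io actually works:

```python
# Minimal FGSM sketch: nudge each pixel in the direction that increases the
# classifier's loss, by an amount epsilon, using the model's own gradients.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.eval()

def fgsm_perturb(image, true_label, epsilon=0.01):
    # image: (1, 3, 299, 299) float tensor in [0, 1]; true_label: (1,) class index.
    # (ImageNet normalization is omitted here for brevity.)
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step along the sign of the gradient, then clamp back to a valid image.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```

Attacks of this kind lean directly on the published weights, which is why swapping in privately trained weights tends to blunt them.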
nickvincent over 4 years ago
There's a theme in this discussion that ML operators will just train new models on adversarially perturbed data. I don't think this is necessarily true at all!

The proliferation of tools like this and the "LowKey" paper/tool linked below (an awesome paper!) will fundamentally change the distribution of image data that exists. I think that widespread usage of this kind of tool should trend towards increasing the irreducible error of various computer vision tasks (in the same way that long-term adoption of mask wearing might change the maximum accuracy of facial recognition).

Critically, while right now the people who manipulate their images will probably be very privacy-conscious or tech-interested, tools like this seriously lower the barrier to entry. It's not hard to imagine a browser extension that helps you perturb all images you upload to a particular domain, or something similar.
JohnPDickerson over 4 years ago
Folks interested in this kind of work should check out an upcoming ICLR paper, "LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition", from Tom Goldstein's group at Maryland.

Similar pitch -- use a small adversarial perturbation to trick a classifier -- but LowKey is targeted at industry-grade black-box facial recognition systems, and also takes into account the "human perceptibility" of the perturbation used. Manages to fool both Amazon Rekognition and the Azure face recognition systems almost always.

Paper: https://arxiv.org/abs/2101.07922
sly010 over 4 years ago
Can't wait to read about Inception V4 being trained on adversarial.io for better noise resistance :)
car over 4 years ago
It’s really surprising to me how easily AI can be fooled. Maybe there is a fundamental difference between our visual system and what is represented in a visual recognition CNN. Could it be the complexity of billions of cells vs. the simplification of an AI, or something about the biology we haven’t yet accounted for?
dijksterhuis over 4 years ago
If folks are interested in this stuff, check out Fawkes:

https://sandlab.cs.uchicago.edu/fawkes/

https://github.com/Shawn-Shan/fawkes

http://people.cs.uchicago.edu/%7Eravenben/publications/pdf/fawkes-usenix20.pdf
nerdponx over 4 years ago
Switching the result from "tabby" to "catamount" is not nearly as "adversarial" as I expected. Is that really worth it?

Is the idea that it's useful if you're trying to stop targeted facial recognition of individual people?
loser777 over 4 years ago
What happens when the perturbed images are processed by some noise-removal method? On the crude end, even something like aggressive JPEG compression will tend to remove high-frequency noise. There's also more sophisticated work like Deep Image Prior [1], which can reconstruct images while discarding noise in a more "natural" way. Finally, on the most extreme end, what happens when someone hires an artist, or builds a sufficiently good robot artist, to create a photorealistic "painting" of the perturbed image?

There's a lot of work on compressing/denoising images so that only the human-salient parts are preserved, and without seeing this survive past that, I think it's better to interpret "adversarial" in the machine-learning sense only, where "adversarial" means useful for understanding how models work, but without any strong security implications.

[1] https://arxiv.org/abs/1711.10925
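For the crude end of that spectrum, a minimal sketch of squeezing out high-frequency perturbations by re-encoding as a low-quality JPEG (Pillow; the quality value is an arbitrary assumption):

```python
# Re-encode the image as a low-quality JPEG so that high-frequency adversarial
# noise is largely discarded before the image reaches a classifier.
import io
from PIL import Image

def jpeg_squeeze(path, quality=30):
    buf = io.BytesIO()
    Image.open(path).convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)  # degraded copy of the input image
```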
endisneigh over 4 years ago
Couldn't you easily infer the attacking noise by comparing the original and the changed images? Once you have the attacking noise, it would be pretty trivial to beat this, no?

I also don't see how this would do much against object recognition or face recognition. More insight into the types of recognition this actually fights against would be helpful.
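A minimal sketch of the comparison described above, assuming you have both the original and the perturbed file (the filenames are placeholders):

```python
# Recover the additive perturbation by subtracting the original from the modified image.
import numpy as np
from PIL import Image

original = np.asarray(Image.open("original.png"), dtype=np.int16)
perturbed = np.asarray(Image.open("perturbed.png"), dtype=np.int16)

noise = perturbed - original                      # the "attacking noise", per pixel and channel
print(np.abs(noise).max(), np.abs(noise).mean())  # how strong the perturbation is
```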
oneweekwonder over 4 years ago
This 2017 article, "Google's AI thinks this turtle looks like a gun" [0], made me realise that AI in the near future might need to take lethal action based on flawed data. But then I just comfort myself with the following quote:

"The AI does not love you, the AI does not hate you. But you are made out of atoms it can use for something else."

[0]: https://www.theverge.com/2017/11/2/16597276/google-ai-image-attacks-adversarial-turtle-rifle-3d-printed
telesilla over 4 years ago
I'd pay for API access to this; are there any plans for that?
mijail over 4 years ago
As a thought experiment this is cool, but from a practical perspective it's too focused on a specific architecture, and if anything, adding perturbations might (slightly) help the training process.

On the thought-experiment side, I think the moral implications cut both ways. Mass image recognition is not always bad -- think about content moderation or the transfer of images of abuse. As a society we want AI to flag these things.
ignoranceprior over 4 years ago
We're still in the phase where different models can play cat and mouse, but I wouldn't count on this lasting very long. Given that we know it's possible to correctly recognize these perturbed images (proof: humans can), it's only a matter of time until AI catches up and there's nothing you can do to prevent an image of your face from being identified immediately.
hivacruz over 4 years ago
Just tested this on some movie snapshots; it doesn't seem to do the trick for me on Google Images (and the noise is very noticeable).

Shame -- I thought I would be able to trick Google Images and stop giving away answers for my movie quiz game that easily.

The only method that works at all as an anti-cheat measure is to mirror the image horizontally; it fools Google Images a lot of the time.
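For reference, the mirroring trick mentioned above is a one-liner with Pillow (filenames are placeholders):

```python
# Mirror an image horizontally, which often defeats naive reverse image search.
from PIL import Image, ImageOps

ImageOps.mirror(Image.open("still.jpg")).save("still_flipped.jpg")
```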
bspammer over 4 years ago
I love how you can almost see a lynx in the attacking noise. I'd be interested to know if that's my brain spotting a pattern that isn't there, or if that's genuinely just the mechanism for the disruption.
Chris2048 over 4 years ago
I'm skeptical -- what happens when NNs are no longer susceptible to simple adversarial examples, or when the examples take proportionally more power to compute?

I'd sooner spend the effort on legal challenges.
djabatt over 4 years ago
Absolutely the coolest project I read about this year. It will be an arms race between hiding and finding. I went through this with web and email spam.
zaik over 4 years ago
Given that the most likely input is privacy sensitive, I would prefer a small CLI tool over uploading files to some server.
puttycat over 4 years ago
Great idea. Please consider distributing this as an open-source, downloadable app to avoid privacy concerns.
CivBase over 4 years ago
Begun, the AI wars have.
kaoD over 4 years ago
Why does this page request access to my VR devices?
forrestthewoods over 4 years ago
If my human eyes can identify a picture then, eventually, so too will algorithms. This is fundamentally a dead-end concept.

> it works best with 299 x 299px images that depict one specific object.

Wow. How incredibly useful.