
Ask HN: Anyone successfully verify fawkes' claims of protection against AI?

1 point by mel_llaguno about 4 years ago
I was working on a web frontend for fawkes (https://pypi.org/project/fawkes/) based on the research outlined here (https://github.com/Shawn-Shan/fawkes), which claims to be able to use image-level perturbations to protect against AI image recognition systems. From the paper, the following claim is made:

"Our cloaking technique is not designed to fool a tracker who uses similarity matching. However, we believe our cloaking technique should still be effective against Amazon Rekognition, since cloaks create a feature space separation between original and cloaked images that should result in low similarity scores between them."

I was in the process of verifying this when I found that _no such guarantee_ can be made using fawkes. I documented my experiment here: https://github.com/Shawn-Shan/fawkes/issues/125.

Using Azure Face and AWS Rekognition, perturbed images, regardless of the settings, resulted in high degrees of similarity. In addition, I did a separate test using a perturbed image and another image of the same subject, which _still resulted in a high degree of similarity_.

While the README claims "Python scripts that can test the protection effectiveness will be ready shortly.", no such scripts have been provided.

According to the fawkes website, their application has been downloaded over 300k times. If the protection they claim to provide is ineffective, then there are _a lot of people_ exposed out there with a misguided sense of being protected. This situation is further complicated by the apparent legitimacy the tool gains from its presentation at USENIX 2020 and its inclusion in PyPI.

My question is this: has anyone actually validated the efficacy of this algorithm, with reproducible results to share? Or is fawkes yet another example of AI snake oil?
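For context on what the quoted claim means: similarity matchers typically compare feature embeddings of two face images, often with cosine similarity, and a cloak is supposed to push the cloaked embedding far enough from the original's that the score drops. Here is a minimal sketch of that metric with made-up embedding vectors (these are illustrative numbers, not real fawkes or Rekognition output); the "cloaked" vector barely moves, which is the failure mode the experiment observed:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for illustration only.
original = [0.9, 0.1, 0.4, 0.2]
cloaked  = [0.88, 0.12, 0.41, 0.19]  # tiny shift in feature space
other    = [0.1, 0.9, 0.2, 0.7]      # a different subject

same = cosine_similarity(original, cloaked)  # stays near 1.0: still "matches"
diff = cosine_similarity(original, other)    # noticeably lower
```

If the cloak worked as claimed, the original-vs-cloaked score would look more like the different-subject score; the experiment found it did not.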

1 comment

compressedgas about 4 years ago
I think that generative adversarial networks make the method ineffective. During the training of a GAN, the same types of distortions are applied to its training data to force it to generalize.
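The mechanism the commenter describes is ordinary training-time data augmentation: if a model sees many jittered copies of each input during training, it learns features that survive small pixel-level perturbations of exactly the kind a cloak adds. A minimal sketch of such a jitter step (a generic illustration, not code from fawkes or any particular GAN):

```python
import random

def noise_augment(pixels, scale=0.05):
    """Return a copy of the input with small Gaussian jitter added to each value,
    a common augmentation that trains a model to ignore minor perturbations."""
    return [p + random.gauss(0.0, scale) for p in pixels]

random.seed(0)  # deterministic for the sake of the example
image = [0.2, 0.5, 0.9, 0.1]
jittered = noise_augment(image)
# The jittered copy stays close to the original, so a model trained on many
# such copies tends to map both to nearby points in feature space.
```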
Comment #26994869 not loaded.