
Ask HN: Anyone successfully verify fawkes' claims of protection against AI?

1 point by mel_llaguno about 4 years ago
I was working on a web frontend for fawkes (https://pypi.org/project/fawkes/), based on the research outlined here (https://github.com/Shawn-Shan/fawkes), which claims to be able to use image-level perturbations to protect against AI image recognition systems. The paper makes the following claim:

"Our cloaking technique is not designed to fool a tracker who uses similarity matching. However, we believe our cloaking technique should still be effective against Amazon Rekognition, since cloaks create a feature space separation between original and cloaked images that should result in low similarity scores between them."

I was in the process of verifying this when I found that _no such guarantee_ can be made using fawkes. I documented my experiment here: https://github.com/Shawn-Shan/fawkes/issues/125.

Using Azure Face and AWS Rekognition, perturbed images, regardless of the settings, resulted in high degrees of similarity. In addition, I ran a separate test comparing a perturbed image against a different image of the same subject, which _still resulted in a high degree of similarity_.

While the README claims "Python scripts that can test the protection effectiveness will be ready shortly.", no such scripts have been provided.

According to the fawkes website, the application has been downloaded over 300k times. If the protection it claims to provide is ineffective, then there are _a lot of people_ out there with a misguided sense of being protected. This situation is further complicated by the apparent legitimacy the tool gains from its presentation at Usenix 2020 and its inclusion on PyPI.

My question is this: has anyone actually validated the efficacy of this algorithm, with reproducible results to share? Or is fawkes yet another example of AI snake oil?
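For anyone wanting to reproduce the kind of check described above, here is a minimal sketch using AWS Rekognition's CompareFaces API via boto3. The file paths and threshold are illustrative, and actually calling AWS requires an account with configured credentials; only the response-parsing helper runs without them.

```python
def max_similarity(response):
    """Highest similarity score (0-100) reported by CompareFaces, or 0.0 if no match."""
    return max((m["Similarity"] for m in response.get("FaceMatches", [])), default=0.0)


def compare_images(original_path, cloaked_path, threshold=0.0):
    """Compare two local image files with AWS Rekognition and return the best similarity score."""
    # Deferred import: boto3 is only needed when actually calling AWS.
    import boto3

    client = boto3.client("rekognition")
    with open(original_path, "rb") as orig, open(cloaked_path, "rb") as cloaked:
        response = client.compare_faces(
            SourceImage={"Bytes": orig.read()},
            TargetImage={"Bytes": cloaked.read()},
            SimilarityThreshold=threshold,  # 0.0 so even weak matches are reported
        )
    return max_similarity(response)
```

If the cloaking claim held, `compare_images("original.jpg", "cloaked.jpg")` should come back low; the GitHub issue linked above reports scores staying high.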

1 comment

compressedgas about 4 years ago
I think generative adversarial networks make the method ineffective. During the training of a GAN, the same kinds of distortions are applied to its training data to force it to generalize.
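The commenter's point amounts to ordinary noise augmentation: if a recognizer sees small random pixel perturbations during training, cloaks of similar magnitude are less likely to move its features. A minimal NumPy illustration of such an augmentation step (the `noise_scale` value is an arbitrary choice, not anything from the fawkes paper):

```python
import numpy as np


def augment(image, noise_scale=4.0, rng=None):
    """Add small Gaussian pixel noise to a uint8 image, as training-time augmentation might."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.astype(np.float64) + rng.normal(0.0, noise_scale, image.shape)
    # Clip back into the valid pixel range and restore the original dtype.
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

A model trained on many such randomly perturbed copies learns features that are stable under exactly the kind of small pixel-level changes a cloak introduces.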