
科技回声 (Tech Echo)

A tech news platform built with Next.js, providing global technology news and discussion.


A simple JPEG defense for the attack from OpenAI authors

2 points | by VinayUPrabhu | almost 8 years ago
One thing I do when a new attack is published is to check whether the 'JPEG' defense works: that is, does the JPEG-compressed version of the adversarial image retain its adversarial threat? It turns out the attack from the OpenAI authors does not pass this JPEG defense test.

Please note:

1. There have been a couple of studies of the effect of JPEG compression on adversarial images. See:
https://arxiv.org/pdf/1705.02900.pdf
https://arxiv.org/pdf/1608.00853.pdf

2. This is NOT a 'Voila - busted!' dissemination. The most straightforward counter-attack is to include JPEG compression as part of the transformation set (T) in the paper. That said, the defender only has to concoct a custom transformation that is not covered in T. For example, I also found that a scanned paper printout of the image did not retain its adversarial threat (posted on the original blog).

3. This is a work in progress. GitHub link: https://github.com/vinayprabhu/Jpeg_Defense
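The test described above amounts to round-tripping the adversarial image through JPEG compression before handing it to the classifier. A minimal sketch of that round-trip, assuming Pillow and NumPy are available (the function name and quality setting here are illustrative, not from the author's repo):

```python
import io

import numpy as np
from PIL import Image


def jpeg_defense(image: np.ndarray, quality: int = 75) -> np.ndarray:
    """Round-trip a uint8 HxWx3 image array through JPEG compression.

    JPEG's lossy quantization tends to wash out the small, carefully
    tuned perturbation that makes an image adversarial, so classifying
    the round-tripped image is a cheap sanity check on a new attack.
    """
    buf = io.BytesIO()
    Image.fromarray(image).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))


# Hypothetical usage: compare the model's label on the raw adversarial
# image with its label on the compressed version.
#   label_adv   = model.predict(adversarial_image)
#   label_jpeg  = model.predict(jpeg_defense(adversarial_image))
```

As point 2 notes, this is easily countered by adding JPEG compression to the transformation set T during attack optimization, so it is a diagnostic rather than a robust defense.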

No comments yet.
