A Guide to Synthesizing Adversarial Examples

123 points · by anishathalye · almost 8 years ago

8 comments

anishathalye · almost 8 years ago
Last week, I wrote a blog post (https://blog.openai.com/robust-adversarial-inputs/) about how it's possible to synthesize really robust adversarial inputs for neural networks. The response was great, and I got several requests to write a tutorial on the subject because what was already out there wasn't all that accessible. This post, written in the form of an executable Jupyter notebook, is that tutorial!

Security/ML is a fairly new area of research, but I think it's going to be pretty important in the next few years. There's even a very timely Kaggle competition about this (https://www.kaggle.com/c/nips-2017-defense-against-adversarial-attack) run by Google Brain. I hope that this blog post will help make this really neat area of research slightly more approachable/accessible! Also, the attacks don't require that much compute power, so you should be able to run the code from the post on your laptop.
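For readers who want a concrete picture of what the notebook walks through: the attack boils down to gradient descent on the input image, projected back into a small L-infinity ball so the perturbation stays imperceptible. The post itself uses TensorFlow; the sketch below is a minimal PyTorch analogue written purely for illustration, where `model` (any differentiable image classifier taking inputs in [0, 1]), `eps`, and the target class index are assumptions, not code from the post.

```python
import torch
import torch.nn.functional as F

def targeted_pgd(model, x, target_class, eps=8/255, step=1/255, iters=100):
    """Search the L-infinity ball of radius eps around image x (a (1, C, H, W)
    tensor with values in [0, 1]) for a point the model labels as target_class."""
    target = torch.tensor([target_class])
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - step * grad.sign()        # step toward the target class
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep it a valid image
    return x_adv.detach()
```

The projection step is what keeps the result visually indistinguishable from the original: the classifier's output can change completely even though no pixel moves by more than eps.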
0xdeadbeefbabe · almost 8 years ago
> This adversarial image is visually indistinguishable from the original, with no visual artifacts. However, it’s classified as "guacamole" with high probability!

May "guacamole" become as prominent as "Alice and Bob".
dropalltables · almost 8 years ago
This is delightful. As someone who uses AI/ML/MI/... for security, I find there is not nearly enough understanding of how attackers can subvert decision systems in practice.

Keep up the good work!
jwatte · almost 8 years ago
I have the feeling that the fact that imperceptible perturbations change the labels means that our networks/models don't yet look at the "right" parts of the input data.

Hopefully, this means research will focus on more robust classifiers based on weaknesses identified by adversarial approaches!
lacksconfidence · almost 8 years ago
Is the next step generating adversarial examples and injecting them into the training pipeline?
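That next step has a name: adversarial training, i.e. folding the attack into the training loop so the model is optimized against perturbed inputs rather than clean ones. A rough sketch of one training step, under the same assumptions as the PGD example above (a hypothetical `model`, `optimizer`, and labelled batch `(x, y)`; this is not code from the post):

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=8/255, step=2/255, iters=7):
    """One training step on adversarially perturbed inputs."""
    # Inner maximization: find perturbations within the eps-ball that raise the loss.
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay within the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)
    # Outer minimization: update the model on the adversarial batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv.detach()), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The main cost is the inner loop: each batch needs several extra forward/backward passes, so training gets roughly `iters` times more expensive.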
bane · almost 8 years ago
I was really inspired by this paper at USENIX [1]. This looks like very, very early research, but the outline it provides leaves lots of room for adversarial ML research.

Bonus: if you tackle this problem you get several semi-orthogonal technologies for "free".

1 - https://www.usenix.org/system/files/conference/cset16/cset16_paper-kaufman.pdf
yters · almost 8 years ago
If it is so easy to fool deep learning, why is it so hyped? Seems a great security risk.
jcims · almost 8 years ago
This seems like a direct argument against camera-only systems for autonomous vehicles.