A Guide to Synthesizing Adversarial Examples

123 points by anishathalye almost 8 years ago

8 comments

anishathalye almost 8 years ago
Last week, I wrote a blog post (https://blog.openai.com/robust-adversarial-inputs/) about how it's possible to synthesize really robust adversarial inputs for neural networks. The response was great, and I got several requests to write a tutorial on the subject because what was already out there wasn't all that accessible. This post, written in the form of an executable Jupyter notebook, is that tutorial!

Security/ML is a fairly new area of research, but I think it's going to be pretty important in the next few years. There's even a very timely Kaggle competition about this (https://www.kaggle.com/c/nips-2017-defense-against-adversarial-attack) run by Google Brain. I hope that this blog post will help make this really neat area of research slightly more approachable/accessible! Also, the attacks don't require that much compute power, so you should be able to run the code from the post on your laptop.
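For readers who want a feel for the technique before opening the notebook, here is a minimal sketch of a one-step, FGSM-style targeted perturbation in PyTorch. It is not the notebook's code: model, x, and target_label are placeholders for a trained classifier, an input image tensor with pixels in [0, 1], and the class you want the image misclassified as.

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, target_label, eps=0.01):
    """One gradient step that nudges image x toward being classified as target_label."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), torch.tensor([target_label]))
    loss.backward()
    # Step down the target-class loss, keep the change small, and keep pixels valid.
    return (x_adv - eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

As I understand the post, its approach iterates steps like this under a small-perturbation constraint, and the robust examples additionally average the loss over random transformations of the input.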
0xdeadbeefbabe almost 8 years ago
> This adversarial image is visually indistinguishable from the original, with no visual artifacts. However, it's classified as "guacamole" with high probability!

May "guacamole" become as prominent as "Alice and Bob".
dropalltables almost 8 years ago
This is delightful. As someone who uses AI/ML/MI/... for security, I find there is not nearly enough understanding of how attackers can subvert decision systems in practice.

Keep up the good work!
jwatte almost 8 years ago
I have the feeling that the fact that imperceptible perturbations change the labels means that our networks/models don't yet look at the "right" parts of the input data.

Hopefully, this means research will focus on more robust classifiers based on weaknesses identified by adversarial approaches!
lacksconfidence almost 8 years ago
Is the next step generating adversarial examples and injecting them into the training pipeline?
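That question describes adversarial training. A rough sketch of what one such training step could look like in PyTorch, assuming a model, an optimizer, and a labelled batch (x, y) with pixels in [0, 1]; the step size eps and the equal clean/adversarial weighting are arbitrary choices for illustration:

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    """Train on the clean batch plus FGSM-perturbed copies generated on the fly."""
    # Craft untargeted adversarial examples against the current model.
    x_req = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_req), y).backward()
    x_adv = (x + eps * x_req.grad.sign()).clamp(0.0, 1.0).detach()

    # Optimize on both the clean and the adversarial inputs.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```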
bane almost 8 years ago
I was really inspired by this paper at USENIX [1]. This looks like very *very* early research, but the outline it provides leaves lots of room for adversarial ML research.

Bonus: if you tackle this problem you get several semi-orthogonal technologies for "free".

[1] https://www.usenix.org/system/files/conference/cset16/cset16_paper-kaufman.pdf
yters almost 8 years ago
If it is so easy to fool deep learning, why is it so hyped? Seems a great security risk.
jcims almost 8 years ago
This seems like a direct argument against camera-only systems for autonomous vehicles.