Last week, I wrote a blog post (https://blog.openai.com/robust-adversarial-inputs/) about how it's possible to synthesize really robust adversarial inputs for neural networks. The response was great, and I got several requests to write a tutorial on the subject, because what was already out there wasn't all that accessible. This post, written in the form of an executable Jupyter notebook, is that tutorial!

Security/ML is a fairly new area of research, but I think it's going to be pretty important in the next few years. There's even a very timely Kaggle competition about this (https://www.kaggle.com/c/nips-2017-defense-against-adversarial-attack), run by Google Brain. I hope this blog post helps make this really neat area of research a bit more approachable! Also, the attacks don't require much compute power, so you should be able to run the code from the post on your laptop.
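To give a feel for the basic building block, here's a minimal sketch of the fast gradient sign method (FGSM) in PyTorch. This isn't the post's exact code; the model choice, epsilon, and random input below are just illustrative assumptions.

    # Minimal FGSM sketch (assumes PyTorch + torchvision are installed).
    import torch
    import torchvision.models as models

    model = models.resnet18(pretrained=True).eval()

    def fgsm(image, label, epsilon=0.01):
        # Take one step of size epsilon in the direction of the sign of
        # the loss gradient with respect to the input pixels.
        image = image.clone().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(image), label)
        loss.backward()
        adv = image + epsilon * image.grad.sign()
        return adv.clamp(0, 1).detach()

    # Illustrative input; swap in a real preprocessed image in practice.
    x = torch.rand(1, 3, 224, 224)
    y = model(x).argmax(dim=1)
    x_adv = fgsm(x, y)
    print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())

The robust examples from the post additionally optimize over a distribution of transformations, but the core idea of perturbing the input along the loss gradient is the same.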
> This adversarial image is visually indistinguishable from the original, with no visual artifacts. However, it's classified as "guacamole" with high probability!

May "guacamole" become as prominent as "Alice and Bob".
This is delightful. As someone who uses AI/ML/MI/... for security, I find there is not nearly enough understanding of how attackers can subvert decision systems in practice.

Keep up the good work!
I have the feeling that the fact that imperceptible perturbations change the labels means our networks/models don't yet look at the "right" parts of the input data.

Hopefully, this means research will focus on more robust classifiers, based on weaknesses identified by adversarial approaches!
I was really inspired by this paper at USENIX [1]. This looks like very *very* early research, but the outline it provides leaves lots of room for adversarial ML research.

Bonus: if you tackle this problem, you get several semi-orthogonal technologies for "free".

[1] https://www.usenix.org/system/files/conference/cset16/cset16_paper-kaufman.pdf