Last week, I wrote a blog post (https://blog.openai.com/robust-adversarial-inputs/) about how it's possible to synthesize really robust adversarial inputs for neural networks. The response was great, and I got several requests to write a tutorial on the subject because what was already out there wasn't all that accessible. This post, written in the form of an executable Jupyter notebook, is that tutorial!

Security/ML is a fairly new area of research, but I think it's going to be pretty important in the next few years. There's even a very timely Kaggle competition about this (https://www.kaggle.com/c/nips-2017-defense-against-adversarial-attack) run by Google Brain. I hope that this blog post will help make this really neat area of research a bit more approachable! Also, the attacks don't require much compute power, so you should be able to run the code from the post on your laptop.
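To give a taste of what these attacks look like, here's a minimal sketch of the simplest one, the fast gradient sign method, written in PyTorch. This isn't the code from the post, and `model`, `x`, `y`, and `epsilon` are placeholder names for a pretrained classifier, an input image batch, its labels, and the perturbation size:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.01):
        # Perturb x in the direction that most increases the loss,
        # i.e. the sign of the gradient of the loss w.r.t. the input.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        # Keep pixel values in the valid [0, 1] range.
        return x_adv.clamp(0, 1).detach()

More robust attacks build on the same basic idea: optimize the input, rather than the weights, to push the loss in the direction you want.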