Hi HN! I'm one of the researchers who produced this result: we figured out how to make 3D adversarial objects (currently fabricated with full-color 3D printing) that consistently fool neural networks in the physical world. Basically, we have an algorithm that can take any 3D object and perturb it so that a classifier sees it as any given target class.<p>Recently, there's been some debate about whether adversarial examples are a problem in the real world, and our research shows that they are, at least with current neural network architectures (nobody has managed to solve the problem of white-box adversarial examples yet).<p>I'm happy to answer any questions!
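<p>If it helps make the idea concrete, here's a rough toy sketch of the core optimization in PyTorch. This is not our actual pipeline (which optimizes a texture through a 3D renderer): it perturbs a 2D image as a stand-in, the function names and hyperparameters are illustrative, and the transform distribution here is just a random rotation. The key idea it shows is averaging the loss over sampled transformations so the perturbation survives changes in viewpoint rather than working from a single angle:

    import random
    import torch
    import torch.nn.functional as F
    import torchvision.models as models
    import torchvision.transforms.functional as TF

    model = models.resnet50(pretrained=True).eval()

    def predict(x):
        # torchvision's ImageNet models expect normalized inputs
        mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
        std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
        return model((x - mean) / std)

    def random_transform(x):
        # stand-in for the distribution over viewpoints; the real
        # pipeline samples poses, lighting, camera noise, etc.
        return TF.rotate(x, random.uniform(-15.0, 15.0))

    def targeted_attack(x, target, steps=200, lr=1e-2, eps=8 / 255, samples=10):
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            # average the target-class loss over sampled transforms so
            # the perturbation fools the model from many viewpoints
            loss = sum(
                F.cross_entropy(
                    predict(random_transform((x + delta).clamp(0, 1))), target
                )
                for _ in range(samples)
            ) / samples
            opt.zero_grad()
            loss.backward()
            opt.step()
            delta.data.clamp_(-eps, eps)  # keep the perturbation small
        return (x + delta).clamp(0, 1).detach()

    x = torch.rand(1, 3, 224, 224)  # placeholder image in [0, 1]
    target = torch.tensor([963])    # some target class index
    x_adv = targeted_attack(x, target)

Without the averaging over transforms, you get a classic adversarial example that breaks as soon as the camera moves; with it, the same perturbation keeps working across the sampled distribution, which is what makes physical-world attacks possible.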