These guys had to work really, really hard to find a way to fool the machine vision of recent self-driving software!<p>My take: Maybe the machine vision of recent self-driving software has become harder to fool than human vision? Human vision is <i>remarkably easy to fool</i>. See <a href="https://www.ritsumei.ac.jp/~akitaoka/index-e.html" rel="nofollow">https://www.ritsumei.ac.jp/~akitaoka/index-e.html</a> and <a href="https://en.wikipedia.org/wiki/Optical_illusion" rel="nofollow">https://en.wikipedia.org/wiki/Optical_illusion</a> for example.<p>Luckily for all of us, there are no smart people laboring all day long in a lab trying to find new ways to fool human drivers into doing dumb, dangerous things on the road. Human drivers already do that on their own -- no tricks or illusions are necessary.
Interesting, but wouldn't it be less work to cover/remove/deface the sign if you're trying to do something like this? Planting a sneaky black box with an LED would seem super suspicious.
As long as cars gracefully handle weird sights (like a stop sign in the middle of an 80 mph highway) it should be fine.<p>When I was 1 year old my dad was driving on the highway, got distracted, and ended up on a stretch that was under construction. He was going full speed when he saw a barrier, late enough that he swung the steering wheel to avoid it, drove on two wheels for a moment, and then the car fell back down and stopped. No baby seat at the time; my grandma was holding me in the back (the good old days when cars were lighter and less safe).<p>Random things can happen on the road, including wrong signage. As long as the car can assess the current state and gracefully transition to a safe stop, it should be fine to throw random signage at it.
Somewhat confusing because it seems like they didn't test it against any actual self-driving car stack, just some kind of toy version?<p>Who knows what context besides the actual sign itself is used to determine where to stop, for example. Maybe the line on the road, maybe map data, maybe the pole or the appearance of the intersection itself. Maybe the vision system is resilient to this attack in some other way. Maybe the system detects this state and has a fail-safe behavior.<p>Anyway, interesting to research, but unclear how it affects production systems.
It’s good to know the limits of AI systems. However, this doesn’t mean we shouldn’t develop self-driving. Infrastructure is vulnerable, AI or no. You can attack the infrastructure; there is no counter other than a populace that mostly obeys the rules. People can go out and remove road signs, paint fake lines on the highway, or cover up stop signs with plastic bags. Those would be crimes. Intentional interference with a self-driving car would also be a crime. A certain amount of trust is required to make society work.
<i>Six boffins mostly hailing from Singapore-based universities have proven it's possible to interfere with autonomous vehicles by exploiting their reliance on camera-based computer vision and cause them to not recognize road signs.</i><p>How do non-camera based systems (lidar etc) get road sign information? I would expect with cameras...?
The only difference between this and <a href="https://xkcd.com/1958/" rel="nofollow">https://xkcd.com/1958/</a> is that this attack confuses cars from certain manufacturers but not human drivers, and I'm not sure that that distinction is important.<p>Is this attack actually in anybody's threat model?
Perhaps sign recognition should be lossy-tolerant multifactor for redundancy: size, shape, orientation, GNSS position against GIS map data, color, text font, etc.
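To make the idea concrete, here is a minimal sketch of that kind of multifactor fusion. Everything here is an illustrative assumption (the channel names, weights, and threshold are invented for the example, not taken from any real vehicle stack); the point is just that when several independent signals vote, corrupting one channel with an LED trick is not enough to flip the decision.

```python
# Hypothetical sketch: fuse independent, individually lossy signals into one
# sign-recognition decision. All field names, weights, and thresholds are
# illustrative assumptions, not any vendor's actual pipeline.

from dataclasses import dataclass


@dataclass
class SignEvidence:
    # Each field is an independent confidence in [0, 1] that this is a stop sign.
    shape_octagon: float      # silhouette/geometry from vision
    color_red: float          # dominant color
    text_matches: float       # OCR of "STOP" in the expected font
    map_expects_sign: float   # GNSS position checked against GIS map data
    pole_mounting: float      # plausible height/orientation on a pole


def fuse(e: SignEvidence, threshold: float = 0.6) -> bool:
    """Weighted vote across channels; no single channel decides alone."""
    weights = {
        "shape_octagon": 0.25,
        "color_red": 0.15,
        "text_matches": 0.20,
        "map_expects_sign": 0.25,
        "pole_mounting": 0.15,
    }
    score = sum(weights[k] * getattr(e, k) for k in weights)
    return score >= threshold


# An attack that only corrupts the text channel leaves the rest intact:
attacked = SignEvidence(shape_octagon=0.9, color_red=0.9, text_matches=0.1,
                        map_expects_sign=0.95, pole_mounting=0.9)
print(fuse(attacked))  # True: still recognized despite the corrupted channel
```

With weights like these, an attacker would have to defeat several physically distinct channels at once (vision geometry, map priors, mounting context), which is much harder than flickering an LED at a camera.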
The future is hilarious.<p>- my car is haunted by visions<p>- my computer needs a pep talk to generate some work<p>- my other computer gets upset when I'm wearing sunglasses because it doesn't recognize me