In software testing, there's positive testing, which checks that the software performs its desired behavior, and negative testing, which checks that the software avoids undesired behaviors.

We know that self-driving cars get many thousands of hours of positive testing. What I'm wondering is how much adversarial testing is done.

For example, to test that the car isn't misled by old lane markers when a road is under construction, wouldn't you at least run millions of simulated tests with randomly placed painted lines overlaid on the virtual road?

And to test that the car doesn't run into stopped emergency vehicles or tractor-trailers crossing its path, wouldn't you at least run thousands of tests on a road containing stopped vehicles?

Not to mention the case where someone accidentally guns the accelerator while approaching a sharp curve. Shouldn't the car know it's approaching the curve and issue a warning?

Does anyone apply the driving equivalent of fuzzing to self-driving cars?
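To make the lane-marking example concrete, here's a minimal sketch of what such a scenario fuzzer could look like. Everything here is hypothetical: run_scenario stands in for whatever hook a real simulator (CARLA, an in-house tool, etc.) would expose to place the markings, drive the car, and report its lateral deviation, and the parameter ranges and 1 m failure threshold are made-up illustrations, not anything a vendor actually uses.

    import random
    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class PaintedLine:
        """A stray 'old' lane marking overlaid on the virtual road."""
        start_m: float      # distance along the road where the line begins
        offset_m: float     # lateral offset from the true lane center
        heading_deg: float  # angle relative to the true lane direction
        length_m: float

    def random_stale_marking(rng: random.Random) -> PaintedLine:
        """Generate one randomly placed stale lane marking."""
        return PaintedLine(
            start_m=rng.uniform(0.0, 500.0),
            offset_m=rng.uniform(-4.0, 4.0),
            heading_deg=rng.uniform(-15.0, 15.0),
            length_m=rng.uniform(5.0, 100.0),
        )

    def fuzz_lane_keeping(
        run_scenario: Callable[[List[PaintedLine]], float],
        trials: int = 1_000_000,
        seed: int = 0,
    ) -> List[Tuple[int, List[PaintedLine], float]]:
        """Fuzzing loop. run_scenario(markings) is assumed to drive the
        simulated car down the road and return its maximum deviation
        from the true lane center, in meters."""
        rng = random.Random(seed)
        failures = []
        for trial in range(trials):
            markings = [random_stale_marking(rng)
                        for _ in range(rng.randint(1, 20))]
            deviation = run_scenario(markings)
            if deviation > 1.0:  # car left its lane: record inputs for replay
                failures.append((trial, markings, deviation))
        return failures

    # Wired to a trivial stand-in instead of a real simulator:
    failures = fuzz_lane_keeping(lambda markings: 0.0, trials=100)
    print(len(failures))

The point would be the same as with any fuzzer: fix the seed and log the exact inputs for every failure, so each scenario that fools the lane keeper can be replayed deterministically and kept as a regression test.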