In 2018, road traffic deaths were at about 37k in America.<p>If a company invented a self-driving car that kills 1,000 people a year, it would never be allowed on the street. Even 100 a year seems high. But it would actually save tens of thousands of lives.<p>Why are we so much stricter on self-driving car technology than on human drivers? Why can't we simply choose the option that saves more lives?
I guess it's about liability and reproducibility.<p>When someone does harm with their car, they hold the civil (and sometimes criminal) responsibility for their own acts and <i>usually</i> cannot do harm more than once in a very short timeframe.<p>In the case of an autonomous AI, the company making/coding the vehicle software will be liable, and the same problem is very likely to show up again within a short time period.<p>That puts this kind of company technically on the verge of bankruptcy, because they are a prime target for class-action lawsuits.
Same reason we're afraid of flying, I guess.<p>Part of the reason is probably that we're still being given captchas asking us to identify traffic lights, buses and zebra crossings.<p>It could also be that it kills people because of one specific bug. A car might be blind to, say, someone wearing a black and white shirt, or maybe to a green car, and 95% of the deaths could come from that.
You cannot tell whether a self-driving car kills fewer people than human drivers, because to get the same statistical significance we have for human driving you'd need 84 billion hours of data from real autonomous driving within a year, unbiased across all of the roads of the US.
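A rough Python sketch of that kind of sample-size argument, using a Poisson model; the baseline fatality rate, effect size and average speed below are illustrative assumptions, not the figures behind the 84-billion-hour estimate:<p>

    # How much autonomous driving would it take to show, statistically,
    # that a fleet beats the human fatality rate? (assumed numbers)
    import math
    from statistics import NormalDist

    HUMAN_RATE = 1.1e-8    # assumed US baseline: ~1.1 deaths per 100 million miles
    AVG_SPEED_MPH = 30     # assumed average speed, to convert miles into hours

    def miles_for_zero_fatalities(rate, confidence=0.95):
        # Miles the fleet must log with ZERO fatalities before a rate >= `rate`
        # can be rejected: P(0 deaths) = exp(-rate * miles) <= 1 - confidence.
        return -math.log(1 - confidence) / rate

    def miles_to_detect_improvement(base_rate, improvement,
                                    confidence=0.95, power=0.80):
        # Normal-approximation exposure needed to show the fleet's rate is
        # `improvement` (e.g. 0.2 = 20%) below the baseline, at the given power.
        z_a = NormalDist().inv_cdf(confidence)
        z_b = NormalDist().inv_cdf(power)
        new_rate = base_rate * (1 - improvement)
        return ((z_a * math.sqrt(base_rate) + z_b * math.sqrt(new_rate)) ** 2
                / (base_rate - new_rate) ** 2)

    m0 = miles_for_zero_fatalities(HUMAN_RATE)
    m1 = miles_to_detect_improvement(HUMAN_RATE, 0.20)
    print(f"match the human rate (zero fatalities): ~{m0/1e6:.0f}M miles, "
          f"~{m0/AVG_SPEED_MPH/1e6:.0f}M hours")
    print(f"demonstrate a 20% improvement:          ~{m1/1e9:.0f}B miles, "
          f"~{m1/AVG_SPEED_MPH/1e6:.0f}M hours")

<p>Even under these favorable assumptions the answer comes out in the hundreds of millions to billions of miles, so whatever the exact hour count, the order of magnitude of exposure required is far beyond what any current fleet accumulates.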