Uber should get charged with manslaughter. I don't say that lightly. This isn't an example of programmer error, where an alpha-version self-driving system failed. That would've been unfortunate, but not grossly negligent. Here, Uber put a "driver" in control of the car that <i>by design</i> wasn't able to avoid collisions with pedestrians. According to the NTSB, Uber disabled <i>Uber's own</i> emergency braking system (not just the built-in Volvo one) “to reduce potential for erratic behavior.”[1] That rises to gross, criminal negligence.<p>It is no defense to say that Uber also told a human driver to be present. If Uber had directed a five-year-old to drive the SUV, with a back-up driver ready to take control, I think everyone would agree that qualifies as gross negligence notwithstanding the presence of the back-up driver. (To be clear, I think some risk in the name of progress is acceptable. But it's one thing to use testing on public roads to work out the kinks in the software. It's another to put vehicles on the road that <i>you know</i> cannot perform the basic, essential functions of driving.)<p>[1] <a href="http://www.latimes.com/business/autos/la-fi-uber-arizona-ntsb-20180524-story.html" rel="nofollow">http://www.latimes.com/business/autos/la-fi-uber-arizona-nts...</a> ("However, Uber also disabled its own emergency braking function whenever the test car was under driverless computer control, 'to reduce potential for erratic behavior.'").
> The Tempe police report said the crash was "entirely avoidable" if the Uber operator, Rafaela Vasquez, had been watching the road while the car was operating autonomously.<p>And thus should end the bewildering “but you couldn’t see her until the last minute” argument from people who don’t understand how bad cameras are in low light.
I'd suspect that it's harder to take over and avoid a collision in a self-driving car than in a car that you're continuously in control of. You first have to recognize that the system is failing or about to fail, and that has to happen well in advance for you to take appropriate action. It doesn't seem like a reliable failover procedure, even if the person behind the wheel is paying attention.
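To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The speed, takeover times, and deceleration are assumptions I picked purely for illustration, not figures from the investigation:<p><pre><code># Back-of-the-envelope sketch: how far the car travels while a backup
# driver notices a failure and takes over. All numbers below are my own
# illustrative assumptions, not figures from the NTSB report.

MPH_TO_MPS = 0.44704

speed_mph = 40.0           # assumed test speed
notice_failure_s = 2.0     # assumed time to realize the system is failing
retake_control_s = 1.5     # assumed time to get hands/feet back and start braking
decel_mps2 = 7.0           # assumed hard-braking deceleration (~0.7 g)

v = speed_mph * MPH_TO_MPS
takeover_distance = v * (notice_failure_s + retake_control_s)
braking_distance = v ** 2 / (2 * decel_mps2)

print(f"covered before braking starts: {takeover_distance:.0f} m")
print(f"braking distance after takeover: {braking_distance:.0f} m")
print(f"total stopping distance: {takeover_distance + braking_distance:.0f} m")
</code></pre><p>Even with those fairly generous assumptions the car covers roughly 80-90 m before it stops, and most of that is the takeover latency, not the braking.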
A related BBC article[1] states:<p><i>"A toxicology test carried out on Ms Herzberg after the accident returned positive results for methamphetamine and marijuana.<p>She did not look before crossing the road in a poorly lit area and was wearing dark clothes, the NTSB report says. And the reflectors and lights on her bike were at right-angles to the Uber car's path."</i><p>Even though the self-driving software failed to recognise her, and without excusing the Uber driver's lack of attention, you cannot rule out that the pedestrian would still have been hit under the same circumstances by a human driving a conventional car.<p>Although this is a sad event, the pedestrian does carry a certain amount of blame here. It also shows that the biggest blocker to effective self-driving vehicles is people, not technology.<p>---<p>[1] <a href="https://www.bbc.co.uk/news/technology-44243118" rel="nofollow">https://www.bbc.co.uk/news/technology-44243118</a>
Contrary to many here, I feel that it's perfectly reasonable to have a backup human driver <i>if they are trained to act accordingly</i>.<p>The Japanese pointing-and-calling technique comes to mind as a good example of keeping drivers engaged: they would have to continuously, actively point at dangers and at the car's appropriate response.<p><a href="https://www.youtube.com/watch?v=9LmdUz3rOQU" rel="nofollow">https://www.youtube.com/watch?v=9LmdUz3rOQU</a> (quite fascinating to watch)<p>Combine this with short sessions (not driving around for hours with nothing to do), and I think the driver would have had a reasonable chance of preventing this accident.
> The Tempe police report said the crash was "entirely avoidable" if the Uber operator, Rafaela Vasquez, had been watching the road while the car was operating autonomously.<p>Absolutely no surprises there. Everyone who has ever taken a picture in the dark should have known this after seeing the footage from the car.
Proposed solution: it should be an offence for the manufacturer to describe a car as "self-driving" or "autonomous" if it is not capable of doing so <i>entirely</i> by itself. Systems which rely on the car driving 99% of the time and then throwing up its hands in order to make the human responsible for the crash are a ridiculous abdication of responsibility.<p>This system would have to be described as "driver assist".
Any vehicle that is not at the highest level of autonomy (i.e. one with no steering wheel, or no need for one) should have systems in place to verify that a human driver is alert, in charge of the vehicle, and able to respond immediately: eye-focus cameras, steering wheel sensors, confirmation prompts, etc.<p>Is there a valid reason this shouldn't be law?
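The software side of that doesn't have to be exotic. Here's a minimal, purely hypothetical sketch in Python of how those signals (eye focus, wheel contact, confirmation prompts) might feed a watchdog; the signal names and thresholds are my own assumptions, not any real vendor's driver-monitoring API:<p><pre><code># Hypothetical driver-attention watchdog. Signal names and thresholds
# are illustrative assumptions only.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DriverState:
    eyes_on_road: bool        # e.g. from an eye-tracking camera
    hands_on_wheel: bool      # e.g. from steering-wheel torque/capacitive sensors
    prompt_ack_age_s: float   # seconds since the last confirmation prompt was answered

MAX_DISENGAGED_S = 3.0        # assumed tolerance before escalating
MAX_PROMPT_AGE_S = 60.0       # assumed confirmation-prompt interval

def check_attention(state: DriverState,
                    disengaged_since: Optional[float],
                    now: float) -> Tuple[Optional[float], str]:
    """One monitoring tick: returns (disengaged_since, action)."""
    attentive = (state.eyes_on_road and state.hands_on_wheel
                 and state.prompt_ack_age_s < MAX_PROMPT_AGE_S)
    if attentive:
        return None, "ok"
    start = now if disengaged_since is None else disengaged_since
    if now - start >= MAX_DISENGAGED_S:
        return start, "escalate: audible alarm, then hand off to a controlled stop"
    return start, "warn: chime and dashboard alert"

# Example tick: driver looking away with hands on the wheel.
state = DriverState(eyes_on_road=False, hands_on_wheel=True, prompt_ack_age_s=10.0)
print(check_attention(state, disengaged_since=None, now=0.0))
</code></pre><p>The hard part isn't the check itself, it's making the escalation path (warn, alarm, controlled stop) something drivers can't just tune out.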
Another facet of this is how quickly people adopt new technologies, whether they're proven to be safe (or good) or not. Personally, I've come to distrust technology, and to distrust the people behind most of the technology being produced today.<p>In contrast, this woman was so quickly at ease (I wonder what she was told beforehand, during her training) that she felt comfortable enough to watch TV. I also wonder, as this kind of tech progresses, how it will be sold to the public. Perhaps the same way: "we take your security seriously"...
Paying a "safety driver" to sit in the car seems like a small price to pay if it means this (minimum wage?) person takes the manslaughter charge instead of Uber.
Can we put this issue to bed once and for all? Humans are not sufficiently equipped to act as a 'backup driver' in emergency situations and any system which relies on such a thing for safety is inherently unsafe.<p>Doesn't matter if you glue our hands to the steering wheel and hold our eyes open, if we're not doing anything 99% of the time we won't be ready to react with split second timing to recover from some failure.
The NTSB preliminary report directly contradicts this article.<p><a href="https://www.ntsb.gov/news/press-releases/Pages/NR20180524.aspx" rel="nofollow">https://www.ntsb.gov/news/press-releases/Pages/NR20180524.as...</a><p>On-scene police reports are often unreliable. This is why the NTSB does not speculate before the investigation is completed.
I had an Uber driver who was watching an extremely graphic and violent movie on a phone mounted directly in his field of view. I had him drop me off early and reported it to Uber. Talking to friends, it's apparently becoming more and more common.
This incident reminds me of a post Nicholas Carr wrote. We are offloading critical activities to automation, but at critical moments human expertise is still needed to resolve dangerous situations. The driver put too much confidence in the automation and watched a TV show on her phone.
So they'll drop it on the driver?! The whole setup was an accident waiting to happen. Uber executives should be held responsible, or this will happen again and again.
Why manslaughter though? Wasn't the victim the real offender, crossing the road at an unmarked spot without paying attention to oncoming traffic? It's surely a safety failure of the self-driving tech, but the offense came from the victim. Especially given that some human drivers can't avoid similar situations either (see some videos of Chinese driving)...