NTSB:<p><i>• At 8 seconds prior to the crash, the Tesla was following a lead vehicle and was traveling about 65 mph.</i><p><i>• At 7 seconds prior to the crash, the Tesla began a left steering movement while following a lead vehicle.</i><p><i>• At 4 seconds prior to the crash, the Tesla was no longer following a lead vehicle.</i><p><i>• At 3 seconds prior to the crash and up to the time of impact with the crash attenuator, the Tesla’s speed increased from 62 to 70.8 mph, with no precrash braking or evasive steering movement detected.</i><p>This is the Tesla self-crashing car in action. Remember how it works. It visually recognizes rear ends of cars using a BW camera and Mobileye (at least in early models) vision software. It also recognizes lane lines and tries to center between them. It has a low resolution radar system which ranges moving metallic objects like cars but ignores stationary obstacles. And there are some side-mounted sonars for detecting vehicles a few meters away on the side, which are not relevant here.<p>The system performed as designed. The white lines of the gore (the painted wedge) leading to this very shallow off ramp become far enough apart that they look like a lane.[1] If the vehicle ever got into the gore area, it would track as if in a lane, right into the crash barrier. It won't stop for the crash barrier, because <i>it doesn't detect stationary obstacles.</i> Here, it sped up, because there was no longer a car ahead. Then it lane-followed right into the crash barrier.<p>That's the fundamental problem here. These vehicles will run into stationary obstacles at full speed with no warning or emergency braking at all. <i>That is by design.</i> This is not an implementation bug or sensor failure. It follows directly from the decision to ship "Autopilot" with that sensor suite and set of capabilities.<p>This behavior is alien to human expectations. Humans intuitively expect an anti-collision system to avoid collisions with obstacles. This system does not do that. It only avoids rear-end collisions with other cars. The normal vehicle behavior of slowing down when it approaches the rear of another car trains users to expect that it will do that consistently. But it doesn't really work that way. Cars are special to the vision system.<p>How did the vehicle get into the gore area? We can only speculate at this point. The paint on the right edge of the gore marking, as seen in Google Maps, is worn near the point of the gore. That may have led the vehicle to track on the left edge of the gore marking, instead of the right. Then it would start centering normally on the wide gore area as if a lane. I expect that the NTSB will have more to say about that later. They may re-drive that area in another similarly equipped Tesla, or run tests on a track.<p>[1] <a href="https://goo.gl/maps/bWs6DGsoFmD2" rel="nofollow">https://goo.gl/maps/bWs6DGsoFmD2</a>
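To make the "doesn't detect stationary obstacles" point concrete, here is a toy sketch of the kind of relative-velocity gating a cruise-control radar pipeline typically applies. This is purely illustrative; the threshold, field names, and numbers are my assumptions, not Tesla's or Mobileye's actual code.<p><pre><code>from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float         # distance to the target
    range_rate_mps: float  # closing speed; negative means the gap is shrinking

EGO_SPEED_MPS = 29.0    # ~65 mph
CLUTTER_BAND_MPS = 2.0  # targets barely moving over the ground are dropped

def is_tracked(target, ego_speed=EGO_SPEED_MPS):
    # A car pacing us has range_rate ~ 0, so its ground speed ~ ego speed
    # and it is kept. A crash barrier has range_rate ~ -ego_speed, so its
    # ground speed ~ 0 and it is discarded along with overhead signs,
    # guard rails, and other roadside clutter.
    ground_speed = target.range_rate_mps + ego_speed
    return abs(ground_speed) > CLUTTER_BAND_MPS

lead_car = RadarReturn(range_m=40.0, range_rate_mps=0.0)    # kept: car will brake for it
barrier = RadarReturn(range_m=100.0, range_rate_mps=-29.0)  # dropped: no braking at all
print(is_tracked(lead_car), is_tracked(barrier))            # True False
</code></pre>
Once the barrier is filtered out as clutter, the only remaining guidance is lane centering on the painted lines, which is exactly the behavior described above.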
“[Driver’s] hands were not detected on the steering wheel for the final six seconds prior to the crash. Tesla has said that Huang received warnings to put his hands on the wheel, but according to the NTSB, these warnings came more than 15 minutes before the crash.”<p>This kind of stuff is why I’ve lost all faith in Tesla’s public statements. What they said here was, for all intents and purposes, a flat-out lie.<p>Clearly something went wrong here, but they leapt to blaming everyone else instead of working to find the flaw.
<i>> During the 18-minute 55-second segment, the vehicle provided two visual alerts and one auditory alert for the driver to place his hands on the steering wheel. These alerts were made more than 15 minutes prior to the crash.</i><p>Whoah. So there were NO alerts for 15 minutes prior to the crash. Compare this with Tesla's earlier statement:<p><i>> The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision.</i>[1]<p>This gives a very different impression. They omitted the fact that there were no warnings for 15 minutes. Frankly that appears to be an intentionally misleading omission.<p>So basically the driver was distracted for 6 seconds while believing that the car was auto-following the car in front of it.<p>[1] <a href="https://www.tesla.com/blog/update-last-week’s-accident" rel="nofollow">https://www.tesla.com/blog/update-last-week’s-accident</a>
Reading that initial report is terrifying. I am so glad the NTSB set the record straight that the driver had his hands on the wheel for the majority of the final minute of travel. Really makes me feel like Tesla was out to blame the driver from the get-go. To be clear, the driver is absolutely partially at fault, but my goodness, autopilot sped up into the barrier in the final seconds — totally unexpected when the car has automatic emergency braking.<p>Emergency braking feels not ready for prime time. I hope there are improvements there. Don’t want to see autopilot disabled as a result of this; would rather Tesla use this to double down and apply new learnings.<p>Just so sad to hear about this guy’s death on his way to work - not the way I want to go. :(
> <i>His hands were not detected on the steering wheel for the final six seconds prior to the crash.</i><p>> <i>Tesla has said that Huang received warnings to put his hands on the wheel, but according to the NTSB, these warnings came more than 15 minutes before the crash.</i><p>> <i>Tesla has emphasized that a damaged crash attenuator had contributed to the severity of the crash.</i><p>These may or may not have been factors contributing to the death of the driver, and ultimately may or may not absolve Tesla of legal liability.<p>However, the key point here is that without question, <i>the autopilot failed</i>.<p>It is understandable why Tesla is focusing on the liability issue. This is something that <i>they can dispute</i>. The fact that the autopilot failed is <i>indisputable</i>, and it is unsurprising that Tesla is trying to steer the conversation away from that.<p>The discussion shouldn't be <i>either</i> the driver is at fault <i>or</i> Tesla screwed up, but two separate discussions: whether the driver is at fault, <i>and</i> how Tesla screwed up.
The report itself is worth reading:<p><a href="https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18FH011-preliminary.pdf" rel="nofollow">https://www.ntsb.gov/investigations/AccidentReports/Reports/...</a>
Despite the autopilot failure, I find the battery failure quite remarkable too:<p>> The car was towed to an impound lot, but the vehicle's batteries weren't finished burning. A few hours after the crash, "the Tesla battery emanated smoke and audible venting." Five days later, the smoldering battery reignited, requiring another visit from the fire department.<p>Where is your LiPo god now? Batteries have more energy density than 20 years ago, ok. But they are also much more dangerous. Now imagine the same situation with Tesla's huge semi batteries. They'll have to bury them 6ft under, like Chernobyl's smoldering fuel rods. Minus the radiation.
Dear Elon, want to start a website that rates how fake-newsy government-produced accident reports are? /S<p>"FDA said my farm is producing salmonella-infected chicken. Downvote their report on this URL!"
I am generally against what is often called "excessive regulation," but the regulator -- perhaps the FTC -- should aggressively prohibit the misleading marketing message here.<p>The entire problem stems from calling this lane-keeping mechanism "Autopilot." Tesla should be prohibited from using that language until they have achieved provably safer Level 3+ self-driving.<p>The problem is exacerbated by Musk's aggressive marketing-driven language. Saying things like <i>we're two years out from full self-driving</i> (first said in 2015) and <i>the driver was warned to put his hands on the steering wheel</i> (15 minutes prior to the crash) makes Musk look like he is plainly the bad guy and attempting to be misleading.<p>"Provably safe" probably means some sort of acceptance testing -- a blend of an NTSB-operated obstacle course (with regression tests and the like) and real-world exposure.
Tesla Autopilot makes it to HN pretty much every week now, almost never in a good way.<p>Every time, we have a big discussion about autopilot safety, AI ethics, etc.<p>What about <i>lack of focus</i>?<p>Tesla has already reinvented the car in a big way--all-electric, long range, fast charge, with a huge network of "superchargers". It's taken EV from a niche environmentalist pursuit to something widely seen as the future of automotive.<p>Why are they trying to tackle self-driving cars at the same time?<p>This feels like a classic mistake and case of scope creep.<p>Becoming the Toyota of electric is a vast engineering challenge. Level 5 autonomous driving is an equally vast engineering challenge. Both represent once-in-a-generation technological leaps. Trying to tackle both at the same time feels like hubris.<p>If they just made great human-piloted electric cars and focused on cost, production efficiency, volume, and quality, I think they'd be in a better place as a business. Autopilot seems like an expensive distraction.
Tesla has to realize these "shame the dead dude" posts are PR nightmares, right?<p>They are reason alone for me to never consider one, that a private moment for my family might end up a pawn in some "convince the public we're safe using any weasel stretch of the facts we can" effort.<p>If this is disruption, I'll wait for the old guard to catch up, lest I be disrupted into a concrete barrier and my grieving widow fed misleading facts about how it happened.
After this incident and Tesla's response to it, I hope Tesla is sued and/or fined into bankruptcy. Tesla is normalizing releasing not-fully-tested software to do safety-critical things, and literally killing people as a result. A message needs to be sent that this is unacceptable. In addition, their first response was a PR-driven response that sought to blame the driver, and it violated NTSB procedures. Safety is probably the most important thing to get right with these types of software, and Tesla is nonchalantly sacrificing safety for marketing.
Tesla Autopilot should be recalled via the next OTA update.<p>The “Autopilot” branding implies that users need not pay attention, when in reality, the system needs interventions at infrequent but hard-to-predict times. If an engineer at Apple can’t figure it out, then the average person has no chance. Their software sets users up to fail. (Where failure means permanent disability or death.)<p>Inevitably, Musk fans will claim that recalling Autopilot actually makes Tesla drivers less safe. But here's the problem with Musk’s framing of Autopilot.<p>Sure, maybe it fails less often than humans. (We don't know whether we can trust his numbers.) But we do know that when it fails, it fails in different ways — Autopilot crashes are noteworthy because they happen in situations where human drivers would have no problem. That’s what people can’t get over. And it is why Autopilot is such a dangerous feature.<p>An automaker with more humility would’ve disabled this feature years ago. (Even Uber suspended testing after the Arizona crash!) With Musk, my fear is that more people will have to die before there is enough pressure from regulators / the public to pull the plug.
So people are asking why the barrier wasn’t detected, and that’s fair.<p>Here’s another question: why wasn’t the ‘gore’ zone detected?<p>Why did the car think it was safe to drive over an area with striped white lines covering the pavement?<p>It saw the white line on the <i>side</i> of that area and decided that was a lane marker, but ignored the striped area you’re not supposed to drive on?<p>If you’re reading the lines on the pavement, you have to try to look at all of them.<p>I don’t know if other cars, like those with Mobileye systems, do that, but given Tesla’s safety claims they’d better be trying.
Here's the most interesting quote to me:<p>"<i>The crash created a big battery fire that destroyed the front of Huang's vehicle. "The Mountain View Fire Department applied approximately 200 gallons of water and foam" over a 10-minute period to put out the fire, the NTSB reported.</i><p>"<i>The car was towed to an impound lot, but the vehicle's batteries weren't finished burning. A few hours after the crash, "the Tesla battery emanated smoke and audible venting." Five days later, the smoldering battery reignited, requiring another visit from the fire department.</i>"<p>Shouldn't it be possible to make the battery safe?
This just reconfirms my belief about Tesla's "autopilot" --- most of the time it behaves like an OK driver, but occasionally makes a fatal mistake if you don't pay attention and correct it. In other words, you have to be <i>more</i> attentive to drive safely with it than without, since a normal car (with suspension and tires in good condition, on a flat road surface) will not decide to change direction unless explicitly directed to --- it will continue in a straight line even if you take your hands off the wheel.<p>Given that, the value of autopilot seems dubious...
This guy tested it at the EXACT same location with tesla autopilot. The Tesla starts steering directly into the barrier before he corrects it.<p><a href="https://www.youtube.com/watch?v=VVJSjeHDvfY" rel="nofollow">https://www.youtube.com/watch?v=VVJSjeHDvfY</a>
Disclaimer: Taboo comment ahead.<p>Subtle bugs in self driving cars would be a simple way to assassinate people with low cost overhead. One OTA update to a target and you could probably even get video footage of the job being completed, sent to the client all in one API call.<p>Surely by now someone must have completed a cost analysis of traditional contractors vs. having a plant at a car manufacturer.<p>Am I the only one thinking about this?
Self-driving systems can't reason well about untrained scenarios or the intent of other humans on the road. I think people have grossly underestimated how much driving in an uncontrolled environment is really a general AI problem, which we're not even close to solving.
<i>Involuntary manslaughter usually refers to an unintentional killing that results from recklessness or criminal negligence, or from an unlawful act that is a misdemeanor or low-level felony (such as a DUI).</i> (Wikipedia)<p>It's rather uncontroversial that this kind of accident falls under civil law: there is some degree of liability involved in marketing a product as safer than a human driver when it then fails in an instance where a human driver flat out would not fail, apples to apples. If the human driver is paying attention, which the autonomous system is always doing, they'd never make this mistake; it could only be intentional.<p>But more controversial, and therefore more interesting to me, is to what degree the system is acting criminally, even if the killing is unintended, let alone if it is intended. Now imagine the insurance implications of such a finding of unintended killing. And even worse, imagine the total lack of even trying to make this argument.<p>I think a prosecutor must criminally prosecute Tesla. If not for this incident, then for one in the near future. It's an area of law that needs to be aggressively pursued, and voters need to be extremely mindful of treating AI of any kind with kid gloves compared to how we've treated humans in the same circumstances.
Wow. I will say that, when you look straight-on in Street View, it does look disturbingly like a valid lane to drive in -- same width, same markings at one point [1]:<p><a href="https://www.google.com/maps/@37.4106804,-122.075111,3a,75y,117.92h,81.35t/data=!3m6!1e1!3m4!1snAoBJlvBLm0NQWYBWKxWGw!2e0!7i16384!8i8192" rel="nofollow">https://www.google.com/maps/@37.4106804,-122.075111,3a,75y,1...</a><p>If it were night, with a car in front blocking the view of the concrete lane divider, it doesn't seem too difficult for a human to change lanes at the last second and collide as well. (And indeed, there was a collision the previous week.)<p>There's no excuse for not having an emergency collision detection system... but it also reminds me how dangerous driving can be, period, and how we need to hold autonomous cars to a higher standard.<p>[1] Thanks to comments by Animats and raldi for the location from other angles
<i>> The NTSB report confirms that. The crash attenuator—an accordion-like barrier that's supposed to cushion a vehicle when it crashes into the lane separator—had been damaged the previous week when a Toyota Prius crashed at the same location. The resulting damage made the attenuator ineffective and likely contributed to Huang's death.</i><p>
kinda sounds like maybe that part of the road isn't well designed or marked too.
> During the 18-minute 55-second segment, the vehicle provided two visual alerts and one auditory alert for the driver to place his hands on the steering wheel. These alerts were made more than 15 minutes prior to the crash.<p>If your hands are always supposed to be on the wheel, why does the car not constantly alert you when it detects that your hands are off (similar to how cars beep at you if your seatbelt is unbuckled while driving)?
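For comparison, a seatbelt-chime-style policy is simple to state. Here is a minimal sketch of one; the timings, callback names, and thresholds are hypothetical, not anything Tesla actually implements.<p><pre><code>import time

GRACE_PERIOD_S = 5     # tolerate brief hands-off moments without nagging
REALERT_EVERY_S = 10   # then keep alerting until hands return

def hands_off_monitor(autopilot_engaged, hands_on_wheel, alert_driver):
    # All three arguments are hypothetical callbacks supplied by the vehicle.
    hands_off_since = None
    last_alert = 0.0
    while autopilot_engaged():
        now = time.monotonic()
        if hands_on_wheel():
            hands_off_since = None                 # reset as soon as hands are detected
        else:
            hands_off_since = hands_off_since or now
            if (now - hands_off_since > GRACE_PERIOD_S
                    and now - last_alert > REALERT_EVERY_S):
                alert_driver()                     # nag repeatedly, like a seatbelt chime,
                last_alert = now                   # rather than warning once per drive
        time.sleep(0.1)
</code></pre>
The policy choice is the whole question: a one-time warning 15 minutes earlier is a very different design from a chime that never stops while hands are off.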
I think one of my main concerns with "autopilot" is that for <i>a lot</i> of drivers, it will absolutely make the roads safer for them and those that use the roads around them. Conversely, for some safer and more-alert drivers, it has the potential to make driving less safe.
Here's a relevant video that shows autopilot directing a Tesla into a lane split.<p><a href="https://www.youtube.com/watch?v=6QCF8tVqM3I" rel="nofollow">https://www.youtube.com/watch?v=6QCF8tVqM3I</a>
If I were building this, I would upload millions of hours of data from actual Tesla drivers, and I would have each autopilot release step through that data and flag variances from the behavior of the actual drivers. I'd run this in a massively parallel fashion.<p>For every release, I'd expect the score to improve. With a system like this, I would think you'd detect the "drive towards traffic barrier" behavior.
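Sketching that idea in code (the log format, thresholds, and the candidate_policy interface are all hypothetical assumptions of mine, not anything Tesla actually ships):<p><pre><code>from concurrent.futures import ProcessPoolExecutor

STEER_TOL_DEG = 5.0   # divergence thresholds are illustrative
SPEED_TOL_MPS = 3.0

def score_drive(drive_log, candidate_policy):
    # Fraction of frames where the candidate release disagrees with the human.
    flagged = 0
    for frame in drive_log:                       # frame: sensor data + what the human did
        steer, speed = candidate_policy(frame["sensors"])
        if (abs(steer - frame["human_steer_deg"]) > STEER_TOL_DEG or
                abs(speed - frame["human_speed_mps"]) > SPEED_TOL_MPS):
            flagged += 1                          # e.g. "accelerate toward the gore point"
    return flagged / max(len(drive_log), 1)

def score_release(drive_logs, candidate_policy, workers=32):
    # Massively parallel pass over the fleet's logs; lower score is better,
    # and every release should be gated on not regressing it.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        scores = pool.map(score_drive, drive_logs,
                          [candidate_policy] * len(drive_logs))
    return sum(scores) / len(drive_logs)
</code></pre>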
I was listening to a Software Engineering Daily podcast with Lex Fridman about self-driving deep learning.
Very interesting topic on the ethics of self-driving cars. What he was saying is that we need to accept the fact that people are going to die in incidents involving autonomous vehicles. In order for these systems to learn how to drive, people will have to die. It's more of a societal change that is needed: 30,000 people die on US roads every year, and in order to decrease that number we need self-driving cars, even at a price that society as of now can't accept.
Short version: due to poor lane markings, Autopilot made the same mistake as many humans in the same situation and collided with the divider. Due to the frequency of this kind of accident, the crash attenuator had been collapsed and not reset, meaning the Tesla hit the concrete divider at full speed, as has happened in the past with humans in control.<p>But please continue to blame Autopilot for not being smarter than the human operating the vehicle.