
Fatalities vs. False Positives: The Lessons from the Tesla and Uber Crashes

116 points by szczys almost 7 years ago

18 comments

evrydayhustling almost 7 years ago
Great points regarding aligning precision/recall of AI systems with actual human supervision capabilities. One quibble: I hate seeing articles uncritically repeat Uber's line that a human driver could not have reacted in the same scenario. The dashboard cam footage they released does not accurately represent either human or car perceptual capabilities [1], and folks recreating the scene have shown that it doesn't even look like a good (or unaltered) example of what a dashcam should have perceived [2].

[1] https://ideas.4brad.com/it-certainly-looks-bad-uber
[2] https://dgit.com/residents-recreate-crash-uber-car-case-56161/
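The precision/recall framing above can be made concrete with a toy confusion matrix. All counts below are invented purely for illustration:

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from confusion-matrix counts."""
    precision = tp / (tp + fp)  # of all alarms raised, how many were real hazards
    recall = tp / (tp + fn)     # of all real hazards, how many were caught
    return precision, recall

# A permissive detector: almost no missed hazards, but many false alarms.
print(precision_recall(tp=95, fp=400, fn=5))   # low precision, high recall

# A strict detector: few false alarms, but more missed hazards.
print(precision_recall(tp=70, fp=10, fn=30))   # high precision, lower recall
```

Tuning toward recall floods the safety driver with alarms; tuning toward precision shifts missed hazards onto them. That mismatch between the operating point and the human's actual supervision capacity is what the comment points at.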
ska almost 7 years ago
This set of trade-offs in self-driving cars is exactly the same one that CAD (computer-aided detection/diagnosis) systems have been making in medicine for decades. In many cases, both type I and type II errors will (with statistical certainty) kill people over enough iterations.

It's not an easy problem, but the best you can do is demonstrate that the system improves on the standard of care. In other words, overall the trade-off has better outcomes than not using the system. The same will be true for self-driving cars if/when they reach mainstream use. The important thing is that average performance improves significantly.

It's worth noting that, as with the CAD systems, focusing too much on fixing an individual error can degrade the overall system. As these things get better and better, you'll have to be much more careful about applying non-obvious fixes, or risk a much worse outcome than doing nothing at all.
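The "improves on the standard of care" criterion above can be sketched as an expected-harm comparison. All the rates and harm weights below are hypothetical, chosen only to show the shape of the calculation:

```python
def expected_harm(fn_rate, fp_rate, harm_fn, harm_fp, n_cases):
    """Expected harm over n_cases, given type II (fn) and type I (fp) rates."""
    return n_cases * (fn_rate * harm_fn + fp_rate * harm_fp)

# Hypothetical weights: a missed detection (type II) is 100x more harmful
# than a false alarm (type I).
baseline = expected_harm(fn_rate=0.02, fp_rate=0.01,
                         harm_fn=100, harm_fp=1, n_cases=1_000_000)
system = expected_harm(fn_rate=0.005, fp_rate=0.05,
                       harm_fn=100, harm_fp=1, n_cases=1_000_000)

# The system accepts more false positives to cut false negatives; whether
# that counts as an improvement depends entirely on the relative harms.
print(system < baseline)
```

Note that "fixing" one publicized false negative by loosening the detector shifts both rates, which is why a patch that looks locally sensible can make the aggregate worse.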
niftich almost 7 years ago
We're asking a lot from this kind of software (for good reasons), but humans commit hundreds of leaps of faith of varying severity on the roads daily -- failure to yield, failure to maintain a safe following distance, assuming other drivers immediately adjacent to you, even in lanes significantly slower than yours, will keep driving safely and carefully. Urban, peak-hour traffic on most US freeways is an exercise in collective insanity: riding people's bumpers at 55+ mph (and often significantly higher), leaving little room to stop for incidents [1][2][3][4] or debris.

But only a small subset of these situations result in significant accidents, because unimpaired humans largely have some intuition for self-preservation. On the other hand, we're expecting an algorithm coded by humans to perform better than a complicated bioelectric system we barely understand.

Waymo's self-driving program has opted to thoroughly understand its environment, which is why their cars drive in a manner that bears no resemblance to how humans actually drive. We as a society will eventually have to reconcile the implications of that disconnect.

[1] Unsafe lane change in traffic with different lane speeds: https://gfycat.com/CleanGleefulArawana
[2] Tailgating causes crash, swerve, multi-vehicle accident: https://www.youtube.com/watch?v=j0rj2sZ1KA4
[3] Inattention to incident causes further accidents: https://www.youtube.com/watch?v=hZL6OKwQGew
[4] Inattention in slowing traffic causes accident: https://www.youtube.com/watch?v=Ff7wbSwTuEk
Semirhage almost 7 years ago
"If a company is playing fast and loose with the false negatives rate, drivers and pedestrians will die needlessly, but if they are too strict the car will be undriveable and erratic. Both Tesla and Uber, when faced with this difficult tradeoff, punted: they require a person to watch out for the false negatives, taking the burden off of the machine."

Companies should not be allowed to punt this way when lives are on the line. This really is a problem in need of a regulatory or legislative solution, as multiple companies have proven they don't have anyone's interests at heart but their own. Worse, every pedestrian, cyclist, and other driver on the road who didn't sign up for this in-the-wild alpha is being drafted to enrich the likes of Tesla and Uber.
simion314 almost 7 years ago
I can still see people commenting that the Tesla driver did not have his hands on the wheel and had been warned, when in fact that warning happened 15 minutes before the accident. So the Tesla PR blog post blaming the driver got results, proving again how hard it is to undo a lie/almost-lie.
Fricken almost 7 years ago
A video was released today showing what the classifier in Tesla's Autopilot sees, provided by an individual who hacked it. False positives galore:

https://youtu.be/Yyak-U2vPxM
hartator almost 7 years ago
> In contrast to the Tesla accident, where the human driver could have saved himself from the car's blindness, the Uber car probably could have braked in time to prevent the accident entirely where a human driver couldn't have.

That's a strong statement. Watching the actual video [1] of the Uber accident, it seems either an attentive driver (she seems not to be looking at the road before the crash) or a better decision algorithm could have prevented the crash. Keep in mind, too, that video is very bad at capturing low-light scenes, and the scene must have been significantly clearer to both the driver and the computer than it appears in the footage.

[1] https://www.youtube.com/watch?v=pO9iRUx5wmM
rhacker almost 7 years ago
Turning one system off in favor of another is basically admitting that they have no faith in their software. Also, isn't driving a car with sensors (without self-driving tech) basically a giant data collector? The mere act of driving a car (with a human brain) should help an SDV system understand a false positive and a false negative simply by learning what a human does in contrast to what the machine thinks it is supposed to do.

I imagine this is what separates Waymo from the others. I feel like Waymo is using math/neural nets, etc., whereas Tesla and Uber were probably giant hand-written if/else machines. I have nothing to back that up, except that that's *kinda* what this article is about.
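The idea above, mining human driving as implicit labels for the perception system, could look something like this sketch. The event format and function names are invented for illustration:

```python
def label_disagreements(events):
    """Tag (machine_would_brake, human_braked) event pairs as candidate
    false positives, candidate false negatives, or agreements."""
    labels = []
    for machine_would_brake, human_braked in events:
        if machine_would_brake and not human_braked:
            labels.append("candidate false positive")   # machine over-reacted
        elif human_braked and not machine_would_brake:
            labels.append("candidate false negative")   # machine missed something
        else:
            labels.append("agreement")
    return labels

print(label_disagreements([(True, False), (False, True), (True, True)]))
# ['candidate false positive', 'candidate false negative', 'agreement']
```

These are only candidate labels, since the human is an imperfect oracle: a driver who fails to brake for a real hazard would register as a machine false positive here.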
ithilglin909 almost 7 years ago
It seems like self-driving cars may not (and maybe should not) become commonplace until the roads are built for self-driving cars – that might be a better thing to focus on, instead of systems that mimic human judgment.
bsaul almost 7 years ago
I think this is one of the first times in my life I've seen marketing (calling something "autopilot" when it is not) and overhype (we have to be the first to have self-driving cars, now!) actually kill people, and not just rip people or investors off.

I think history will judge those companies very severely, and I'm wondering if the justice system isn't going to be just as severe right now.
rasz almost 7 years ago
This certainly explains reports of almost every Google car accident being a case of a human rear-ending it. Google did it right from the start and didn't gamble with false positives.
Shivetya almost 7 years ago
I will hold to my opinion stated before: these systems are not safe enough to be on public roads yet, let alone in the hands of consumers.

That said, I still see no reason why we don't adapt limited-access roadways to support self-driving in that realm. A large number of cities have dedicated HOV and (express) toll lanes that could all be adapted and marked much more easily to support self-driving capability in a controlled environment. Plus, there's probably a good chance that both car manufacturers and AD-system makers, along with drivers themselves, would be willing to pay more for such access. I recall the imagery from the old William Shatner series TekWar, which showed a similar approach: get on the highway and the car takes you into the city. It could even be extended to serve event centers, so that it continues driving for you until parked; think off-expressway to airports and stadiums.
madrox almost 7 years ago
I wrote about this recently as well. This is actually going to be more of a concern as we incorporate models into everyday products. Stakeholders and QA need a solid understanding of Type I and Type II error so they can assess how much risk they're willing to take on and make that part of the quality process.

Shameless plug: https://blog.d8a.me/the-qa-of-stochastic-processes-a15a9406519c
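Folding agreed error budgets into the quality process, as described above, could be as simple as a release gate on both error rates. This is a minimal sketch; the thresholds are invented:

```python
def qa_gate(false_positive_rate, false_negative_rate,
            max_fpr=0.05, max_fnr=0.01):
    """Pass the release only if both error rates fit the agreed risk budget.
    The budget itself (max_fpr, max_fnr) is the stakeholder decision."""
    return false_positive_rate <= max_fpr and false_negative_rate <= max_fnr

print(qa_gate(0.03, 0.005))  # True: within both budgets
print(qa_gate(0.03, 0.02))   # False: too many misses (Type II over budget)
```

The point is that the two thresholds encode an explicit risk trade-off that stakeholders sign off on, rather than a single accuracy number that hides which kind of error the model makes.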
fcolas almost 7 years ago
Nice article; thanks.

* * *

Tesla claims a 40% crash-rate reduction with Autopilot as compared to no Autopilot, over an 18-month period [1].

If this is true -- and we could imagine it is (at least partially) -- then Elon's remark to journalists would make sense:

"It's [..] irresponsible [..] to write an article that [..] lead[s] people to believe that autonomy is less safe," [..] "Because people might actually turn it off, and then die" [1]

* * *

But to have an opinion about the Autopilot's risk statistics I would also need to know: a) what populations (data) they compare; b) how each population is defined (inclusion and exclusion criteria); c) what the sample size is (18 months, and?); d) who makes these calculations (to clearly identify possible conflicts of interest).

Not sure if this type of data is publicly available?

* * *

Actually, the National Highway Traffic Safety Administration (NHTSA) seems to indicate [1]: 1) that the data comes from Tesla -- cf. point d) => conflict of interest; 2) that Autopilot on/off was NOT used for the risk statistics, although it's central (point a); 3) that instead the "40%" measures the "number of airbag deployments per million miles", a proxy metric that is not directly related to car accidents.

Hey, this is odd (it's definitely not a Science or Nature method protocol).

* * *

"The Insurance Institute for Highway Safety suggests a '13%' reduction in collision claim frequency, indicating sedans with Autopilot enabled got into fewer crashes that resulted in collision claims to insurers."

However, it's a small difference, and there are possible confounders like social status (a "Tesla driver"), gender, and geographical area. Confounders usually have a large influence on experiments, so it's unlikely that this (small) 13% difference would remain after adjusting for them...

* * *

Here's the article by Aarian Marshall in Wired: [1] https://www.wired.com/story/tesla-autopilot-safety-statistics/
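The proxy metric flagged above, airbag deployments per million miles, is simple arithmetic. The counts and mileage below are made up purely to show how a "40%" figure can arise without any Autopilot on/off comparison:

```python
def deployments_per_million_miles(deployments, miles):
    """Airbag deployments normalized per million miles driven."""
    return deployments / (miles / 1_000_000)

# Hypothetical counts over two equal-mileage periods.
rate_before = deployments_per_million_miles(deployments=400, miles=300_000_000)
rate_after = deployments_per_million_miles(deployments=240, miles=300_000_000)

relative_reduction = 1 - rate_after / rate_before
print(round(relative_reduction, 2))  # 0.4, i.e. a "40%" reduction in the proxy
```

Nothing in this arithmetic says what changed between the two periods: fleet composition, road mix, or driver population could each move the proxy without Autopilot being the cause, which is exactly points a) through d) above.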
LinuxBender almost 7 years ago
Could any of the fatalities be related to the malicious code commits? [1]

[1] https://www.fastcompany.com/40586864/read-elon-musks-email-alleging-there-is-a-saboteur-at-tesla
dsfyu404ed almost 7 years ago
> It may seem cold to couch such life-and-death decisions in terms of pure statistics, but the fact is that there is an unavoidable design tradeoff between...

The fact that we as a society can't have an adult discussion about that topic is a large part of why it's so hard to strike the right balance.
tzakrajs almost 7 years ago
Good thing L5 self-driving can be done without low-latency LIDAR, and that Tesla didn't defraud many Model S owners by selling them cars fundamentally incapable of providing L5 self-driving.

/s
sgslo almost 7 years ago
The upside of self-driving vehicles is so immense that I can't help but find in favor of giving leeway to companies developing this tech. I would vote for legislation limiting the accident liability of 'qualified' companies developing self-driving tech. No, I don't know what separates 'qualified' from 'not qualified'.

Try looking at this a different way: there were 37,461 vehicle-related fatalities in 2016 in the US alone. In the current risk-averse climate, would we, unlike in the early 1900s, ever have allowed the development of public infrastructure to support an invention leading to so many fatalities? Likely not, but the benefit we enjoy from motorized transportation far outweighs the cost of those 37k yearly lives. The point is that continuing with a risk-averse, liability-obsessed development culture will stifle invention that could otherwise lead to great quality-of-life improvements.