Driver's fault 100%.

But Tesla is not blameless with their marketing: "Autopilot", "Full Self-Driving", blah blah, giving people a false sense of security. I can't think of a worse problem to try to solve with AI. GPT hallucinates and gives a wrong fact: no biggie, but annoying. Tesla FSD hallucinates and runs over a child: a biggie.
I'd rather the article were more precisely titled "manslaughter" rather than "homicide." While in legal jargon homicide encompasses manslaughter, murder, and a few other things, to many people homicide is synonymous with murder.
Elon's refusal to adopt lidar: statistically it's probably fine, but anecdotally it could turn out very badly for any one person, which is a hard thing to swallow if it's you...
Related video: "Tesla Autopilot Crashes into Motorcycle Riders - Why?"[0], summarized: the vision system used by Tesla seems to process motorcycles differently, and may be incorrectly "assuming" that the closer-spaced brake lights on a motorcycle are actually a far-away car.

More details on the homicide here[1], which show the crash happened during daylight hours and the bike resembles a sport bike. Those are different conditions from my referenced video (night collisions with cruiser-style motorcycles), but I suspect similar incorrect assumptions by Tesla Vision happened.

[0] https://www.youtube.com/watch?v=yRdzIs4FJJg

[1] https://www.king5.com/article/traffic/traffic-news/tesla-on-auto-pilot-fatal-crash-motorcycle-snohomish-county-driver-says/281-52ae57e6-4950-4d4d-bbe6-7b2f6dbba5d4
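To see why that kind of confusion is at least geometrically plausible, here's a rough back-of-envelope sketch. It assumes a simple pinhole-camera model and made-up taillight spacings and distances, and says nothing about how Tesla's actual perception stack works: a motorcycle's narrow taillights up close can subtend the same angle as a car's wide taillights much further away.

```python
# Back-of-envelope: under a pinhole camera, the angle subtended by two
# taillights is roughly (physical spacing) / (distance). A narrow pair of
# lights nearby can therefore look identical to a wide pair far away.

def angular_separation_rad(spacing_m: float, distance_m: float) -> float:
    """Small-angle approximation of the angle subtended by two lights."""
    return spacing_m / distance_m

# Hypothetical numbers, for illustration only.
motorcycle = angular_separation_rad(spacing_m=0.3, distance_m=30)   # bike 30 m ahead
car        = angular_separation_rad(spacing_m=1.5, distance_m=150)  # car 150 m ahead

print(f"motorcycle at 30 m: {motorcycle:.4f} rad")
print(f"car at 150 m:       {car:.4f} rad")
# Both come out to 0.0100 rad: a system keying on taillight spacing alone
# can't distinguish the two cases without other cues (size, texture, motion).
```

Lidar or radar would resolve this ambiguity directly by measuring range, which is part of why the vision-only debate keeps coming up in these motorcycle cases.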
Brand-new Model Y with the latest software did 4 really dangerous phantom-braking stunts. I engaged the system 5 times in total. It's called Enhanced Autopilot. I can't understand how people trust this kind of system with their lives. Maybe in the USA it works much better than elsewhere, but I will never turn it on again. For the record, I didn't buy it; I got a 3-month trial for using a referral link.
Independent of the incident itself, the article made it sound like the driver would have benefited from hiring a lawyer before making statements to police.
There have been various discussions over the years of adopting and modernizing the model of equine law, which dealt with injuries from horses and carriages, another type of autonomous / semi-autonomous vehicle.

In this case, the question is whether the people behind the vehicle share some of the blame.

An excerpt from: https://www.forbes.com/sites/rahulrazdan/2020/01/07/horses-equine-law-and-the-future-of-the-autonomous-vehicle-legal-framework/

>How does the legal system adapt to new technologies? Generally, this is done by constructing new legal theories that should not conflict with older models and also have characteristics of stability and rationality. What might be the potential legal theories for Autonomous Vehicles? Here are the current candidates:

>Negligence: Today, a typical example includes impaired driving. An impaired AV?
>Negligent Entrustment of Vehicles: Here the driver was negligent, but the owner is liable because they should not have trusted the driver. Can you be found negligent if you trust your Tesla AutoDrive?
>Res Ipsa Loquitur: In this theory, (“the thing that speaks for itself”) the accident would not have occurred if not for some action from the plaintiff. By applying this logic, the plaintiff caused the accident because they became startled by an AV homing features because it was surprising.
>Product Liability and Warranty: Are there implied warranties associated when you buy an AV? Can it be proven that some AV vendors are safer than others? If so, do all AV vendors have to come to some standard?

>At this point, it is not clear which theory may apply. However, we may gain insight from a very old body of law — Equine Law. Horses were the original autonomous vehicles and for many centuries, the court system had to deal with horse-related accidents.

And while probably not applicable to a guy texting, an earlier paper from 2012 explores an interesting aspect of horses in a frightened state, which is akin to the vehicle making its own decision in a crisis scenario:

"Of Frightened Horses and Autonomous Vehicles: Tort Law and its Assimilation of Innovations"

https://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=1170&context=facpubs
Is this worse than humans?

(edit) I see that the article included a claim that FSD is 5x safer than humans, which may be valid.

The article then said: "However, the only reason it is safer than the US average is that it is supervised by drivers who ideally pay extra attention when using FSD."

I am positive that they had zero data to back that assertion.