None of the top comments make any sense to me. I simply don't understand them. Anyway, here is my understanding of the subject:

(1) Yes, Uber should have trained the driver better to look at the road. By "trained," I mean there should have been a sticky note on the wheel saying "PAY ATTENTION OR YOU WILL KILL SOMEONE."

(2) Yes, the driver absolutely should have to pay some penalty for this, assuming Uber told them to pay attention (which it almost certainly did). Watching a video while in a self-driving car is IDENTICAL to watching a video in a normal car. Modern self-driving cars are NOT fully autonomous, and they SHOULD be treated as IDENTICAL to cruise control for all legal considerations and thought experiments. Most of the top comments, which are blindly attacking self-driving cars, never make this analysis.

However, (1) and (2) do not excuse the lack of logic in most of the top comments here. They ignore the following points:

(a) There is no logical difference between a person accidentally killing someone and a self-driving car accidentally killing someone. In fact, because the car is already known not to be fully autonomous, this already is a case of a person accidentally killing someone. But even if the car were fully autonomous, we MUST consider the ODDS of an accident, and none of the other comments do. There is always some probability of an accident, so a single accident, on its own, tells us absolutely nothing. This post doesn't even mean anything by itself. What SHOULD be posted is "self-driving cars with humans at the wheel kill X people per road-hour; human-only cars kill Y people per road-hour." If X > Y, then yes, we have a fking problem. Without that information we have literally nothing to think about or process (see the sketch at the end of this comment).

(b) Disabling the car's safety feature is not a fk'ing concern, at all. Almost no cars have these features - only expensive ones do. Turning an expensive car into a normal car is NOT something you can be sued for, or even something anyone should care about. Who is to blame here simply does not depend on that fact, and I don't understand why people keep discussing it.

(c) Do you see the image? Maybe it doesn't show everything, but as far as I can see there is no light there. Why is she crossing the road where there's no light, without waiting for the cars? As a city-dweller who is constantly in the middle of streets, crossing where there are no stop lights, I just can't fathom ever being in her position. Maybe my city has less considerate drivers than hers, but if I crossed roads without a red light blocking cars, and didn't consciously give right of way to the traffic, I would die sometime this week.

(d) Many people quote "self-driving system classified the pedestrian as an unknown object, then as a vehicle, and then as a bicycle" as if it meant "this self-driving system is complete sh*t and it's all Uber's fault." That makes no damn sense. As a side note, if you've ever worked with a neural net, especially on video rather than still photos, you already know that sentence means nothing. That's just how these systems work: there will always be frames, milliseconds apart, where an object's classification gets reassigned (a toy sketch of this is below). But anyway, this is not even relevant. The self-driving part could have been completely off, or disabled.
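To make (d) concrete, here is a toy sketch of why raw per-frame labels flicker, and why a tracker normally smooths them before anything downstream acts on a label. Every label, timestamp, and window size here is made up for illustration; this is not Uber's pipeline:

    # Hypothetical per-frame classifier output for one tracked object.
    from collections import Counter, deque

    frames = [
        (0.00, "unknown"), (0.03, "unknown"), (0.06, "vehicle"),
        (0.09, "vehicle"), (0.12, "bicycle"), (0.15, "bicycle"),
        (0.18, "bicycle"), (0.21, "vehicle"), (0.24, "bicycle"),
    ]

    window = deque(maxlen=5)  # majority vote over the last 5 frames
    for t, raw in frames:
        window.append(raw)
        smoothed = Counter(window).most_common(1)[0][0]
        print(f"t={t:.2f}s  raw={raw:<8}  smoothed={smoothed}")

The raw column flips between classes frame to frame, exactly like the quoted sentence describes, while the smoothed track label stays far more stable. Quoting the raw flicker as evidence of a broken system misunderstands what that output even is.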
Regardless: the car is SUPPOSED to be under the driver's control, and any deviation from that is the fault of either the driver or of Uber's training of the driver. Whether it's the former or the latter is exactly where and how any lawsuit should be directed and handled. Nothing else matters, despite most comments putting heavy emphasis on other aspects of the situation.
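Since (a) is the crux of my argument, here is the entire comparison I'm asking for, as a few lines of Python. Every number below is invented purely to show the shape of the calculation; real exposure data (deaths and road-hours for each fleet) is exactly what's missing from this discussion:

    # Toy fatality-rate comparison; all figures are hypothetical.
    av_deaths, av_road_hours = 1, 1_500_000        # made-up AV fleet totals
    human_deaths, human_road_hours = 40_000, 8e10  # made-up human totals

    x = av_deaths / av_road_hours        # AV deaths per road-hour
    y = human_deaths / human_road_hours  # human deaths per road-hour

    print(f"AV:    {x:.2e} deaths per road-hour")
    print(f"Human: {y:.2e} deaths per road-hour")
    print("X > Y: problem" if x > y else "X <= Y: no problem shown")

Until someone fills in real values for X and Y, a single crash tells us nothing either way.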