I'm fascinated by the accidents. The AV is stopped at a light. Someone rear-ends it. Minimal damage.

Similar accidents are probably happening every minute between human drivers and, as a rule, going unreported.

AVs might one day avoid even this kind of "victimization," if these events keep following a predictable pattern. An AV could exaggerate the gap ahead of it, leaving a precisely calibrated amount of extra space, and when it anticipates a rear-end collision, honk and flash its brake lights while scooting forward (rough sketch of that logic at the bottom of this comment).

Google is absolutely correct that its AVs are never at fault in any of these accidents, legally speaking. But does blame change if there are ways the AI could prevent this series of similar accidents, and its makers choose not to?

The AV yields to drivers running a red light, even though getting t-boned wouldn't legally be the AV's fault. That seems wise to me. Is it inconsistent to expect the AV to avoid getting t-boned, but not to expect it to avoid getting rear-ended? I'm not sure...

Or, more broadly: how do you divide blame between two parties when one has superhuman faculties? Is the AI responsible for everything it could conceivably have been programmed to prevent? Or do you just hold it to a human standard?

As with all hard problems, neither extreme is very satisfying.
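Just to make the rear-end anticipation concrete, here's a minimal sketch of what that logic might look like. Every class, sensor value, threshold, and action name is invented for illustration; this isn't drawn from any real AV stack:

```python
# Hypothetical sketch of the "anticipate a rear-end hit" behavior described above.
# All sensor fields, thresholds, and action names are made up for illustration.

from dataclasses import dataclass


@dataclass
class RearTrack:
    distance_m: float         # gap to the vehicle approaching from behind
    closing_speed_mps: float  # positive when that vehicle is closing on us


def time_to_collision(track: RearTrack) -> float:
    """Seconds until contact if neither vehicle changes speed."""
    if track.closing_speed_mps <= 0:
        return float("inf")
    return track.distance_m / track.closing_speed_mps


def mitigate_rear_end(track: RearTrack, front_clearance_m: float,
                      ttc_threshold_s: float = 2.0) -> list[str]:
    """Actions a stopped AV might take when a rear-end collision looks imminent."""
    actions = []
    if time_to_collision(track) < ttc_threshold_s:
        actions += ["honk", "flash_brake_lights"]
        # Only scoot forward if the exaggerated gap ahead actually exists.
        if front_clearance_m > 1.5:
            actions.append("creep_forward")
    return actions


# Example: a car closing at 6 m/s from 10 m back, with 3 m of space ahead.
print(mitigate_rear_end(RearTrack(10.0, 6.0), front_clearance_m=3.0))
# -> ['honk', 'flash_brake_lights', 'creep_forward']
```

The point is just that the trigger is a simple time-to-collision check; whether an AV should be expected to act on it is exactly the liability question above.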