I know that people who do this seriously for a living hate armchair speculation about trolley problems and other dumb, misleading narratives that in effect sabotage valuable research.<p>Still, I wonder if progress on the beneficial use case has to come from solving the malign one first, where the only way to get to "kill no humans" is to start with "kill all humans." Sort of like throwing yourself at the ground, but missing.<p>Even to the point where cars learn to set traps for humans: we think one is parked, and then it pounces and runs someone down; it misdirects pedestrians into thinking it's turning one way, then swerves to take them out. Maybe the infotainment system has access to social media data implying someone will be in a given place at a given time, and, as in Appointment in Samarra, there is no avoiding your fate, determined at birth, of meeting your maker by being run down by an autonomous car. They could even coordinate: one car signals that a cyclist is coming, flashing its lights and honking, while another opens its door into the cyclist's path. They could hunt in packs, where data about pedestrians they didn't hit gets passed to oncoming vehicles that might still have the opportunity. The presence of a school in the area would let cars reduce human populations both by preventing humans from reaching maturity and by imposing catastrophic costs on those who had already invested in making new ones.
They could linger around known bus routes at peak times and coordinate, deciding based on where each vehicle is in its lifecycle whether to sacrifice one of their own against the many humans on a bus.<p>These scenarios are obviously horrific, but the counter-cases for them yield features that pop you out of serial 1:1 sensor-array development and into a general strategy for "self driving" that is not a replication of human abilities at all, but an entirely new logic of transportation, one that optimizes for risk reduction based on the counter-strategies to the hunter/killer use case.<p>I wonder if, to make anything remotely human, or resembling a living being at all, you need to equip it with the capabilities of a successful predator in its environment, and then have it choose not to exercise them for some higher-order incentive.