
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

Self-driving cars: Who to save, who to sacrifice?

1 point by adamrmcd about 6 years ago

1 comment

informatimago about 6 years ago
A lot of time will pass with actual self-driving cars before AI reaches the point where such moral dilemmas are even taken into account.

And the first question is whether such moral considerations should be included in what are essentially tools or mechanisms, i.e. we're not talking about general AI here.

So basically, Tesla's autopilot works by "dumb" machine learning, where artificial neural networks are built to recognize images from the cameras and determine driving commands from them. This learning is based on a big quantity of examples, recorded from existing Tesla cars, driven manually or automatically.

The important point is the big number of examples needed to learn a classification of the situation and derive a driving command.

Already, self-driving cars have fewer accidents than human drivers.

So let me ask: WHEN will we have a big enough number of morally questionable situation examples to teach those artificial neural networks what choice to make (and clearly, it will be human programmers who make the choice, by labeling the situations and indicating what choice must be made)?

This will never occur.

All those examples are theoretical. The rare cases where a human driver can find himself in a situation where he can turn left to kill a kid or turn right to kill an old man are situations a self-driving car would never find itself in, because it would have reacted differently before the situation occurred.

In practice, since there will be no learning, there will be a random bias: a determination made on other criteria. And over a thousand years of self-driving, when such a case where a moral choice could have been made does arise, the choice will just happen to be random, and may very well be the right choice. But it should not matter.
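The data-scarcity argument above can be sketched as a toy simulation (hypothetical numbers and labels, not Tesla's actual pipeline): if a supervised learner only sees labeled frames recorded from real driving, and morally ambiguous situations occur at a vanishingly small rate, the training set effectively never contains a dilemma example to learn from.

```python
# Toy sketch of the comment's argument. The command names and the
# dilemma rate are made-up illustrative values, not real fleet data.
import random

random.seed(0)

COMMANDS = ["keep_lane", "brake", "steer_left", "steer_right"]

def record_drive(n_frames, dilemma_rate=1e-7):
    """Simulate a fleet recording: each frame gets a driving-command
    label; morally ambiguous frames (which have no agreed label)
    occur at a vanishingly small rate."""
    dataset = []
    for _ in range(n_frames):
        if random.random() < dilemma_rate:
            dataset.append(("moral_dilemma", None))  # nothing to learn from
        else:
            dataset.append(("ordinary", random.choice(COMMANDS)))
    return dataset

data = record_drive(1_000_000)
dilemmas = [d for d in data if d[0] == "moral_dilemma"]
print(f"frames recorded: {len(data)}, dilemma examples: {len(dilemmas)}")
```

At these (assumed) rates, a million recorded frames yields essentially zero dilemma examples, so whatever the trained network does in such a situation is determined by other criteria, effectively at random, which is the comment's point.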