Looks like a giant pile of fear, uncertainty, and doubt. Even if we only wind up with cars that drive better than bad drivers and worse than good ones, that's a win - now the driving algorithm is debuggable and improvable software, and the gains can be generalized across the entire fleet.<p>"How to drive safely" is a skill that can be learned once - by an explicit algorithm - and then shared across every vehicle that needs it.
Not a strong article, but the comments are worth reading. I also think the main claim - that safety hasn't been tested enough to say much about it - is correct. One issue is that there is selection bias in where and when self-driving cars are tested. In real life, people will not be happy to frequently get the message "sorry, I will not drive in this weather".
Maybe, no, yes.<p><a href="https://news.ycombinator.com/item?id=8699791" rel="nofollow">https://news.ycombinator.com/item?id=8699791</a><p>For most things they'll probably be safer than the average human driver. But then they'll fail in weird ways no human ever would - somebody will die when their automatic car drives them through the back of their house because of some one-off weather condition - and people will get scared of them.
Robocars CAN be safer: it's "just" a matter of getting all the sensors/software/hardware reliable, redundant, and error-free. Whether we are "there yet", I don't know.
The comments in the article show a lack of thought and understanding; they just list fears someone may have about a new driver and apply them to automated cars instead.<p>Some genuine concerns would be those below. In my opinion the benefit still outweighs these arguments (as most of these could also apply to a regular car driven by a person), but if someone wants to have a go at self-driving cars, here's some ammo instead of blanks:<p>- Malicious hacking - people with malicious intent can attack the cars by using devices to confuse/misinform their sensors, or by finding ways to amend the car's code (perhaps amending it at source, perhaps by patching the car with malicious code). This isn't something a layman could do - so the risk from an idiot loner is small; however, if all of our infrastructure moves to AI cars, the payoff could attract some organised and skilled attacks.<p>- Poor maintenance - cars would self-diagnose and ensure issues are dealt with promptly; however, if people are still doing the repair work, they may make mistakes which the car cannot see (at some point the car has to assume it can trust its sensors) - so those mistakes may cause unpredictable issues.<p>- Geek play - if you had an AI car and knew a bit about coding, how tempted would you be to put some of your own code in there - maybe just to give it KITT's accent? Most of the car's controls would be locked down, but people would look to jailbreak their cars so they could do such customisations, which may have unforeseen side effects. This would be illegal, but some would still try.<p>- EMP - there's a concern that EMPs can be caused naturally by solar flares, which could have a significant impact on our infrastructure.
If not properly shielded, cars could be affected by such events: <a href="http://www.forbes.com/sites/tombarlow/2011/06/23/huge-solar-flares-could-spell-catastrophe-for-earth/" rel="nofollow">http://www.forbes.com/sites/tombarlow/2011/06/23/huge-solar-...</a><p>- Refusal to drive unsafely - in an emergency it's OK to break the rules ("this person will die if we don't get them to hospital in 10 minutes"). An AI car will be designed to put safety first, so it likely won't let you break the rules even in such a situation. If the rules can be broken in such a scenario, people could just declare emergencies whenever they're running late. Perhaps a compromise is to allow targeted overrides such as "get to the nearest hospital asap" rather than a generic override, and to penalise anyone misusing this facility - however, there will always be unforeseen edge cases (though these will rapidly diminish, and the cars will still be far preferable to humans given they'll at least prioritise public safety).<p>- Unforeseen special cases - Y2K, Y2K38, the Patriot missile clock drift, the North Pole divide-by-zero navigation bug, etc. are all examples of issues caused by oversight: more here <a href="http://listverse.com/2012/12/24/10-seriously-epic-computer-software-bugs/" rel="nofollow">http://listverse.com/2012/12/24/10-seriously-epic-computer-s...</a>. None are likely to be global catastrophes, but we can never rule out issues lurking unnoticed for years.
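Of those examples, Y2K38 is the easiest to demonstrate concretely. A minimal Python sketch (nothing car-specific here, just the standard 32-bit Unix-time arithmetic) shows how a signed 32-bit timestamp wraps one second past 2038-01-19 03:14:07 UTC:

```python
from datetime import datetime, timezone

# Systems that store Unix time in a signed 32-bit integer can count up to
# 2**31 - 1 seconds after the epoch (1970-01-01 00:00:00 UTC).
INT32_MAX = 2**31 - 1  # 2147483647

last_ok = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(last_ok)  # 2038-01-19 03:14:07+00:00, the last representable second

# One second later the value wraps around to the most negative 32-bit integer,
# which simulates what a 32-bit time_t would hold after the overflow.
wrapped = (INT32_MAX + 1) - 2**32  # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
# 1901-12-13 20:45:52+00:00 - the clock appears to jump back 136 years
```

A bug like this can sit dormant in embedded software for decades and then fire in every affected unit at the same instant, which is exactly the "lurking for years unnoticed" failure mode described above.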