It's nonsensical to think a human can take over at a moment's notice during a particular maneuver. It's the threat of death or an accident that makes me pay attention to every moment of driving. On autopilot I won't have that kind of attention. It's simply a trait of the human attention span.<p>That being said, every self-driving car needs a manual override because there are plenty of situations that can't be easily explained to a computer. The override could be done with a steering wheel and gas pedal, or just a joystick mode on the touchscreen.<p>For instance, someone has a big empty lot and they want to move a car from row A to row B. The new spot is exactly 50 feet to the right. I just want to back the car up, then go forward while steering over to the right a bit. Row B has no 'street address' to program into a route computer. I could click on a map, but the map doesn't know where on the grass I want to drive.<p>Even with lidar, etc., there are grey areas that self-driving algorithms are going to have trouble with. Drop a sheet of tissue in the path of a self-driving car moving at 60 MPH. The lidar will see it as a solid object and might decide a very dangerous plan - maximum braking - is the best plan, when driving through it is actually the best plan.
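To make that failure mode concrete, here is a toy Python sketch (purely hypothetical, not any real vehicle's code) of the naive "every lidar return is a solid obstacle" logic, next to a version that at least tries to estimate what the object is:

    # Purely hypothetical sketch - not any real vehicle's code.
    # Naive planner: every lidar return in the path is treated as a solid
    # obstacle, so a drifting sheet of tissue triggers maximum braking.
    def plan_for_obstacle(distance_m: float, closing_speed_ms: float) -> str:
        time_to_impact = distance_m / closing_speed_ms if closing_speed_ms > 0 else float("inf")
        if time_to_impact < 2.0:
            return "MAX_BRAKE"   # dangerous at 60 MPH with traffic close behind
        return "COAST"

    # A less naive planner also needs some guess at what the object is:
    def plan_with_classification(distance_m: float, closing_speed_ms: float,
                                 estimated_mass_kg: float) -> str:
        if estimated_mass_kg < 0.1:  # tissue, plastic bag, leaves...
            return "DRIVE_THROUGH"
        return plan_for_obstacle(distance_m, closing_speed_ms)

The hard part, of course, is the classification step itself, which is exactly the grey area lidar alone doesn't resolve.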
CA DMV's position is basically "let's try it with human backup for 3 years, then re-evaluate". That's reasonable enough.<p>Tesla may have set the field back. They released their "autopilot" as a beta, didn't provide sensors to enforce driver hands-on-wheel, shipped a system that works well only on freeways but will engage on other roads, and claimed that if it crashes, it's the driver's fault. Everybody else shipping similar capabilities (BMW, Mercedes, Cadillac, etc.) has put more restrictions on them (such as a hands-on-wheel sensor) to avoid over-reliance on the automation. Now Tesla has had to remove some features from their "autopilot".[1]<p>Google may be able to get approval for their 25MPH max speed mini-car operating without driver hands-on-wheel, on the basis of the slow speed. Google's system is much more advanced, and has a detailed model of what everything else moving around it is doing.<p>Expecting the human driver to take over if the automation fails will not work. The limits of human monitoring are well known from aircraft automation. It takes seconds to tens of seconds for a pilot to grasp the situation and recover properly if the autopilot disconnects in an upset situation. That's for pilots, who are selected and trained much better than drivers, and who go through elaborate simulator training in which failures are simulated. Watch "Children of the Magenta"[2], where a chief pilot at American Airlines talks to his pilots about this.<p>[1] <a href="http://www.bizjournals.com/sanjose/news/2015/12/16/tesla-to-limit-self-driving-functions.html" rel="nofollow">http://www.bizjournals.com/sanjose/news/2015/12/16/tesla-to-...</a>
[2] <a href="https://www.youtube.com/watch?v=pN41LvuSz10" rel="nofollow">https://www.youtube.com/watch?v=pN41LvuSz10</a>
> That cautious approach requires that the cars have a steering wheel, and a licensed driver must be ready to take over if the machine fails.<p>The article takes the tone that this will drastically limit the technology, but honestly at this point a restriction like this seems completely reasonable.<p>The cars are <i>not good enough</i> to completely drive themselves in all scenarios. At times, they will decide they can't handle a situation and safely ask a driver to take over. This is becoming more and more rare, but it's definitely still the case. Until the technology is proven to basically never need human intervention, it seems reasonable to require a person who can drive to be present.
How will this impact one of the key arguments for self-driving cars: elimination of DUIs?<p>If a driver is required to be behind the wheel, able to take over at a moment's notice, then the self-driving car's purpose is defeated - you cannot do work while the car drives, you can't use the car as a sober ride home from the bar, you can't take a nap on your commute, etc.<p>This is a legislated death sentence for self-driving cars as we envision them.
Even though it doesn't look like it, this is actually great news, since it provides a path for Google et al to get out there and build broad customer comfort with the technology.<p>And frankly I don't ever foresee driverless cars that don't have some form of manual override - there will always be situations that won't make sense to the computer but make perfect sense to me.<p>====
Edited to add this clarification, as people on HN seem overly ready to pick nits.
====<p>The larger point is that humans will be humans and will never buy a car that will interfere with their freedom to do what they think is right at that moment.<p>And if you want a better example - maybe I want to speed to get to the hospital in an emergency. Maybe I want to block another car that's leaving the scene of a crime. You get the idea.
Sounds like a good idea for the first 10 or so years of self-driving cars. Remember, not all self-driving cars will be made by Google or Tesla. Some will even be made by the spaghetti-code-Toyota, the cheating-Volkswagen, or the we-send-updates-over-unencrypted-HTTP-BMW. It's best not to jump in head-first (literally?) on this.<p>And it's not just about the "safety" code of these self-driving cars, either. Many of them probably still won't care much about hiring top-notch security experts in the first few years to protect their cars against remote hacking (which could be much easier with self-driving cars if they are connected to the Internet).
If I have to be responsible for AI and mechanical errors, I don't want an AI driving my car.<p>Either I want to read a book or take a nap, completely zoned out, or I want to be alert and paying attention to the road.<p>Sitting there doing nothing bored and disengaged, not able to read a book, not actively engaged in the activity of driving, and also held responsible by the state for failures is the worst of both worlds.
Brilliant idea on the DMV's behalf. I'm reminded of Douglas Adams's thought, in Hitchhiker's Guide to the Galaxy, that solving a problem ("Now that it's perfectly air-conditioned,") shouldn't allow the removal of redundancy ("...you won't need to open the windows, ever."). Because the problem won't REALLY be solved in all cases, and if the driver finds he's in an ugly situation where he cannot act, there's no steering wheel to handle the convenient killing machine he's riding. Can you imagine the horror?
This would be a huge setback if approved, simply because such rules are unlikely to change even when the time comes that human drivers are not only unnecessary but the most dangerous component in all situations. I understand the state's motives: liability, fear of bad press, keeping traffic cops employed (and thus the biggest avenue for random arrests), etc., but none of them justify limiting the technology simply to serve their rather short-sighted, selfish needs. At the same time, the people who could benefit most from driverless cars, like the elderly, disabled, youth, etc., are shit out of luck. And all this without a single accident yet attributed to driverless cars (afaik).
This is the exact wrong idea. Frankly, consumer-ready self-driving cars should be <i>prohibited</i> from changing into "manual mode" without a user request. And the <i>manufacturer</i> should be liable for an at-fault accident in auto-drive mode. Until these conditions can be satisfied, self-driving cars are simply not ready for prime time.<p>The proposed rule will lead to proliferation of shoddy "99%" software designed around an expectation that a human can take over at a moment's notice. But humans don't work that way, so any such exceptional condition will likely result in a crash. The only acceptable failure procedure for a self-driving car is to stop.
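A minimal sketch of that "stop, don't hand over" failure procedure, in hypothetical Python (the vehicle object and its methods are made up for illustration, not any actual API):

    # Hypothetical sketch: every fault degrades to a controlled stop.
    # The vehicle object and its methods are invented for illustration.
    from enum import Enum, auto

    class Fault(Enum):
        SENSOR_DEGRADED = auto()
        PLANNER_UNCERTAIN = auto()
        HARDWARE_ERROR = auto()

    def handle_fault(fault: Fault, vehicle) -> None:
        vehicle.signal_hazards()
        vehicle.plan_pullover_or_stop_in_lane()   # never "driver, take over NOW"
        # Manual mode is only offered once the car is stationary and the
        # occupant explicitly asks for it.
        if vehicle.is_stopped() and vehicle.occupant_requested_manual():
            vehicle.enable_manual_mode()

The point is that the failure path never depends on a human reacting within seconds.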
Does the requirement mean a control unit has to be present at all times? It would be neat to keep one as an option but also allow the console to hide the wheel, perhaps replacing it with a smaller fly-by-wire control.
I feel like it could be more unsafe this way. What if a human gets scared, takes control, and overreacts when the car is properly handling the situation? Might cause more accidents.<p>EDIT: reworded sentence so it actually made sense :P
When I read the geohot article [1] earlier I was thinking that there was no way what he was working on was legal. Would be curious to have someone with more expertise weigh in though.<p>Does this make comma.ai [1] or cruise [2] more acceptable in the eyes of the state?<p>1 - <a href="https://news.ycombinator.com/item?id=10744206" rel="nofollow">https://news.ycombinator.com/item?id=10744206</a>
2 - <a href="http://www.getcruise.com/" rel="nofollow">http://www.getcruise.com/</a>
Welp. As a blind person who has been looking forward to self-driving vehicles since the 2004 DARPA grand challenge, this is a not totally unexpected kick in the balls.
Most driving is fairly routine, but without strong AI I don't see how a self-driving car can handle things like:<p>There is a car accident on the side of the road, and there is a human cop waving cars past one at a time on the wrong side of the road.<p>I am part of a funeral procession, or a police-escorted parade, and I need to drive through red lights in order to stay in the procession. (For that one, at least, I know from the start to take manual control - except Google wants a car with no wheel, and in a world with lots of such cars people won't know how to drive.)<p>Or, there is an ambulance behind me, sirens wailing, and in order to get out of the way I must drive illegally for a moment (through a red light, or on the wrong side of the road).<p>How about something as simple as driving the car up some ramps so I can service it? How would you do that without both a steering wheel and experience in how to drive?<p>I don't see how a self-driving car can handle any of those.