Tesla has stated publicly that they believe their cars will be capable of full autonomy within three years, and that they expect regulatory approval to take another two years after that. By fully autonomous they mean you will be able to walk outside, have the car come to meet you, let it open and close the door for you, fall asleep, and wake up several hours later at your destination.<p>Yes, Tesla is taking an incremental approach to releasing the feature sets required for a fully autonomous vehicle, but no, the end-product goals for Tesla and Google are not different in kind.<p>What certainly is different is the manufacturing approach the two companies are taking. Google seems to be aiming to release a fully autonomous vehicle at version 1.0, meaning every system around the car, such as the manufacturing process, sales, and customer support, will be at version 1.0 at the same time. In contrast, when Tesla releases version 1.0 of its fully autonomous driving feature set, it will already have very mature versions of the other components: its manufacturing process, battery and drivetrain technology, sales and marketing, customer support, etc.<p>Plus, the Google cars look like something one buys for their four-year-old niece or nephew.
>Doing many thousands times better will not be done by incremental improvement<p>This assertion isn't obvious to me. In my experience, incremental updates often compound into exponential impact (especially if enough resources are put into a problem). Moore's Law is an excellent example: at any given time, researchers are working on a fixed number of improvements that each make a roughly fixed percentage impact. This is why we can see a doubling in transistor density without a huge increase in the size of the industry.<p>In the case of reducing accidents, I could see a similar exponential pattern. The first incremental step might take the accident rate from 10% to 1% by eliminating 90% of the possible sources of accidents. In the second step, researchers would again shoot to eliminate 90% of the remaining causes, bringing the rate to 0.1%. This could repeat every couple of years until the accident rate is sufficiently close to 0.
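A rough back-of-the-envelope sketch of that compounding (the starting rate and the 90% per-step reduction are the illustrative figures from above, not real accident statistics):

```python
# Illustrative only: the compound effect of repeatedly eliminating
# 90% of the remaining causes of accidents.
rate = 0.10        # assumed starting accident rate (made-up figure)
reduction = 0.90   # assumed fraction of remaining causes removed per step

for step in range(1, 6):
    rate *= (1 - reduction)
    print(f"after step {step}: accident rate = {rate:.6%}")

# after step 1: 1%, after step 2: 0.1%, after step 3: 0.01%, ...
```

Each step is the same "incremental" 90% improvement, but the cumulative effect is an order of magnitude per step, which is exactly the exponential pattern described above.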
The Tesla "autopilot" is comparable to what BMW[1] and Mercedes[2] have been shipping for years.<p>Self-driving is much harder. The first-order problems of driving on a empty road were solved by the DARPA Grand Challenge, ten years ago. The second-order problems involve dealing with other road users. That's hard, and that's what Google is working on, with considerable success. So is the CMU/Cadillac consortium, which has demonstrated their self driving car to politicians in Washington traffic.[3] Nobody seems to pay much attention to that effort, although they may be closer to a production product than anyone else. (Or not; Uber hired some of the people involved away from CMU.)<p>Self-driving cars need and have a lot more sensors than semi-auto cars. There's a lot more sensing to the sides and rear, and more forward sensing than just being able to detect the next obstacle ahead. Vision processing is far more elaborate. Google's vision system explicitly recognizes humans and bicycles.<p>Google's little 25MPH driverless car is a way for them to enter the market. At 25MPH, slamming on the brakes is a good solution to situations the system can't handle. Those things are going to be all over senior communities in a few years. Google already has higher-performance cars on the road; they can be seen all over Mountain View most days.<p>[1] <a href="http://www.bmw.com/com/en/insights/technology/connecteddrive/2013/driver_assistance/intelligent_driving.html" rel="nofollow">http://www.bmw.com/com/en/insights/technology/connecteddrive...</a>
[2] <a href="http://www.mercedes-benz-intelligent-drive.com/com/en/1_driver-assistance-and-safety/7_active-lane-keeping-assist" rel="nofollow">http://www.mercedes-benz-intelligent-drive.com/com/en/1_driv...</a>
[3] <a href="https://www.washingtonpost.com/local/trafficandcommuting/driverless-vehicles-even-in-dc-streets-an-autonomous-car-takes-a-capitol-test-run/2014/08/25/6d26baa8-06a4-11e4-8a6a-19355c7e870a_story.html" rel="nofollow">https://www.washingtonpost.com/local/trafficandcommuting/dri...</a>
> A full robocar product is only workable if you would need to correct it in decades or even lifetimes of driving.<p>I had a conversation about this with friends in Germany a few months back.<p>In most societies, a mistake that causes suffering to another individual is usually 'blamed' on the person causing the suffering. In many cases where causality is obvious, this assignment of blame is fairly straightforward. Example: Bob fell asleep, which caused him to lose control of his car, which hit a bus, which killed a child. Bob is now culpable for the suffering of the child's family. Bob is just one of many who share this kind of culpability, since others also fall asleep at the wheel. FWIW, 103M people fell asleep at the wheel last year in the US, so Bob will likely have company.<p>Now put an autonomous piece of software written by company X into Bob's car. Bob engages the autopilot, falls asleep, the autopilot software experiences an error, the software fails to alert Bob, the software loses control of the car, which hits a bus, which kills a child. Who is culpable for the family's suffering now? The software? Company X?<p>The only way for company X to both a) allow Bob to fall asleep and b) bear the culpability for a family's suffering is to get the software to the point where it only makes mistakes on a timescale that is, at a minimum, several orders of magnitude longer than the timescale on which Bob would make the same mistake.<p>The logic goes that, once a company's software kills a child, it's going to be pretty hard to keep the public from reacting negatively, even though overall suffering will decrease. The only option for company X is to require Bob to accept that he is "driving" the car and bears the culpability for any suffering the car's software may cause, or alternately, to be ready to pay a substantial settlement that offsets the suffering.
I've always wondered how autonomous cars would handle the first and last quarter mile of the journey. I'm talking specifically about portions of the trip like the driveway, getting out of the parking ramp, or navigating small alleyways where the car might be parked (and where GPS can be weak in the city). Things like even knowing which entrance to go to. Will fully autonomous cars ever be able to take us from A to B 100%? Will humans always take over the last tiny bit, where the maps aren't detailed, and do the parking? Humans love to drive around the lot to park at exactly the "perfect" spot. Cars can parallel park now, but how will cars decide exactly where to park? Will we ever be able to have the car take us through the drive-thru?<p>I personally think autopilot-like auto-cruise just on the highway and more established local roads would be good enough. The convenience afforded by having the robot take us from A to B, parked to parked, may not be worth the insane price tag it would take to get there.
If Google's 25mph car is able to slam on the brakes and avoid an incident without swerving or taking other action, then it could be argued that once we move to a fully autonomous society, 25mph (an example figure; it may not be accurate, but for the sake of debate it's what I'll stick with for this scenario) may end up being the maximum speed for safety reasons.<p>This isn't to say that trips will now take longer and that this will adversely impact our lives, because I think what will end up happening is that we will rearrange our lives so that we use these longer driving trips to sleep or work, converse with friends, do homework on the way to class, etc., and thus the time it takes to get from point A to point B becomes moot, as we are now able to be orders of magnitude more productive in our vehicles.<p>Not only will this reduce accidents, since vehicles can communicate with each other and will know the intentions of other cars and adjust accordingly instead of trying to anticipate what another car is going to do, but it will also reduce or eliminate speeding tickets and DUIs. Since speeding tickets and DUIs are a large source of revenue for municipalities, I'd expect this to evolve as well, unfortunately.
The thing the author is missing is that Google <i>can't</i> incrementally improve, since they're not in the car business. Tesla, on the other hand, has the option of either incrementally introducing autonomy to their cars or taking the Google approach of shipping a 1.0 in a big release years down the road. That they've clearly chosen the former is telling.<p>The author pretends that both companies have a choice and have chosen different strategies, but that's clearly not the case. Unless Google was planning on building a traditional car business first (a fairly ridiculous proposition), or on partnering and integrating with the supply chain of a major manufacturer (a stretch, if only to introduce fancy cruise control), they were never going to be able to iterate towards a robocar.
"Tesla’s autopilot isn’t even particularly new."<p>Guess what... Apple didn't sell the first smartphone either.<p>Someone takes a small step into car driving automation, tries to create some buzz around it, then I've got to read about how it's not a big deal. The nuances between autonomous and auto-pilot need to be discussed. We need a taxonomy.<p>I guess writing these sorts of articles is a million times easier than adding any autonomous features to any vehicle.<p>Forward progress is extremely important. It comes technically and socially. Let's hope everyone demands a car with "that stuff" they have in a Tesla. We'll get a little arms race that'll pay for further development, lives will be save (in total), and we'll asymptotically approach the vision.
"This is not a difference of degree, it is a difference of kind. It is why there is probably not an evolutionary path from the cruise/autopilot systems based on existing ADAS technologies to a real robocar."<p>Really interesting. I did not realize that.
Another interesting article about Tesla's Autopilot is this:
<a href="http://electrek.co/2015/10/30/the-autopilot-is-learning-fast-model-s-owners-are-already-reporting-that-teslas-autopilot-is-self-improving/" rel="nofollow">http://electrek.co/2015/10/30/the-autopilot-is-learning-fast...</a><p>It's learning. That is an interesting approach. I wonder how far they get by that.<p>I guess Google's car will also collect data and help Google to improve the performance. My impression so far was that it's mostly engineered work however, and not so much learned (in a machine learning way).
Another way to improve the accident rate is the other side of the robocar argument, wherein we, as humans, do a better job of driving and of watching/educating our kids.<p>I understand there are rules of the road and rights-of-way, but a right-of-way for a pedestrian in a crosswalk with the walk signal is not going to stop a bus from running the light and killing the pedestrian.<p>Not that I'm blaming the pedestrian, but it surely doesn't hurt to think defensively, look both ways, judge whether that bus is going to be able to stop, and act accordingly.
I'm always wondering how Google's car will be certified when new releases are made. Consider that an autonomous car will need X hours/miles of driving before it can be certified. Now, if Google updates one line of code, the whole certification process has to start all over.
A hundred thousand times better is only seventeen doublings.<p>Which approach has the faster exponential growth curve? The one with thousands of cars on the road, learning from each other, or the one with a few more capable cars? We'll see. Just remember to think exponentially, not linearly.
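For reference, a quick check of the doubling arithmetic behind that claim:

```python
import math

# Seventeen doublings are enough to cover a 100,000x improvement.
print(2 ** 17)                          # 131072
print(math.ceil(math.log2(100_000)))    # 17
```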
The point everyone is missing is that the reaction time of a driver and that of a computer are so different. A computer has millions of CPU cycles to estimate the best decision in the time it takes a human to even understand there will be an accident.
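A rough order-of-magnitude sketch of that gap (the 1.5-second reaction time and the 1 GHz clock are illustrative assumptions, not measured figures):

```python
# Illustrative assumptions: a typical human brake-reaction time and a
# modest onboard processor clock, just to get the order of magnitude.
human_reaction_s = 1.5    # assumed average human reaction time, seconds
cpu_clock_hz = 1.0e9      # assumed 1 GHz processor

cycles_available = human_reaction_s * cpu_clock_hz
print(f"{cycles_available:,.0f} cycles before the human even reacts")
# ~1,500,000,000 cycles, i.e. well beyond "millions"
```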
Google's car is an attempt at full autonomy.<p>Tesla's Autopilot (mostly) keeps you within the lines and regulates your speed to match the car in front of you.<p>Does this really need a full article?
You people crack me up. The self-driving car was never anything other than an elaborate PR ploy for Google, a company that derives the vast majority of its income from its advertising business. Who would you rather work for: a company that is "building a self-driving car" or a company that tracks the living shit out of everything you do on the web and ruins the web with mostly irrelevant ads? That's what I thought.<p>And they want these PR gravy trains running as long as humanly possible, so launching a real product isn't even a goal.