Cruise CEO here. Some relevant context follows.<p>Cruise AVs are being remotely assisted (RA) 2-4% of the time on average, in complex urban environments. This is low enough already that there isn’t a huge cost benefit to optimizing much further, especially given how useful it is to have humans review things in certain situations.<p>The stat quoted by nyt is how frequently the AVs initiate an RA session. Of those, many are resolved by the AV itself before the human even looks at things, since we often have the AV initiate proactively, before it is certain it will need help. Many sessions are quick confirmation requests (is it ok to proceed?) that are resolved in seconds. Some take longer and involve guiding the AV through tricky situations. Again, in aggregate this is 2-4% of time in driverless mode.<p>In terms of staffing, we are intentionally overstaffed given our small fleet size in order to handle localized bursts of RA demand. With a larger fleet we expect to handle bursts with a smaller ratio of RA operators to AVs. Lastly, I believe the staffing numbers quoted by nyt include several other functions involved in operating fleets of AVs beyond remote assistance (people who clean, charge, maintain, etc.), which also improve significantly with scale and over time.
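The pooling argument in the last paragraph (bigger fleet, smaller operator-to-AV ratio) can be sketched with a toy model. Assuming each AV independently needs assistance ~3% of the time (the midpoint of the 2-4% figure above; everything else here is illustrative, not Cruise data), the operator pool needed to absorb bursts grows more slowly than the fleet:

```python
from math import comb

def operators_needed(fleet_size, p_assist=0.03, overflow_risk=0.01):
    """Smallest operator count k such that P(more than k AVs need
    assistance at the same instant) < overflow_risk, modeling each AV
    as an independent Bernoulli(p_assist) draw. Toy model only."""
    for k in range(fleet_size + 1):
        # Binomial tail: probability that demand exceeds k operators
        tail = sum(comb(fleet_size, n) * p_assist**n * (1 - p_assist)**(fleet_size - n)
                   for n in range(k + 1, fleet_size + 1))
        if tail < overflow_risk:
            return k
    return fleet_size

for fleet in (10, 100, 1000):
    k = operators_needed(fleet)
    print(fleet, k, k / fleet)
```

Under these assumptions a 10-car fleet needs roughly one operator per five cars to keep overflow risk below 1%, while a 1000-car fleet needs fewer than one per twenty: the pooling effect the comment describes.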
Cruise is leveraging human-in-the-loop to expand faster than they otherwise would, with the hope that they will solve autonomy later to bring this down.<p>I don't think this is a viable strategy though given the enormous costs and challenges involved.<p>There doesn't exist a short-term timeline where Cruise makes money, and the window is rapidly closing. They needed to expand to show big revenues, even if they had to throw 1.5 bodies per car at the problem.<p>Prediction: GM will offload Cruise, a buyer will replace leadership and lay off 40% of the company. The tech may live to see another day, but given the challenges that GM has generally (strikes, EVs, etc.), they can no longer endlessly subsidize Cruise.
> Two months ago, Kyle Vogt, the chief executive of Cruise, choked up as he recounted how a driver had killed a 4-year-old girl in a stroller at a San Francisco intersection. “It barely made the news,” he said, pausing to collect himself. “Sorry. I get emotional.”<p>...<p>> Cruise’s board has hired the law firm Quinn Emanuel to investigate the company’s response to the incident, including its interactions with regulators, law enforcement and the media. / The board plans to evaluate the findings and any recommended changes. Exponent, a consulting firm that evaluates complex software systems, is conducting a separate review of the crash, said two people who attended a companywide meeting at Cruise on Monday.<p>After the first [edit: the first performative charade, about the little girl in a stroller], why should we trust the second isn't also a performative charade? What independence or credibility does some hired law firm have that the company itself does not? How about using an independent third party?
Having to be remotely operated every 2.5 to 5 miles seems to defeat most of the economics of self-driving cars.<p>Back-of-the-napkin math: cars average 18 mph in cities, so that's an intervention every 10-20 min.
Let’s assume each takeover lasts 1 min, and that you need remote drivers geographically close for latency reasons, so at the same hourly rate. To guarantee you can answer every request immediately, overlapping requests (a birthday-paradox-style effect) mean you end up needing something like 30 drivers for 100 vehicles? It’s not that incredible of a tech…
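The guesstimate above can be checked with a quick Monte Carlo sketch. This just formalizes the numbers in the comment (100 cars, ~1-minute takeovers, one every 10-20 minutes per car) and is purely illustrative:

```python
import random

def peak_simultaneous(cars=100, sim_minutes=600, mean_gap=15.0,
                      service=1.0, trials=20):
    """Simulate each car requesting a 1-minute takeover on average every
    `mean_gap` minutes (exponential gaps), and report the worst number of
    simultaneous takeovers seen -- i.e. the operator pool size needed so
    no request ever waits. Parameters are the comment's guesses."""
    worst = 0
    for _ in range(trials):
        events = []  # (time, +1) at takeover start, (time, -1) at end
        for _ in range(cars):
            t = random.expovariate(1 / mean_gap)
            while t < sim_minutes:
                events.append((t, +1))
                events.append((t + service, -1))
                t += service + random.expovariate(1 / mean_gap)
        events.sort()
        load = peak = 0
        for _, delta in events:  # sweep the timeline, tracking concurrency
            load += delta
            peak = max(peak, load)
        worst = max(worst, peak)
    return worst

random.seed(42)
print(peak_simultaneous())
```

With a 15-minute mean gap the average concurrency is about 100/16 ≈ 6 takeovers, and even the worst peak over many simulated shifts tends to stay well under 30, so the 30-driver figure above reads as a comfortable upper bound rather than a minimum.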
Look, isn't remotely assisted driving something unbelievably stupid?
Why should I, when I am in my "driverless" car, rely on someone else who is remote, who needs to be updated at all times about the situation (when things can go wrong in tenths of a second while driving), who needs to react, and who is not as motivated as I am (I am risking my life, while they sit somewhere without as much skin in the game)?
It makes a lot more sense, then, to have just an assisted driving car, or a semi-autonomous car where the "assistant" to the AI is me and not someone else.
If humans need to remotely intervene for a car in motion, that implies it could impact safety.<p>If that's correct, then the remote signaling of a problem and the human's response and control must have flawless availability and low latency. How does Cruise achieve that?<p>Cellular isn't that reliable. Maybe I misunderstand something.
I've said this here before and I will repeat it:<p>An overwhelming majority of Americans will choose 45,000 deaths in car crashes annually (last year's number) in human-driven cars over 450 deaths/year with all self-driving cars.<p>In the American (and probably ALL) mind(s), human agency trumps all.
"That is a rare level of talent," said Sam Altman, head of the Y Combinator startup incubator. "I can see Kyle being the next CEO of GM."<p><a href="https://www.vox.com/2016/3/11/11586898/meet-kyle-vogt-the-robot-guru-who-just-sold-his-second-billion-dollar" rel="nofollow noreferrer">https://www.vox.com/2016/3/11/11586898/meet-kyle-vogt-the-ro...</a>
This was a plot point in Captain Laserhawk: All the self-driving cars and flying drones were actually being remotely piloted by prisoners in a massive VR facility.<p><a href="https://en.wikipedia.org/wiki/Captain_Laserhawk:_A_Blood_Dragon_Remix" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Captain_Laserhawk:_A_Blood_Dra...</a>
> <i>Company insiders are putting the blame for what went wrong on a tech industry culture — led by the 38-year-old Mr. Vogt — that put a priority on the speed of the program over safety. [...] He named Louise Zhang, vice president of safety, as the company’s interim chief safety officer [...]</i><p>I hope Chief Safety Officer isn't just a sacrificial lamb job, like CISO tends to be.<p>Is the "interim" part hinting at insufficient faith, and maybe future blame will be put on how the VP Safety performed previously (discovered after the non-interim person is hired)?<p>> <i>[...] and said she would report directly to him.</i><p>Is the CSO nominally responsible for safety?<p>Does the CSO have any leverage to push back when their recommendations aren't taken, other than resigning?
It sounds like there are a lot of people working at GM who don't like Cruise and are willing to complain to the NYT about it. One of those frustrating "we're a startup inside a large company" things.<p><i>Cruise employees worry that there is no easy way to fix the company’s problems.</i><p><i>Company insiders are putting the blame for what went wrong on a tech industry culture.</i><p>What, because car companies with car company culture are doing such a great job building self-driving cars?<p>I'm rooting for both Cruise and Waymo here. Self-driving cars would be great for humanity. Good luck to the teams working hard to make them happen.
Here we go again with a CEO who proclaims "autonomous cars are safer than human-driven cars." And their definition of "safer" conveniently ignores that autonomous cars <i>create new failure modes</i> which do not exist in manually-driven cars.<p>It may be true that statistically fewer fatalities per mile happen with autonomous cars than with human-driven cars. But that's irrelevant. If the car kills one person because it did something utterly stupid like driving under a semi crossing the highway or dragging a pedestrian along the ground, the public will not accept it.<p>This is another example of the uncanny valley problem: Most "smart" devices are merely dumb in new ways. If your "smart" gizmo is only smart in how it collects private information from people (e.g. smart TVs), or it's merely smarter than a toggle switch, that's not what the public considers smart. It has to be smarter than a reasonably competent human <i>along almost all dimensions</i>; otherwise you're just using "smart" as a euphemism for "idiot savant." Self-driving cars are a particularly difficult "smart" problem because lives are at stake, and the number of edge cases is astronomical.
> Having to be remotely operated every 2.5 to 5 miles<p>Regarding Cruise's suspension, how likely is it that the backup driver restarted the car to drive again after the car stopped with the pedestrian below?
This is the same wacky theory I've been spreading about Tesla self-driving for a year or so: "Imagine Tesla self-driving is like some dude driving your car via videogame on the other side of the world."<p>Most people are pretty sure my theory is wrong. I have absolutely no evidence it's true; it's just some crazy idea that popped into my head one day.
I get a feeling Cruise is going to get sold off within the next 5 yrs. Waymo will likely be the leading provider for “autonomous vehicle” software/hardware.<p>Government Motors can only sustain such a loss on their books for a short time. This is probably why Vogt has been pushing so hard for market dominance.
It's not an allegation. It's the same as using human feedback for tuning large language models. There are no autonomous cars currently regardless of what is written on the marketing brochures. In various "emergency" situations the cars phone home and ask a human operator to take over the controls.
I thought being a social media moderator and being constantly exposed to violence, racism, and child pornography was bad. Having your whole day being a series of "quick, don't let these people die!" moments seems like the worst tech job on earth.
The title isn't news at all, since every single trustworthy autonomous driving solution MUST HAVE human operators somewhere to take over, but the actual article is a good summary of Cruise's current situation and, I'd guess, the competition's as well.
Can we just admit that this likely isn’t possible in our lifetime and put more money into early childhood education, better healthcare and geriatric care?