Interesting. Watch at 1/3 speed or so to see it in real time. (Self-driving car videos tend to be published sped up, so you don't see the mistakes.)<p>The key part of this is, how well does it box everything in the environment? That's the first level of data reduction and the one that determines whether the vehicle hits things. It's doing OK. It's not perfect; it often misses short objects, such as dogs, backpacks on the sidewalk, and once a small child in a group about to cross a street. Fireplugs seem to be frequently misclassified as people. Fixed obstacles are represented as many rectangular blocks, which is fine, and it doesn't seem to be missing important ones. No potholes seen; it's not clear how well it profiles the pavement. This part of the system is mostly LIDAR and geometry, with a bit of classifier. Again, this is the part of the system essential to not hitting stuff.<p>This is a reasonable approach. It looks like Google's video from 2017. It's way better than the "dump the video into a neural net and get out steering commands" approach, the "lane following plus anti-rear-ending, and pretend it's self-driving" approach, or the 2D view-plane boxing seen in some of the early systems.<p>Predicting what other road users are going to do is the next step. Once you have the world boxed, you're working with a manageable amount of data. A lot of what happens is still determined by geometry. Can a bike fit in that space? Can the car that's backing up get into the parking space without being obstructed by our vehicle? Those are geometry questions.<p>Only after that does guessing about human intent really become an issue.
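<p>To make the geometry point concrete, here's a toy sketch (my own illustrative code and numbers, nothing from Zoox): once everything is boxed, "can a bike fit in that gap?" reduces to interval arithmetic.

    # Obstacles reduced to boxes: the bike question is pure geometry.
    # Boxes are (x_min, x_max) intervals measured across the lane.
    def gap_between(box_a, box_b):
        """Clear width between two non-overlapping boxes."""
        left, right = sorted([box_a, box_b], key=lambda b: b[0])
        return max(0.0, right[0] - left[1])

    BIKE_WIDTH = 0.8   # meters, roughly handlebar width (my guess)
    CLEARANCE = 0.5    # safety margin on each side (my guess)

    def bike_fits(box_a, box_b):
        return gap_between(box_a, box_b) >= BIKE_WIDTH + 2 * CLEARANCE

    # Parked car occupying x in [0.0, 1.9], our vehicle at [4.0, 6.0]:
    print(bike_fits((0.0, 1.9), (4.0, 6.0)))  # True: 2.1 m gap > 1.8 m needed

The point being that none of this needs intent prediction; it's cheap checks over a small set of boxes.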
It really <i>really</i> bothers me that these folks are using a live city with real, non-volunteer test subjects of all ages (little kids and old folks use public streets) as a test bed for their massive car-shaped robots.<p>It's bad enough that people are driving cars all over the place; car collisions have killed more Americans than all the wars we've fought put together.<p>I'm one of those people who say, "Self-driving cars can't happen soon enough." But I don't think that justifies e.g. killing Elaine Herzberg.<p>Ask yourself this: why start with <i>cars?</i> Why not make a self-driving golf cart? Make it out of nerf (soft foam) and program it to never go so fast that it can't brake in time to prevent a collision.<p>Testing these heavy, fast, buggy robots in crowds of people is extremely irresponsible.
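<p>The "never go faster than you can stop" rule is just kinematics. A back-of-the-envelope sketch (my numbers, not a spec for anything): cap speed so reaction distance plus braking distance fits inside the sensing range.

    import math

    def max_safe_speed(sense_range_m, decel=4.0, reaction_s=0.5):
        """Largest v (m/s) satisfying v*t_r + v^2/(2a) <= sense_range_m."""
        a, tr, d = decel, reaction_s, sense_range_m
        # Positive root of v^2 + 2*a*tr*v - 2*a*d = 0
        return -a * tr + math.sqrt((a * tr) ** 2 + 2 * a * d)

    # A nerf golf cart seeing 15 m ahead could still do ~9 m/s (~33 km/h):
    print(round(max_safe_speed(15.0), 1))  # ~9.1

Even with short sensor range and soft braking, that's plenty of speed for a campus or neighborhood shuttle.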
What is the general view on Zoox's progress relative to other non-Waymo players, such as Argo, Aurora, and Cruise? There are the widely reported disengagements per mile, but most robotics people know those numbers are just smoke and mirrors meant to make the regulators go away (disclosure: I studied/researched robotics in grad school).
Yawn. Good lane markings, no rain/snow or other bad weather, perfect road surfaces.<p>Just like all other self-driving demos. I'd like to see a demo like this on snow-covered roads with no lane markings visible. I think that would say a lot more about the system's ability to deal with an imperfect world.
All in all I'm quite impressed with the demonstration. It was way more thorough than previous videos I've seen. The main things the car is failing at, from what I see, are the hard things: object permanence and ad-hoc reasoning. So no surprises.<p>Regarding object permanence: I was impressed overall with their detection. Still, you could see kids walking close to parents blink in and out of the car's awareness. Now, I'm not saying humans are very good at tracking a multitude of actors, so at some point the machines will be "good enough". But that point seems way off when significant objects like kids can just disappear from awareness when they pass behind a stroller.<p>And about the ad-hoc reasoning: they have the whole city mapped out! Including traffic lights and turn restrictions. I'm not even clear whether they try to detect the signs at all. I'd assume that they have an operations center that hot-patches the map with everything cropping up during the day. The cars would send in unexpected changes to the road, and the center would classify those changes and patch the map. Meaning the car is tethered to that feed and not autonomous in the strictest sense. Sure, such a center would be a marginal cost given a large enough fleet. Still, it's a subscription you'd need for your own robocar.<p>They mention a lot of things they are prepared for. And I can't help but think "oh, they're really good" when they say "detect backed-up lanes" or "creep into intersections". But that always leaves the question of what happens when they're not prepared for something. When the rules don't fit. Can the car go over a curb if the situation warrants it? Does it back out of a blocked-off section? Is it even able to weigh whether backing out is an option at this point?<p>So I'd like to see a "what we're currently stuck at" video. But I understand one can't very well attract investors with such a video.
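<p>For the object-permanence issue, the fix I'd naively expect is track coasting: when the kid disappears behind the stroller, predict the track forward for a while instead of dropping it. A minimal hypothetical sketch (nothing to do with Zoox's actual tracker):

    class Track:
        MAX_COAST_FRAMES = 30  # ~1.5 s at 20 Hz before giving up (my guess)

        def __init__(self, pos, vel):
            self.pos, self.vel = pos, vel
            self.frames_unseen = 0

        def update(self, detection):
            if detection is not None:          # matched a fresh detection
                self.pos, self.frames_unseen = detection, 0
            else:                              # occluded: predict, don't delete
                self.pos = (self.pos[0] + self.vel[0],
                            self.pos[1] + self.vel[1])
                self.frames_unseen += 1

        @property
        def alive(self):
            return self.frames_unseen <= self.MAX_COAST_FRAMES

    t = Track(pos=(10.0, 2.0), vel=(0.0, 1.2))  # kid walking behind a stroller
    t.update(None)                              # occluded this frame
    print(t.alive, t.pos)                       # True (10.0, 3.2)

The hard part, of course, is re-associating the coasted track with the right detection when the kid reappears, which is presumably where these systems still struggle.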
> Handling yellow lights properly, involves us having to predict how long they will remain yellow for<p>No. That isn't how yellow lights work in the US. If the light turns yellow and you have enough space/time to make a safe stop, you do it. There's no need to predict the remaining time in the yellow phase. We don't need robot cars bending these rules.
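<p>The rule is mechanical enough to write down. A toy sketch with illustrative numbers of my own (~3 m/s^2 as a comfortable braking rate, 1 s reaction time); note there's no yellow-duration prediction anywhere:

    def should_stop_on_yellow(speed_ms, dist_to_line_m, decel=3.0, reaction_s=1.0):
        """Stop iff we can halt before the stop line at a comfortable rate."""
        stopping_dist = speed_ms * reaction_s + speed_ms ** 2 / (2 * decel)
        return stopping_dist <= dist_to_line_m

    # At 13.4 m/s (30 mph): 50 m out, stop; 20 m out, proceed through.
    print(should_stop_on_yellow(13.4, 50.0))  # True
    print(should_stop_on_yellow(13.4, 20.0))  # False

If you can't stop comfortably, you're already in the dilemma zone and you continue; the yellow is timed by traffic engineers to cover exactly that case.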
This is really cool, but the environment is also really simple, and I think we're definitely at least 15+ years out before self-driving cars can handle somewhat challenging situations as well as humans.<p>Just try to put one of these vehicles in a situation with varying road width, no markings, and snow with no sticks to mark the edges, so you really have to pay attention to where the road actually is. What would this do if you met a car on such a road? Try to figure out who should back up, and maybe reverse to the last place where it's wide enough? Do random tests to check for grip every now and then? It also needs to know whether the road is salted, understand if the salt is working, and so on and on and on...
This is super cool! I'm wondering how the car would react if:<p>* someone parked on the side opens their door too quickly and collides with the Zoox car.<p>* there is a car not moving in front, and the Zoox car cannot see what's in the other lane without backing up to get a better view.<p>I'm also super impressed at how it can understand where the lane is in this five-lane intersection that crosses a tram line. Even I couldn't understand where I would have had to drive!
The two turns (one left and one right-on-red) leading up to getting to Market Street in the latter half of the video struck me as odd; the left turn looked like a bit of a lane sweep, and the right-on-red looked dubious (is it legal to turn right on red if you're not in the far-right lane?).<p>SF intersections are hard, though, and the computer seemed to handle them about as well as I would've.
I really appreciate the calm background music.<p>I think background music is important, especially on such long explanatory videos, but it often becomes a reason for me to turn off a video if the music gets too aggressive.
Besides the sheer complexity of the situations described in this video, I wonder how these vehicles will deal with traffic rules that vary between countries (where even the road signs can be different).
This demo is not informative as to readiness for scalable L4 deployment. For that, you'd need to see the breadth and accuracy of the perception features under the hood of the intent prediction, and what happens at the tail end with the arbitrary situations that occur in urban driving environments.
Cheap criticism: the video starts with (I paraphrase) "This is 1 hour of driving", so the last thing I expected after the fade-out/in was a man in a weird shirt... and then I noticed the video is only about 27 minutes long.<p>Edit to add: after that I started watching it; it's actually a video of an impressive AI.