It seems a little weird to focus on this. The article itself admits it may be standard practice for makers of self-driving tech to disable any built-in automated driving features in the car. This makes sense: presumably there's no good way to talk to the built-in system, and having a second system running that can't coordinate with the self-driving system would likely make the combined system <i>less</i> safe. Even if communication between the two were possible, I'd still imagine that having a single system operating the car would be much more desirable.
This seems like an obvious thing to do: if you’re trying to test the AI, remove complicating, possibly contradictory OEM systems.<p>But compare with the other discussion currently on the front page, where users point out system subsumption is a basic safety principle. You need a safety system to fall back on if the AI fails (as it will, at least at this stage). It seems grossly negligent if Uber didn’t re-implement collision avoidance at a lower level.<p>Edit: link to discussion I’m thinking of <a href="https://news.ycombinator.com/item?id=16681611" rel="nofollow">https://news.ycombinator.com/item?id=16681611</a>
I was wondering why it's necessary for self-driving tech to be tested in "live" mode, instead of having the software passively log all event data and then analyzing it to see what the software "would have done" compared to the human driver.<p>A lot more testing data could then be gathered by outfitting random vehicles (taxis, etc.) with the tech and analyzing (and further refining the software around) every event where the software and the human driver differed in opinion (i.e., whenever the vehicle abruptly changes speed, did the software detect that it should have hit the brakes at or before the same time).
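The comparison described above — "shadow mode" logging — can be sketched in a few lines. This is a minimal illustration with hypothetical names and a made-up divergence threshold, not any vendor's actual pipeline: each logged frame records what the human actually did and what the software would have commanded, and we flag timestamps where the two disagree sharply.

```python
from dataclasses import dataclass

# Hypothetical tuning value: how far apart (as a fraction of full
# brake) the two commands must be before we flag the event for review.
BRAKE_DIVERGENCE_THRESHOLD = 0.3

@dataclass
class Frame:
    timestamp: float
    human_brake: float    # 0.0-1.0, actual pedal position
    planned_brake: float  # 0.0-1.0, what the software would have done

def divergence_events(frames):
    """Yield timestamps where the software's planned braking
    differs sharply from what the human driver actually did."""
    for f in frames:
        if abs(f.planned_brake - f.human_brake) > BRAKE_DIVERGENCE_THRESHOLD:
            yield f.timestamp

# Example log: the human brakes hard at t=2.0 but the planner barely would.
log = [
    Frame(0.0, 0.0, 0.0),
    Frame(1.0, 0.1, 0.1),
    Frame(2.0, 0.9, 0.2),  # disagreement worth a human review
]
print(list(divergence_events(log)))  # -> [2.0]
```

Every flagged timestamp is exactly the kind of event the comment describes: a place where the software's opinion differed from the driver's, and therefore a candidate for refining the model.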
Volvo is also manufacturing a car with a pedestrian airbag, the V40: <a href="https://support.volvocars.com/uk/cars/Pages/owners-manual.aspx?mc=Y555&my=2015&sw=14w46&article=7fceb4e7544b4fbbc0a801e800b0ef6b#" rel="nofollow">https://support.volvocars.com/uk/cars/Pages/owners-manual.as...</a><p>Both Volvo and Uber could have opted to use this car instead. Even choosing a SUV is question-worthy, as they're known to be more dangerous for pedestrians due to higher bumper heights: <a href="https://en.wikipedia.org/wiki/Criticism_of_sport_utility_vehicles#Risk_to_other_road_users" rel="nofollow">https://en.wikipedia.org/wiki/Criticism_of_sport_utility_veh...</a><p>IMO there is at least some negligence on their part for not choosing a car that is more likely to protect pedestrians.
There you go. I was watching videos of other drivers passing the same road. Even the Google XL is better quality than the Uber cam.
Also, all the lidar gimmickry doesn't look like it worked well there, per the Intel statement in the article.
I hope some sort of requirements for this equipment will be defined, or at least technically benchmarked, before it's allowed on the road.<p>>
Intel Corp.’s Mobileye, which makes chips and sensors used in collision-avoidance systems and is a supplier to Aptiv, said Monday that it tested its own software after the crash by playing a video of the Uber incident on a television monitor. Mobileye said it was able to detect Herzberg one second before impact in its internal tests, despite the poor second-hand quality of the video relative to a direct connection to cameras equipped to the car.
That's quite the opposite of how autonomous systems should be built in the first place: start with isolated modules that do <i>not</i> depend on each other, and stack them on top of each other as the abstraction level rises.<p>If you have an automatic but brutal, on-off style emergency-braking module based on, say, lidar, then it's much easier to develop another, perhaps less aggressive but much smarter, auto-braking module based on the regular camera inputs. You can count on the failsafe system to act if higher-level modules fail. With braking in particular, a couple of redundant low-level emergency-brake systems might even be a good idea before you consider any of the higher-level systems. You really don't want to hit jaywalking pedestrians or wandering drunks, because both exist in reality.<p>Similarly, once the car has a couple of obstacle-detection and auto-braking systems up and running, it's easier to work on autonomous driving, because you can be confident that your computer won't be able to drive the car into anything: these isolated, separate systems will stop the car.<p>Consequently, when you're developing the navigation and routing system, it has to be able to rely on the driving functions that deal with the present-time traffic situation: keeping up with the flow of traffic, slowing down, changing lanes, etc. Again, same as with emergency brakes: to develop a smarter higher-level system you need lower-level failsafes in place.<p>In autonomous driving, a failure should mean a harsh but unnecessary full stop or the inability to continue, not a collision or the inability to stop. They should definitely have left Volvo's own system on as an additional failsafe mechanism.
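The layering described above can be sketched as a simple arbitration rule. This is a toy illustration with assumed module names and distance thresholds, not a real controller: each module independently proposes a brake command, and the most conservative (hardest-braking) proposal wins, so the brutal low-level failsafe can always override a failed high-level planner.

```python
def lidar_emergency_brake(obstacle_distance_m):
    """Brutal on-off failsafe: full brake if anything is too close."""
    return 1.0 if obstacle_distance_m < 5.0 else 0.0

def camera_auto_brake(obstacle_distance_m):
    """Smarter, smoother module: ramp braking up as obstacles near."""
    if obstacle_distance_m >= 30.0:
        return 0.0
    return min(1.0, (30.0 - obstacle_distance_m) / 25.0)

def planner_brake(desired_speed_delta_mps):
    """High-level driving logic; may be wrong, or may want to cruise."""
    return max(0.0, min(1.0, -desired_speed_delta_mps / 10.0))

def arbitrate(obstacle_distance_m, desired_speed_delta_mps):
    """Independent modules each propose a brake command in [0, 1];
    the hardest braking wins, so low-level failsafes always dominate."""
    proposals = [
        lidar_emergency_brake(obstacle_distance_m),
        camera_auto_brake(obstacle_distance_m),
        planner_brake(desired_speed_delta_mps),
    ]
    return max(proposals)

# The planner wants to accelerate, but a pedestrian is 3 m ahead:
print(arbitrate(3.0, +2.0))    # -> 1.0 (emergency brake overrides)
# Open road, planner cruising: nobody brakes.
print(arbitrate(100.0, +2.0))  # -> 0.0
```

The key property is that removing or breaking any higher-level module can only make the car stop unnecessarily, never make it fail to stop — which is exactly the failure mode the comment argues for.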
With friends like this, why would Uber need enemies? We now have public statements from both Velodyne (makes the LiDAR that Uber uses) and Aptiv (makes the hardware used in the Volvos that Uber uses) saying “it wasn’t us”.<p>Volvo is the gentleman here, with its “can’t speculate on the cause of the incident” statement.
Maybe disabling Volvo's built-in autonomous braking capability is necessary for operating an autonomous driving system. It seems intuitive that multiple autonomous systems could clash.<p>But it also seems like "it works better than the stock production autonomous braking" should be a gating factor for putting these vehicles on the road.