TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

Ask HN: Autonomous Cars Simulation

12 points | by coderunner, about 6 years ago
There are a couple of companies in the business of using 3D simulation for training autonomous cars, such as Waymo, Cruise, Nvidia, and Applied Intuition. I don't quite understand their product, though.

1. Are the object detectors trained in the simulation also applied to real-world data, or is only the decision-making part (e.g. "it's safe to turn left here") transferred to the real vehicle, while detectors trained on real-world images of cars, people, etc. are used?

2. Tangentially, I thought that detectors trained on computer-generated images generally did not transfer well to real-world images; e.g. training on a bunch of images of 3D-modeled humans won't work well when testing on pictures of real humans. Is this not true?

3 comments

hacoo, about 6 years ago
I work on full-3D simulations (using a game engine) for an autonomous car company. I can't speak for every AV company, but in my experience, simulators are used far more for testing than training. The appeal of using a 3D game engine for simulation is that you can create inputs to the car's perception system. Without this ability, you're stuck either replaying recorded data, or spoofing out perception and testing only planning/controls and down. These two approaches are actually extremely powerful, so the vast majority of AV simulation testing is not done in full 3D.

There are some situations where 3D simulation is useful, though. First, it allows you to run your AV software in its entirety (i.e., without spoofing perception), making for a very complete integration test. A 3D sim can capture complex, interesting occlusions that other sims cannot. Another fairly common use case is experimenting with new sensor setups before they're added to the car.

As for training, it's mostly research at this point. I think there's promise in using synthetic data to supplement real-world training data for perception systems.

There are a number of companies trying to market simulation "platforms" to AV makers. I think there's potential for one of these products to gain traction, but it's a difficult sell. AVs are enormously complicated; a 3rd-party product would need to both beat in-house sims and support a lot of very specific (and likely proprietary) AV features.
Datenstrom, about 6 years ago
Simulation-to-real training is an active area of research, and I'm not aware of it being used anywhere in production for critical systems. I have not seen anything better than the "Learning Dexterity" [1] paper that was published last year.

[1]: https://openai.com/blog/learning-dexterity/
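The core trick in the Learning Dexterity work was domain randomization: simulator parameters are re-sampled every episode, so the policy never overfits to one simulated world and becomes robust to the sim-to-real gap. A minimal sketch of that loop is below; the parameter names and ranges are hypothetical (a real setup would draw them from the simulator's own API), and the environment/policy steps are elided as comments.

```python
import random

# Hypothetical simulator parameters and ranges; real ones would come
# from the simulation engine's configuration API.
RANGES = {
    "friction":    (0.5, 1.5),   # surface friction coefficient
    "mass_scale":  (0.8, 1.2),   # multiplier applied to object masses
    "light_level": (0.3, 1.0),   # rendered scene brightness
    "cam_noise":   (0.0, 0.05),  # std of pixel noise added to camera frames
}

def sample_domain(rng=random):
    """Draw one randomized environment configuration."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANGES.items()}

def train(num_episodes, rng=random):
    """Outline of a domain-randomized training loop (env/policy elided)."""
    configs = []
    for _ in range(num_episodes):
        cfg = sample_domain(rng)
        configs.append(cfg)
        # env.reset(**cfg)  ->  run episode  ->  update policy
    return configs

for cfg in train(3, random.Random(0)):
    print(cfg)
```

Each episode sees a slightly different physics and rendering configuration, which is what makes the resulting policy transferable despite never training on real data.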
natch, about 6 years ago
Taking just the images subset of this, though it should apply to other types of data as well:

It doesn't have to be computer-generated images. It can also be computer-altered images (think small rotations, blurring, cropping, etc.), which should work pretty well, in part because real-world images are themselves sometimes rotated, blurred, cropped, etc.
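What this comment describes is standard data augmentation. A minimal stdlib-only sketch, treating an image as a 2D list of grayscale pixels (in practice you would use a library pipeline such as torchvision's transforms, but the idea is the same):

```python
import random

def hflip(img):
    """Mirror the image left-to-right."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def crop(img, top, left, h, w):
    """Take an h x w window starting at (top, left)."""
    return [row[left:left + w] for row in img[top:top + h]]

def box_blur(img):
    """3x3 mean filter with clamped edges; a crude stand-in for Gaussian blur."""
    H, W = len(img), len(img[0])
    out = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            vals = [img[min(max(y + dy, 0), H - 1)][min(max(x + dx, 0), W - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) // 9
    return out

def augment(img, rng=random):
    """Apply a random subset of transforms to one training image."""
    for op in (hflip, rotate90, box_blur):
        if rng.random() < 0.5:
            img = op(img)
    return img

img = [[0, 50, 100], [150, 200, 250], [10, 20, 30]]
print(hflip(img)[0])     # [100, 50, 0]
print(rotate90(img)[0])  # [10, 150, 0]
```

Each altered copy counts as an extra labeled example, which is why this works as a cheap substitute for fully synthetic data.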