NVIDIA's Isaac Gym project demonstrated that GPUs can run massively parallel simulation of gym-style environments, stepping thousands of environment instances simultaneously on a single device (a minimal sketch of the pattern is at the end of this post). Details are in the following paper:

[1] Makoviychuk, Viktor, et al. "Isaac Gym: High Performance GPU-Based Physics Simulation For Robot Learning." Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). 2021.

When it was released, people on Twitter called it "the MNIST moment for reinforcement learning." Over the past year, I have seen several follow-up works and tested NVIDIA's implementations myself. For example, this blog post walks through a demo:

https://towardsdatascience.com/a-new-era-of-massively-parallel-simulation-a-practical-tutorial-using-elegantrl-5ebc483c3385

The question is: has this technique advanced reinforcement learning as much as expected?
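
To make "massively parallel simulation" concrete, here is a minimal sketch of the batched-stepping pattern in PyTorch. It is a toy 1-D point-mass task of my own, not the Isaac Gym API; the point is only that the state of every environment lives on the GPU and a single tensor op advances all of them at once, with no Python loop over environments.

    import torch

    class BatchedPointMassEnv:
        """num_envs independent 1-D point masses; all state stays on the GPU."""

        def __init__(self, num_envs, device="cuda", dt=0.05):
            self.num_envs, self.device, self.dt = num_envs, device, dt
            self.pos = torch.zeros(num_envs, device=device)
            self.vel = torch.zeros(num_envs, device=device)

        def reset(self):
            self.pos = torch.rand(self.num_envs, device=self.device) * 2 - 1
            self.vel = torch.zeros_like(self.vel)
            return self.obs()

        def obs(self):
            return torch.stack([self.pos, self.vel], dim=-1)  # (num_envs, 2)

        def step(self, actions):
            # One fused update advances every environment at once.
            self.vel = self.vel + actions.clamp(-1.0, 1.0) * self.dt
            self.pos = self.pos + self.vel * self.dt
            reward = -self.pos.abs()              # reward staying near the origin
            done = self.pos.abs() > 2.0
            # Auto-reset finished environments in place, again fully batched.
            self.pos = torch.where(done, torch.rand_like(self.pos) * 2 - 1, self.pos)
            self.vel = torch.where(done, torch.zeros_like(self.vel), self.vel)
            return self.obs(), reward, done

    if __name__ == "__main__":
        device = "cuda" if torch.cuda.is_available() else "cpu"
        env = BatchedPointMassEnv(num_envs=4096, device=device)
        obs = env.reset()
        for _ in range(100):                      # 100 steps x 4096 envs
            obs, reward, done = env.step(torch.randn(env.num_envs, device=device))
        print(obs.shape, reward.mean().item())

Running thousands of environments this way keeps both simulation and policy inference on the GPU, which is where Isaac Gym's reported speedups come from: no CPU-GPU copies of observations and actions in the inner loop.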