This approach is interesting in that it applies image-to-image diffusion modeling to autoregressively generate 3D-consistent novel views, starting from even a single reference 2D image. Unlike some other approaches, a NeRF is not needed as an intermediate representation.
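For intuition, here's a minimal JAX sketch of what such an autoregressive sampling loop could look like: each novel view starts from noise and is denoised while conditioning on a randomly chosen previously generated (or input) view, then joins the pool for future views. Everything here is an illustrative assumption, not the paper's actual code, and the denoiser is a dummy stand-in for the trained model.

    # Illustrative sketch only -- hypothetical names, dummy denoiser.
    import jax
    import jax.numpy as jnp

    def denoise(noisy_view, cond_view, cond_pose, target_pose, t):
        # Stand-in for the trained denoiser: a real model would predict
        # the noise for `noisy_view` given a clean conditioning view and
        # the two camera poses.
        return noisy_view * (1.0 - 1.0 / t)

    def sample_view(key, views, poses, target_pose, num_steps=256):
        # Start the new view from pure Gaussian noise.
        key, sub = jax.random.split(key)
        x = jax.random.normal(sub, views[0].shape)
        for t in range(num_steps, 0, -1):
            # Condition each denoising step on a randomly chosen view from
            # the pool, nudging the sample toward consistency with all of them.
            key, sub = jax.random.split(key)
            i = int(jax.random.randint(sub, (), 0, len(views)))
            x = denoise(x, views[i], poses[i], target_pose, float(t))
        return x

    key = jax.random.PRNGKey(0)
    views = [jnp.zeros((64, 64, 3))]        # the single reference image
    poses = [jnp.eye(4)]                    # its camera pose
    for target_pose in [jnp.eye(4)] * 3:    # placeholder novel camera poses
        key, sub = jax.random.split(key)
        # Autoregressive step: the new view joins the conditioning pool.
        views.append(sample_view(sub, views, poses, target_pose))
        poses.append(target_pose)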
>> In order to maximize the reproducibility of our results, we provide code in JAX (Bradbury et al., 2018) for our proposed X-UNet neural architecture from Section 2.3

Nice.

OpenAI shitting their pants even more.
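As a rough picture of the dual-stream idea behind that architecture (two views processed by shared-weight streams that exchange information via cross-attention), here's a minimal Flax sketch. The module names, shapes, and block structure are my assumptions for illustration; see the released JAX code for the real X-UNet.

    # Illustrative sketch only, not the released X-UNet code.
    import jax
    import jax.numpy as jnp
    import flax.linen as nn

    class DualStreamBlock(nn.Module):
        features: int = 64
        num_heads: int = 4

        @nn.compact
        def __call__(self, target, cond):
            # One convolution applied to both streams (shared weights).
            conv = nn.Conv(self.features, kernel_size=(3, 3))
            t = nn.gelu(conv(target))
            c = nn.gelu(conv(cond))
            # Flatten spatial dims so each pixel is an attention token.
            tt = t.reshape(t.shape[0], -1, self.features)
            cc = c.reshape(c.shape[0], -1, self.features)
            attn = nn.MultiHeadDotProductAttention(num_heads=self.num_heads)
            # Each stream attends to the other; the same attention module
            # (hence the same weights) is reused for both directions.
            t_out = attn(tt, cc).reshape(t.shape)
            c_out = attn(cc, tt).reshape(c.shape)
            return t + t_out, c + c_out

    # Shape check with dummy data.
    block = DualStreamBlock()
    x = jnp.zeros((1, 32, 32, 3))
    params = block.init(jax.random.PRNGKey(0), x, x)
    t_out, c_out = block.apply(params, x, x)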
This is one of the building blocks absolutely required for Full Self-Driving to ever work.

Btw, I like how it hallucinated a bumper-mounted spare wheel based on the size of the tires, the heavy-duty roof rack, and the bull bars, while the ground-truth render showed the much less likely stock configuration of an undercarriage frame hanger with no spare.
I'm entirely unfamiliar with this, but is there a future where we can take a few pictures of something physical and have AI generate a 3D model that we can then modify and 3D print?

Asking as someone who's dreadfully slow at 3D modeling.