I was playing with this for a few weeks over the holiday break. This is one of the GS3D sample scenes running on PCVR at about 65 FPS. I'm sorting on the CPU at the moment, so there are some hitches, but it works! I may publish this as a Unity asset. (I'd love to get it working on Vision Pro, but we'll see.)
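For anyone curious what the "sorting on the CPU" bit involves: splats have to be drawn back-to-front so alpha blending composites correctly, which means re-sorting all of them whenever the camera moves. Here's a rough sketch of that step in Python/NumPy (not the actual Unity code; all names are made up for illustration):

```python
import numpy as np

def sort_splats_back_to_front(positions, camera_pos, view_dir):
    """Return indices ordering splats far-to-near along the view axis.

    positions : (N, 3) splat centres in world space
    camera_pos: (3,) camera position in world space
    view_dir  : (3,) unit vector the camera is looking along
    """
    # Signed distance of each splat centre along the viewing direction.
    depths = (positions - camera_pos) @ view_dir
    # Farthest first, so nearer splats get blended on top.
    return np.argsort(-depths)

# Hypothetical per-frame usage -- doing this on the CPU for millions of
# splats is exactly the kind of thing that causes hitches; a GPU radix
# sort is the usual fix.
# order = sort_splats_back_to_front(splat_centres, cam.position, cam.forward)
```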
Chris' post doesn't really give much background info, so here's what's going on and why it's awesome.

Real-time 3D rendering has historically been based on rasterisation of polygons. This has brought us a long way and has a lot of advantages, but making photorealistic scenes takes a lot of work from the artists. You can scan real objects with photogrammetry and then convert them to high-poly meshes, but photogrammetry rigs are pro-level tools, and the resulting assets won't render at real-time speeds. Unreal 5 introduced Nanite, a very advanced LoD algorithm, which helps a lot, but again, we seem to be hitting the limits of what can be done with polygon-based rendering.

3D Gaussian Splatting is a new AI-based technique that lets you render photorealistic 3D scenes in real time, captured from nothing more than photos taken with ordinary cameras. It replaces polygon-based rendering with radiance fields.

https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

3DGS uses several advanced techniques:

1. A 3D point cloud is estimated using "structure from motion" techniques.

2. The points are turned into "3D Gaussians": floating blobs of light, each with a position, an opacity, a covariance matrix that defines its shape, and a view-dependent colour encoded with "spherical harmonics" (no, me neither). They're ellipsoids, so they can be thought of as spheres that are stretched and rotated.

3. Rendering is done via splatting rather than ray tracing: the 3D Gaussians are projected onto the 2D screen (into "splats"), sorted by depth so transparency works, and then rasterised on the fly using custom shaders (there's a rough sketch of this step after this comment).

The neural network isn't actually used at rendering time, so GPUs can render the scene nice and fast.

In terms of what it can do, the technique is perhaps closest to Unreal's Nanite: both are designed for static scenes. Whilst 3D Gaussians can be moved around on the fly, so the scene can be changed *in principle*, none of the existing animation tools, game engines or art packages know what to do without polygons. But this sort of thing could be used to rapidly create VR worlds from nothing more than videos taken from different angles, which seems useful.
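To make step 3 a bit more concrete, here is a minimal NumPy sketch of the splatting math: each Gaussian's 3D covariance is projected into a 2D screen-space covariance using the rotation part W of the world-to-camera transform and the Jacobian J of the perspective projection (the EWA splatting formula Σ' = J W Σ Wᵀ Jᵀ used by the 3DGS paper), and the depth-sorted splats are then alpha-composited front-to-back. The simplified pinhole camera and all variable names are my own illustration, and view-dependent spherical-harmonic colour is left out:

```python
import numpy as np

def project_gaussian(mean_world, cov_world, W, camera_pos, focal):
    """Project one 3D Gaussian into a 2D screen-space 'splat'.

    mean_world: (3,) Gaussian centre in world space
    cov_world : (3, 3) Gaussian covariance in world space
    W         : (3, 3) rotation part of the world-to-camera transform
    camera_pos: (3,) camera position in world space
    focal     : focal length in pixels (simplified pinhole camera)
    """
    # Centre in camera space.
    x, y, z = W @ (mean_world - camera_pos)
    # Jacobian of the perspective projection (u, v) = focal * (x/z, y/z).
    J = np.array([[focal / z, 0.0,       -focal * x / z**2],
                  [0.0,       focal / z, -focal * y / z**2]])
    # EWA splatting: 2D covariance = J W Sigma W^T J^T.
    cov_2d = J @ W @ cov_world @ W.T @ J.T
    mean_2d = np.array([focal * x / z, focal * y / z])
    return mean_2d, cov_2d

def composite_pixel(pixel, splats):
    """Front-to-back alpha blending of splats already sorted by depth.

    splats: list of (mean_2d, cov_2d, rgb, opacity) tuples, nearest first.
    """
    colour = np.zeros(3)
    transmittance = 1.0
    for mean_2d, cov_2d, rgb, opacity in splats:
        d = pixel - mean_2d
        # Gaussian falloff of this splat at the pixel.
        alpha = opacity * np.exp(-0.5 * d @ np.linalg.inv(cov_2d) @ d)
        colour += transmittance * alpha * np.asarray(rgb)
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-4:  # early exit once the pixel is opaque
            break
    return colour
```

The real renderer does this per 16x16 screen tile in a compute shader rather than per pixel in a loop, but the math is the same.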
Very cool work! Are there papers or repos that do fast splat generation of digitally-originated assets?

I'm wondering if there is a way to embed digitally-originated assets in the scene and render them using the same splat drawing pipeline you're using for your photographically-originated assets?
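I don't know of a canonical paper for this, but one naive way to get a digitally-originated mesh into the same pipeline would be to sample points on its surface and drop a small, roughly isotropic Gaussian at each sample. Purely illustrative sketch; the function name, parameters and defaults are made up:

```python
import numpy as np

def mesh_to_splats(vertices, faces, colours, splats_per_face=20, scale=0.01):
    """Naively convert a triangle mesh into isotropic Gaussian splats.

    vertices: (V, 3) vertex positions
    faces   : (F, 3) integer vertex indices per triangle
    colours : (V, 3) per-vertex RGB in [0, 1]
    Returns (positions, covariances, rgb, opacities) for a splat renderer.
    """
    positions, rgb = [], []
    for tri in faces:
        v0, v1, v2 = vertices[tri]
        c0, c1, c2 = colours[tri]
        for _ in range(splats_per_face):
            # Uniform barycentric sample on the triangle.
            r1, r2 = np.random.rand(2)
            if r1 + r2 > 1.0:
                r1, r2 = 1.0 - r1, 1.0 - r2
            w = np.array([1.0 - r1 - r2, r1, r2])
            positions.append(w[0] * v0 + w[1] * v1 + w[2] * v2)
            rgb.append(w[0] * c0 + w[1] * c1 + w[2] * c2)
    n = len(positions)
    # Small isotropic covariance per splat; a smarter version would flatten
    # each Gaussian along the triangle's normal instead.
    covariances = np.repeat((scale**2 * np.eye(3))[None], n, axis=0)
    opacities = np.ones(n)
    return np.array(positions), covariances, np.array(rgb), opacities
```

You'd lose sharp edges and texture detail compared with rendering the mesh normally, so I suspect the more practical answer is a hybrid renderer that composites rasterised meshes and splats using a shared depth buffer.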