When I think of turning sequences of images into Gaussians, I think of the difficulty of getting generalizable information out of the pipeline that can be re-rendered: textures and lighting, basically. So the description at the top of the paper, where they mention adding dimensions for things like albedo, got me excited.

But the demos don't do any re-rendering, change of lighting, etc., so I can't tell whether this is just a 'higher render quality at the same training time' paper (which is of course great to have), or whether it could be extended to give us scenes whose lighting and materials can be adjusted in-engine.

Any experts care to chime in?
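To make concrete what I mean by "adjusted in-engine": here's a minimal sketch (all names and fields are my own, hypothetical, not from the paper) of what storing albedo and normals per splat would buy you over baked-in RGB. If the extra dimensions really are material properties, you can re-shade without refitting:

```python
import numpy as np

# Hypothetical per-splat record: standard 3D Gaussian parameters plus
# an albedo channel instead of a baked-in final color.
splat_dtype = np.dtype([
    ("mean",    np.float32, 3),   # center of the Gaussian
    ("scale",   np.float32, 3),   # anisotropic extent
    ("rot",     np.float32, 4),   # orientation quaternion
    ("opacity", np.float32),
    ("albedo",  np.float32, 3),   # material reflectance, not radiance
    ("normal",  np.float32, 3),   # needed to shade under a new light
])

def relight(splats: np.ndarray, light_dir: np.ndarray) -> np.ndarray:
    """Recompute per-splat RGB under a new directional light.

    Simple Lambertian shading: color = albedo * max(0, n . l).
    Only possible because albedo and normals are stored separately
    rather than a single pre-lit color per splat.
    """
    l = light_dir / np.linalg.norm(light_dir)
    n_dot_l = np.clip(splats["normal"] @ l, 0.0, 1.0)
    return splats["albedo"] * n_dot_l[:, None]

# Move the light and re-shade, no retraining of the scene needed.
scene = np.zeros(2, dtype=splat_dtype)
scene["albedo"] = [[0.8, 0.2, 0.2], [0.2, 0.6, 0.9]]
scene["normal"] = [[0.0, 1.0, 0.0], [0.577, 0.577, 0.577]]
print(relight(scene, np.array([0.0, 1.0, 0.0])))
```

If the paper's demos showed something like that `relight` step, the extra dimensions would clearly be doing material decomposition; since they don't, I can't tell whether the added channels are semantically meaningful or just extra capacity for fitting appearance.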
This problem specifically (3D reconstruction with representation fitting) is really an overfitting nightmare; they adapted to it rather than overcoming it. Nonetheless, interesting work.