I think there has been some serious misinterpretation of what 'real time' means in the context of this paper; and, possibly, the researchers have avoided overt clickbait claims because they knew the term 'real time' would do the work for them.<p>This is not some neural codec that can convert any novel or unseen object live, like a kind of 3D YOLO - the paper states that it requires up to 24 hours of training on a per-case basis.<p>Nothing can be edited - no textures, no movements. All you can do is remove people or speed them up or slow them down, and that's been possible with NeRF for a few years now.