"A CYBERPUNK NINJA RIDING AN OSTRICH THROUGH THE STREETS OF TOKYO"
https://holovolo.tv/v/962583

Today Lifecast unveils text-to-full-3D immersive environments that can be viewed in VR (e.g., Quest 2) or on 2D screens. We do this with a combination of Stable Diffusion and several other neural nets to make the image 3D, combined with Lifecast's format for 6DOF VR photos and video. It's free to try, and we do the processing in the cloud. Check it out and tell us what you think! This is version 1.0 and we are iterating quickly, so expect improvements in the future.
This looks... pretty terrible. The images being generated are fine, but the conversion from 2D to 3D is awful. It looks like the subject was roughly lasso-tool'd out, placed on another layer closer to the viewer, and then the space revealed between the two layers was very poorly interpolated when you look at it from an angle.

Am I missing something? I feel like I've seen much better automatic 2D-to-3D conversions via layering long before this.
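The artifact the comment describes falls naturally out of depth-layered view synthesis: each pixel is shifted by a disparity inversely proportional to its estimated depth, so the foreground moves more than the background and "holes" open up behind it that the converter must hallucinate. Here is a minimal numpy sketch of that reprojection; the function name, toy image, and depth values are my own illustration, not Lifecast's actual pipeline.

```python
import numpy as np

def warp_view(image, depth, max_shift=4):
    """Reproject an image to a horizontally shifted viewpoint using per-pixel depth.

    Each pixel moves right by a disparity inversely proportional to its depth.
    Pixels revealed behind the foreground become holes (marked -1) that a real
    converter would have to inpaint -- the source of the smearing artifacts.
    """
    h, w = depth.shape
    out = np.full((h, w), -1, dtype=image.dtype)  # -1 marks disocclusion holes
    disparity = np.round(max_shift / depth).astype(int)
    # Paint far layers first so nearer pixels overwrite them (painter's order).
    for d in sorted(set(disparity.ravel())):  # small disparity = far away
        ys, xs = np.where(disparity == d)
        nx = xs + d
        ok = nx < w  # drop pixels shifted off the right edge
        out[ys[ok], nx[ok]] = image[ys[ok], xs[ok]]
    return out

# Toy scene: flat background (value 1, depth 4) with a near square (value 9, depth 1).
img = np.ones((4, 8), int)
img[:, 3:5] = 9
dep = np.full((4, 8), 4.0)
dep[:, 3:5] = 1.0
shifted = warp_view(img, dep)
# The foreground shifts farther than the background, exposing -1 holes with
# no source pixel -- exactly the gaps a 2D-to-3D converter must interpolate.
```

Even this two-layer toy shows why viewing off-angle looks bad: the disoccluded region has no ground-truth content, so quality hinges entirely on how well the inpainting fills it.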
I guess I'm not able to view the effect on desktop? Is it some kind of depth segmentation of the generated images rather than actual 3D? Maybe I need to view it in a VR headset?
It's weird that such an obvious marketing plug, with subpar results from projecting 2D images into 3D, got any traction here.
I was baited into clicking by the 3D/WebVR mention and was expecting actual 3D shapes like the recent advancements, but saw... well, that.
Looks great. Given all the progress around this on the open source side, I'm hoping that soon we'll be able to run something like this at home.