The tech stack in the splat world is still really young. For instance, I was thinking to myself: “Cool, MVSplat is pretty fast. Maybe I’ll use it to get some renderings of a field by my house.”<p>As far as I can tell, I’ll need to supply a bunch of photographs with camera pose data attached — okay, fair enough, the splat architecture exists to generate splats, not poses.<p>Now, what’s the best way to get camera pose data from arbitrary outdoor photos? … Cue a long wrangle through multiple papers. Maybe, as of today, FAR? (<a href="https://crockwell.github.io/far/" rel="nofollow">https://crockwell.github.io/far/</a>). It claims up to 80% pose accuracy, depending on the source data.<p>I have no idea how MVSplat will deal with 80%-accurate camera pose data… And I also don’t understand whether I should use one of their pre-trained models, train my own, or fine-tune one of theirs on my photos… This is sounding like a long project.<p>I don’t say this to complain, only to note where the edges are right now and to think about the commercialization gap. There are iPhone apps that will put (shitty) splats together for you right now, and there are higher-end commercial products like Skydio that will work with a drone to fill in a three-dimensional representation of an object (or maybe some land; not sure about the outdoor support), but those run multiple thousands of dollars per month in subscriptions, plus hardware, as far as I can tell.<p>Anyway, interesting. I expect that over the next few years we’ll have push-button stacks based on ‘good enough’ open models, and those will iterate and go through cycles of being upsold / improved / etc. We are still a ways away from a trawl through an iPhone/gphoto library and a “hey, I made some environments for you!” type of feature. But not infinitely far away.
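For what it’s worth, whichever pose estimator you land on, there’s also some convention-wrangling waiting at the end: COLMAP, the classical SfM baseline most of these pipelines assume, stores each pose as a world-to-camera quaternion (w, x, y, z) plus a translation, while splat/NeRF pipelines often want camera-to-world matrices instead. A minimal pure-Python sketch of that inversion (function names are my own, not from any of these codebases):

```python
def qvec_to_rotmat(qvec):
    """COLMAP-style quaternion (w, x, y, z) -> 3x3 rotation matrix."""
    w, x, y, z = qvec
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def pose_to_c2w(qvec, tvec):
    """Invert a COLMAP world-to-camera pose into a 4x4 camera-to-world matrix.

    COLMAP's convention is x_cam = R @ x_world + t, so the camera-to-world
    rotation is R^T and the camera center is -R^T @ t.
    """
    R = qvec_to_rotmat(qvec)
    # Transpose of R.
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    # Camera center c = -R^T @ t.
    c = [-(Rt[i][0]*tvec[0] + Rt[i][1]*tvec[1] + Rt[i][2]*tvec[2]) for i in range(3)]
    return [Rt[0] + [c[0]], Rt[1] + [c[1]], Rt[2] + [c[2]], [0.0, 0.0, 0.0, 1.0]]
```

Small, but exactly the kind of glue you end up writing by hand today because nothing in the stack agrees on conventions yet.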