It's getting tiring to see 3D model generation papers throw around "high quality" to describe their output while glossing over nearly every quality that actually matters for a 3D model in production contexts. Have they figured out how to produce usable topology yet? They don't talk about it, so probably not.

3D artists are begging for AI tools that automate specific tedious but necessary tasks like retopology and UV unwrapping, but tools like the OP do the opposite: they skip those details to produce a poorly executed "final" result and leave the user to reverse engineer the model in an attempt to salvage the mess it made.

If gen3D is going to be a thing, then the researchers need to listen to the people actually doing 3D work, not just chase benchmarks invented by other gen3D researchers. Some commentary on a similar paper and how it tries to solve the wrong problems: https://x.com/rms80/status/1801362145600254211
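To be concrete about what "automate the tedious tasks" means: classical, non-AI tooling already does a serviceable first pass at UV unwrapping, and that's the baseline these papers skip past entirely. A minimal sketch with the xatlas Python bindings (the filenames are placeholders; trimesh triangulates on load, so this operates on triangles):

    import trimesh  # pip install trimesh xatlas
    import xatlas

    # Load the generated mesh (placeholder filename).
    mesh = trimesh.load("generated_model.obj", force="mesh")

    # Classical automatic unwrap: returns a vertex remapping,
    # new triangle indices, and per-vertex UVs in [0, 1].
    vmapping, indices, uvs = xatlas.parametrize(mesh.vertices, mesh.faces)

    # Write the unwrapped mesh back out.
    xatlas.export("unwrapped.obj", mesh.vertices[vmapping], indices, uvs)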
As someone who teaches 3D, I would say a 'high quality' model needs clean topology: all quads, flowing around the form in a predictable and rational manner. From that, I would expect a clean texture map to follow. I am fairly certain that current technology is not up to this.

I have seen a few of these papers, and (in my limited experience) the 3D model itself is very rarely available for review.
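The first half of that bar (face arity, not edge flow) is trivially checkable from the file itself, which makes it stranger that papers don't report it. A minimal sketch for Wavefront OBJ, with a placeholder path:

    from collections import Counter

    def face_stats(path):
        """Tally OBJ faces by vertex count; 3 = tri, 4 = quad, 5+ = ngon."""
        counts = Counter()
        with open(path) as f:
            for line in f:
                if line.startswith("f "):
                    counts[len(line.split()) - 1] += 1
        return counts

    counts = face_stats("generated_model.obj")  # placeholder path
    total = sum(counts.values())
    if total:
        print(dict(counts), f"quad ratio: {counts[4] / total:.1%}")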
Really good. This is still just geometric analysis, though: geometric in the sense that the model likely doesn't understand what it's rendering. All it sees is some shape.

The next step is geometry with organized contours that make sense, which means the model needs to cohesively understand the picture and not just the geometry. For example, if a person in the picture is wearing armor, the model should generate two separate meshes layered on one another: the armor and the body underneath.
Great to see these getting better and better. This might actually be usable for geometry generation if it's possible to increase the resolution; it seems a simple super-resolution pass could help with this. For now, using this mesh as a reference model would help a lot in a typical 3D modeling process.

Those textures are completely useless, because they have all the light and view dependency baked in. It's not really possible to extract a diffuse texture from this. There has been some work on generating material BRDFs [0], but I've not seen great results yet.

[0] For example, https://sheldontsui.github.io/projects/Matlaber
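On the reference-model workflow: a minimal sketch for Blender 3.2+ that imports the generated OBJ and locks it down as a non-selectable wireframe reference to model over (the filename is a placeholder):

    import bpy

    # Import the generated mesh (Blender 3.2+ OBJ importer).
    bpy.ops.wm.obj_import(filepath="generated_model.obj")

    # Newly imported objects are selected; turn each into a passive
    # reference: wireframe display, drawn in front, not selectable.
    for obj in bpy.context.selected_objects:
        obj.display_type = 'WIRE'
        obj.show_in_front = True
        obj.hide_select = True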
The demo page has demo images, but the results aren't cached. While I'm probably not an interesting customer, I got bored waiting. Not something worth spending CPU cycles on.