I recently got into creating avatars for VR and have used AI to learn Unity and Blender ridiculously fast; I've only been at it a couple of weeks. All the major models can answer basically any question. I can paste in screenshots of what I'm working on along with questions, and it will tell me step by step what to do. I'll ask what particular settings mean (there are so many settings in 3D programs), and it'll explain them all and suggest defaults. You can literally give Gemini UV maps and it'll generate textures for you, or use this for 3D models. It feels like the jump before/after Stack Overflow.<p>The game Myst is all about a magical writing script that allowed people to write entire worlds in books. That's where it feels like this is all going. Unity/Blender/Photoshop/etc. are ripe for putting an LLM over the entire UI and exposing the APIs to it.
Nice model, but strange license. You are not allowed to use it in the EU, UK, or South Korea.<p>“Territory” shall mean the worldwide territory, excluding the territory of the European Union, United Kingdom and South Korea.<p>You agree not to use Tencent Hunyuan 3D 2.0 or Model Derivatives:
1. Outside the Territory;
I tried it on my Radeon 7900 GRE 16GB on Windows 11 (WSL Ubuntu 24.04) with torch 2.4.0 and ROCm 6.3.4, from here: <a href="https://repo.radeon.com/rocm/manylinux/rocm-rel-6.3.4/" rel="nofollow">https://repo.radeon.com/rocm/manylinux/rocm-rel-6.3.4/</a>.<p>I am impressed, it runs very fast. Far faster than the non-turbo version. But most of the time is spent on texture generation rather than model generation, and as far as I can tell this speeds up the model generation, not the texture generation. Impressive nonetheless.<p>I also took a head shot of my kid, ran it through <a href="https://www.adobe.com/express/feature/ai/image/remove-background" rel="nofollow">https://www.adobe.com/express/feature/ai/image/remove-backgr...</a>, cropped the image, resized it to 1024x1024, and it spit out a textured 3D model of my kid. There are still some small artifacts, but I am impressed. It works very well with the assets/example_images. Very usable.<p>Good work Hunyuan!
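The crop-and-resize step described above can be sketched with Pillow (a minimal sketch; the function name and file paths are placeholders, and background removal is assumed to have been done separately, e.g. with the Adobe tool linked above):

```python
# Hypothetical input-prep sketch: center-crop to a square, then resize to
# 1024x1024, the input size used in the comment above.
from PIL import Image

def prepare_input(src_path: str, out_path: str, size: int = 1024) -> Image.Image:
    img = Image.open(src_path).convert("RGBA")  # keep alpha from background removal
    w, h = img.size
    side = min(w, h)
    # center crop to the largest square that fits
    left, top = (w - side) // 2, (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img = img.resize((size, size), Image.LANCZOS)
    img.save(out_path)
    return img

# e.g. prepare_input("kid_no_bg.png", "kid_1024.png")
```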
How are they extracting value here? Is this just space-race-4-turbo propagandising?<p>I see plenty of GitHub sites that are barely more than advertising, where some company tries to foss-wash their crapware, or tries to build a little text-colouring library that burrows into big projects as a sleeper dependency. But this isn't that.<p>What's the long game for these companies?
What's the best img2mesh model out there right now, regardless of processing requirements?<p>Are any of them better or worse with mesh cleanliness? Thinking in terms of 3d printing....
Running my usual img2mesh tests on this.<p>1. It does a pretty good job, definitely a steady improvement.<p>2. The demos are quite generous versus my own testing, but this kind of cherry-picking isn't unusual.<p>3. The mesh is reasonably clean. There are still some areas of total mayhem (but these are easy to fix in clay modelling software).