Show HN: Generate Stable Diffusion scenes around 3D models

123 points by neilxm over 1 year ago
3D-to-photo is an open-source tool for generative AI product photography that uses 3D models to allow fine camera-angle control in generated images.

If you have 3D models created with an iOS 3D scanner, you can upload them directly to 3D-to-photo and describe the scene you want to create. For example:

"on a city sidewalk"
"near a lake, overlooking the water"

Then click "generate" to get the final images.

The tech stack behind 3D-to-photo:

Handling 3D models on the web: @threejs
Hosting the diffusion model: @replicate
3D scanning apps: Shopify, Polycam3D, or LumaLabsAI
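A minimal sketch of the hosted-inference half of this stack, calling a diffusion model on Replicate from Python; the model reference and input field names are illustrative placeholders, not taken from the 3D-to-photo code:

```python
# Hypothetical sketch: send a product render (e.g. captured from the
# three.js viewer) plus a scene prompt to a diffusion model hosted on
# Replicate. Model ref and input fields are placeholders.
import replicate

with open("render.png", "rb") as image, open("mask.png", "rb") as mask:
    output = replicate.run(
        "stability-ai/stable-diffusion-inpainting",  # placeholder model ref
        input={
            "image": image,    # product render, kept intact
            "mask": mask,      # background region to regenerate
            "prompt": "on a city sidewalk",
        },
    )

print(output)  # URL(s) of the generated image(s)
```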

11 comments

chrisnight over 1 year ago
Given that Stable Diffusion is designed to run on consumer hardware, without the need for a third-party cloud platform, it saddens me that this, like many other similar projects, requires a third-party platform for hosting the model, even for local usage. The tool itself does seem interesting, though.
chankstein38 over 1 year ago
How is this different from just photoshopping a 2D image of a 3D object onto an SD-generated background? Is it just meant to let people skip the step of generating a background and compositing? (Sorry, inpainting. But the distinction seems minimal here, since people were photoshopping 3D objects believably into scenes for decades before SD came around.)
causi over 1 year ago
God, we're so close to being able to feed a photo and some measurements into a program and get an accurate model out of it. I can't wait until my smartphone and a set of calipers can replace a $700 3D scanner.
scotty79 over 1 year ago
Neat. I was recently having fun doing something similar manually in Blender: generating a depth map and using it in Stable Diffusion with ControlNet. The results were great. My models didn't have textures, though, so SD generated them. But I imagine I could use img2img to preserve the texture if I had it.
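For anyone who wants to reproduce this workflow, a minimal sketch with Hugging Face diffusers, assuming a depth map already rendered out of Blender (the checkpoint names are the standard public ones, not necessarily what was used here):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# Depth-conditioned ControlNet paired with a base SD 1.5 checkpoint.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

depth = Image.open("blender_depth.png")  # depth map exported from Blender

# The depth map pins down the geometry; the prompt supplies surface detail.
image = pipe(
    "a ceramic vase on a wooden table, soft studio lighting",
    image=depth,
).images[0]
image.save("out.png")
```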
mlboss over 1 year ago
Why does it need a 3D model? It looks like it is just doing inpainting, which can be done with a single image.
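For comparison, single-image inpainting of the kind this comment describes is a few lines with diffusers; a minimal sketch, assuming a product photo and a mask marking the background (the checkpoint is one common public choice):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("product.png").convert("RGB")  # single 2D photo
mask = Image.open("mask.png").convert("RGB")      # white = regenerate

# Repaint only the masked background around the untouched product.
result = pipe(
    prompt="near a lake, overlooking the water",
    image=image,
    mask_image=mask,
).images[0]
result.save("composited.png")
```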
tinytera over 1 year ago
Looks pretty cool. Can anyone comment on how to hack together the opposite? That is, going from a 2D object image to a 3D rendering with an inpainted background? Or is that not possible right now?
yieldcrv over 1 year ago
I took a quick look at the Python Flask code and I'm still not sure if there's a reason for not just using Next's server-side features. JS can do every operation I skimmed.

Thoughts?
spiderxxxx over 1 year ago
How is the lighting on the model? I assume you can't do anything other than overcast days, because the lighting isn't specified.
halyconWays over 1 year ago
This sounds pretty cool. Do you have a demo, or maybe a webm to put in the README.md?
cvhashim04 over 1 year ago
Wow this is insanely cool
yanma over 1 year ago
Need Gaussian splatting integrated ASAP: https://huggingface.co/blog/gaussian-splatting