
RenderFormer: Neural rendering of triangle meshes with global illumination

275 points by klavinski 7 days ago

17 comments

timhigins 7 days ago
The coolest thing here might be the speed: for a given scene, RenderFormer takes 0.0760 seconds while Blender Cycles takes 3.97 seconds (or 12.05 seconds at a higher setting), while retaining a 0.9526 Structural Similarity Index Measure (0-1, where 1 is an identical image). See Tables 2 and 1 in the paper.

This could possibly enable higher-quality instant render previews for 3D designers in web or native apps using on-device transformer models.

Note the timings above were on an A100 with an unoptimized PyTorch version of the model. Obviously the average user's GPU is much less powerful, but for 3D designers it might still be powerful enough to see significant speedups over traditional rendering. Or a web-based system could even connect to A100s on the backend and stream the images to the browser.

The limitations are that it's not fully accurate, especially as scene complexity scales, e.g. with shadows of complex shapes (plus, I imagine, particles or strands), so final renders will probably still be done traditionally to avoid the nasty visual artifacts common in many AI-generated images/videos today. But who knows: it might be "good enough" and bring enough of a speed increase to justify use by big animation studios who need to render full movie-length previews to use for music, story review, etc.
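For readers who want to check a render against a reference themselves, a minimal sketch of computing the SSIM figure quoted above using scikit-image; the file names and the 0-1 float range are assumptions, not anything from the paper:

    # Hypothetical file names; assumes both images share the same resolution.
    import numpy as np
    from skimage.io import imread
    from skimage.metrics import structural_similarity

    neural = imread("renderformer_output.png").astype(np.float32) / 255.0
    reference = imread("cycles_reference.png").astype(np.float32) / 255.0

    # channel_axis=-1 treats the last axis as RGB; data_range matches 0-1 floats.
    score = structural_similarity(neural, reference, channel_axis=-1, data_range=1.0)
    print(f"SSIM: {score:.4f}")  # 1.0 would mean identical images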
mixedbit 7 days ago
Deep learning is also used very successfully for denoising globally illuminated renders [1]. In this approach, a traditional ray-tracing algorithm quickly computes a rough global illumination of the scene, and a neural network is then used to remove the noise from the output.

[1] https://www.openimagedenoise.org
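To make that pipeline concrete, a toy PyTorch sketch of the render-then-denoise idea. The tiny conv net stands in for a trained denoiser such as Open Image Denoise (a C/C++ library with a far larger network); the buffers and shapes are illustrative assumptions:

    import torch
    import torch.nn as nn

    class ToyDenoiser(nn.Module):
        """Stand-in for a learned denoiser; OIDN's ray-tracing filter
        likewise takes noisy color plus albedo and normal buffers."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
            )

        def forward(self, noisy, albedo, normal):
            return self.net(torch.cat([noisy, albedo, normal], dim=1))

    # Render at a low sample count (fast but noisy), then denoise.
    noisy = torch.rand(1, 3, 256, 256)   # hypothetical 4-spp path-traced frame
    albedo = torch.rand(1, 3, 256, 256)
    normal = torch.rand(1, 3, 256, 256)
    clean = ToyDenoiser()(noisy, albedo, normal)  # (1, 3, 256, 256)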
CyberDildonics 7 days ago
With every graphics paper it's important to think about what you don't see. Here there are barely any polygons, low resolution, no textures, no motion blur, no depth of field, and there are some artifacts in the animation.

It's interesting research, but to put it in perspective, this is using modern GPUs to make images that look like what was being done with 1/1,000,000th of the computation 30 years ago.
notnullorvoid 7 days ago
I found it odd that none of the examples showed anything behind the camera. I'm not sure if that's a limitation of the approach or an oversight in creating the examples. What I do know is that when we're talking about reflections and lighting, what's behind the camera is pretty important.
dclowd9901 7 days ago
Forgive my ignorance: are these scenes rendered based on how a scene is expected to be rendered? If so, why would we use this over more direct methods (since I assume this is not faster than direct methods)?
rossant 7 days ago
Wow. The loop is closed with GPUs then. Rendering to compute to rendering.
kookamamie 7 days ago
Looks OK, albeit blurry. It would have been nice to see a comparison of render time between the neural and classical renderers.
coalteddy 7 days ago
I have a friend who works on physically based renderers in the film industry and has also done research in the area. I always love hearing stories and explanations about how things get done in this industry.

What companies are hiring such talent at the moment? Have the AI companies also been hiring rendering engineers to create training environments?

If you are looking to hire an experienced research and industry rendering engineer, I am happy to connect you, since my friend is not on social media but has been putting out feelers.
K0nserv 7 days ago
Very cool research! I really like these applications of transformers to domains other than text. It seems they would work well in any domain where the input is sequential and the input tokens relate to each other. I'm looking forward to more research in this space.

HN, what do you think are interesting non-text domains where transformers would be well suited?
vessenes 7 days ago
This is a stellar and interesting idea: train a transformer to turn a scene description (a set of triangles) into a 2D array of pixels that happens to look like the pixels a global illumination renderer would output from the same scene.

That this works at all shouldn't be shocking after the last five years of research, but I still find it pretty profound. That transformer architecture sure is versatile.

Anyway: crazy fast, close to Blender's rendering output, and what looks like a 1B-parameter model? Not sure if it's fp16 or fp32, but it's a 2GB file; what's not to like? I'd like to see some more "realistic" scenes demoed, but hey, I can download this and run it on my Mac to try it whenever I like.
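For intuition about the triangles-in, pixels-out framing, a toy PyTorch sketch: triangle tokens go through a transformer encoder, and learned per-patch queries cross-attend to them to produce pixel patches. All dimensions, the 16-value triangle encoding, and the patch decoding scheme are illustrative guesses, not RenderFormer's actual architecture:

    import torch
    import torch.nn as nn

    class ToyNeuralRenderer(nn.Module):
        def __init__(self, d_model=256, n_patches=64, patch_px=16):
            super().__init__()
            # Each triangle token: 9 vertex coords + 7 material/emission values.
            self.embed = nn.Linear(16, d_model)
            enc = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc, num_layers=4)
            # One learned query per output image patch.
            self.patch_queries = nn.Parameter(torch.randn(n_patches, d_model))
            dec = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
            self.decoder = nn.TransformerDecoder(dec, num_layers=4)
            self.to_rgb = nn.Linear(d_model, patch_px * patch_px * 3)

        def forward(self, triangles):                 # (batch, n_tris, 16)
            tokens = self.encoder(self.embed(triangles))
            queries = self.patch_queries.expand(triangles.shape[0], -1, -1)
            patches = self.decoder(queries, tokens)   # cross-attend to triangles
            return self.to_rgb(patches)               # (batch, n_patches, 768)

    scene = torch.rand(1, 512, 16)  # 512 random "triangles"
    pixels = ToyNeuralRenderer()(scene)
    print(pixels.shape)  # torch.Size([1, 64, 768])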
hualaka 6 days ago
How efficient is neural rendering at this stage for game rendering?
keyle 7 days ago
Raytracing, The Matrix edition. Feels like an odd roundabout we're in.
jmpeax 7 days ago
Cross-attention before self-attention: is that better?
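For reference, PyTorch's stock nn.TransformerDecoderLayer self-attends first and then cross-attends; a toy block with the swapped ordering the question asks about might look like the sketch below (a hedged illustration, not RenderFormer's actual layer):

    import torch
    import torch.nn as nn

    class CrossFirstBlock(nn.Module):
        """Cross-attend to the memory before self-attending, the reverse
        of nn.TransformerDecoderLayer's default ordering."""
        def __init__(self, d_model=256, nhead=8):
            super().__init__()
            self.cross = nn.MultiheadAttention(d_model, nhead, batch_first=True)
            self.self_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
            self.norm1 = nn.LayerNorm(d_model)
            self.norm2 = nn.LayerNorm(d_model)

        def forward(self, x, memory):
            x = self.norm1(x + self.cross(x, memory, memory)[0])  # cross first
            x = self.norm2(x + self.self_attn(x, x, x)[0])        # then self
            return x

    x, mem = torch.rand(1, 64, 256), torch.rand(1, 512, 256)
    out = CrossFirstBlock()(x, mem)  # (1, 64, 256)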
_vicky_ 7 days ago
Hey. In the RenderFormer intro animation GIF, is the surface area of the objects the same between the 3D construction and the 2D construction?
goatmanbah 7 days ago
What can't transformers do?
nicklo 7 days ago
The bitter lesson strikes again… now for graphics rendering. NeRFs had a ray-tracing prior, and Gaussian splats had some raster prior. This just… throws it all away. No priors, no domain knowledge, just data and attention. This is the way.
feverzsj 7 days ago
Kinda pointless when classic algorithms can achieve much better results on much cheaper hardware.