Sparse Voxels Rasterization: Real-Time High-Fidelity Radiance Field Rendering

132 points · by jasondavies · 3 months ago

8 comments

loxias · 3 months ago
I look forward to reading this in closer detail, but it looks like they solve an inverse problem to recover a ground-truth set of voxels (from a large set of 2D images with known camera parameters), which is underconstrained. Neat to me that it works without using dense optical flow to recover the structure -- I wouldn't have thought that would converge.

Love this a whole heck of a lot more than NeRF, or any other "lol let's just throw a huge network at it" approach.
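A minimal sketch of the kind of inverse problem described above, assuming a PyTorch-style setup; `render_fn` is a placeholder for any differentiable voxel renderer and is not the paper's actual pipeline:

```python
import torch

def fit_voxels(images, cameras, render_fn, grid_res=128, iters=2000, lr=1e-2):
    """Recover a voxel grid from posed 2D images by photometric optimization.

    `render_fn(density, color, camera)` stands in for any differentiable voxel
    renderer (rasterizer or ray marcher); it is a hypothetical placeholder.
    """
    # Learnable per-voxel density and RGB color.
    density = torch.zeros(grid_res, grid_res, grid_res, requires_grad=True)
    color = torch.rand(grid_res, grid_res, grid_res, 3, requires_grad=True)
    opt = torch.optim.Adam([density, color], lr=lr)

    for _ in range(iters):
        i = torch.randint(len(images), (1,)).item()
        pred = render_fn(density, color, cameras[i])          # differentiable render
        loss = torch.nn.functional.mse_loss(pred, images[i])  # photometric loss
        opt.zero_grad()
        loss.backward()   # gradients flow back to the per-voxel parameters
        opt.step()
    return density.detach(), color.detach()
```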
markisus · 3 months ago
This is basically Gaussian splatting using cubes instead of Gaussians. The choices of cube centers and sizes are discrete and the cubes are non-overlapping, hence the name “sparse voxel”. The qualitative results and rendering speeds are similar to Gaussian splatting, sometimes better or worse depending on the scene.
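For intuition, a rough sketch of how the per-primitive parameters differ between the two representations (the field names are illustrative only, not the paper's actual data layout):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianPrimitive:
    mean: np.ndarray        # (3,) continuous center; splats can sit anywhere
    covariance: np.ndarray  # (3, 3) anisotropic extent; splats may overlap freely
    opacity: float
    sh_coeffs: np.ndarray   # view-dependent color via spherical harmonics

@dataclass
class SparseVoxelPrimitive:
    grid_index: tuple[int, int, int]  # integer cell coordinate at some octree level
    level: int                        # discrete size: cube edge = scene_size / 2**level
    density: float
    sh_coeffs: np.ndarray             # view-dependent color via spherical harmonics

# Because positions and sizes are snapped to a discrete grid, each (level, index)
# names exactly one cube, so the voxels tile space without overlapping.
```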
HexDecOctBin · 3 months ago
Why is this called rendering, when it would be more accurate to call it reverse-rendering (unless "rendering" means any kind of transformation of visual-adjacent data)?
bondarchuk · 3 months ago
Funny, it almost sounds like a straight efficiency improvement of Plenoxels (the direct predecessor of Gaussian splatting), which would mean Gaussian splatting was something of a red herring/sidetrack. Though I'm not sure at the moment where the great performance gain is. Definitely interesting.
aaroninsf · 3 months ago
Can someone ELI5 what the *input* to these renders is?

I'm familiar with the premise of NeRF -- "grab a bunch of relatively low-resolution images by walking in a circle around a subject / moving through a space" -- and then rendering novel viewpoints, but on the landing page here the videos are very impressive (though the volumetric fog in the classical building is entertaining as a corner case!), and I have no idea what the *input* is.

I assume if you work in this domain it's understood: "oh, these are all standard comparative outputs, sourced from <thing>, which if you must know are a series of N still images taken..." or "...excerpted images from consumer camera video while moving through the space", and N is understood to be 1, or more likely 10, or 100...

...but what I want to know is: are these video or still-image input, and how much/how many?
magicalhippo · 3 months ago
Reminded me of the Radiant Foam article posted here [1] not long ago, though the focus there was on being differentiable.

[1]: https://news.ycombinator.com/item?id=42931109
atilimcetin · 3 months ago
I think this paper is as important as the original Gaussian Splatting paper.
davikr · 3 months ago
What is the use case for radiance fields?