
4K4D: Real-Time 4D View Synthesis at 4K Resolution

405 points by georgehill over 1 year ago

29 comments

cchance over 1 year ago
Holy sh*t, can you imagine a year from now if they start using something like this for concerts or basketball games? Like imagine rewatching a basketball game but being able to move the camera on the court???? Might not be possible yet, but this shows the tech's possible. Let alone being able to scale it to realtime someday, maybe lol
bloopernova over 1 year ago
It will be very interesting to watch how tech like this affects mainstream society.

I imagine pornography will use it at some point soon. Maybe something like Chaturbate, where your interactions with the cam performer are more customized?

Could it be used with CCTV to reconstruct crime scenes or accidents?

Wedding videos might be a popular use; being able to watch from new angles could be a killer app.

Or a reworking of the first Avengers movie: view all the action from multiple viewpoints.

And all this will probably be built into the Pixel 18 Pro or something.
accrual over 1 year ago
This seems unprecedented. Imagine if you have this but you can update the scene programmatically. Ask your AI to change the location or actors. Now you have a very convincing artificial scene with anything you can imagine in it.
calibas over 1 year ago
> we precompute the physical properties on the point clouds for real-time rendering. Although large in size (30 GiB for 0013_01), these precomputed caches only reside in the main memory and are not explicitly stored on disk

Does the cache size scale linearly with the length of the video? 0013_01 is only 150 frames. And how long does the cache take to generate?
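For a rough sense of scale, here is a back-of-the-envelope sketch assuming the answer is "yes, linearly". The 30 GiB / 150 frames figure quoted above is the only real data point; everything else is hypothetical:

```python
# Hypothetical cache-size estimate, assuming (not established by the
# paper) that the precomputed cache grows linearly with frame count.
CACHE_GIB = 30.0   # reported cache size for sequence 0013_01
FRAMES = 150       # length of 0013_01 in frames

gib_per_frame = CACHE_GIB / FRAMES  # 0.2 GiB (~205 MiB) per frame

for seconds in (10, 60, 600):       # hypothetical clip lengths at 30 fps
    frames = seconds * 30
    print(f"{seconds:>4} s clip: ~{frames * gib_per_frame:,.0f} GiB of cache")
# ->   10 s clip: ~60 GiB
#      60 s clip: ~360 GiB
#     600 s clip: ~3,600 GiB
```

If the scaling really is linear, even a one-minute clip would need hundreds of GiB of main memory, which makes the question above a very practical one.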
pineconewarrior over 1 year ago
Incredible!

How many cameras does this method require? As far as I can tell from the paper, it still generates from multi-view source data. I can't say for sure, but it seems like a large number from what I can parse as a layman.
sandworm101 over 1 year ago
Very cool renderings, but ironically my browser is having a heck of a time rendering their website. The short videos keep jumping around, starting and stopping randomly... which I guess is very VR.
tomalaci over 1 year ago
Add volumetric sound, integrate VR, and you've almost recreated braindance from the Cyberpunk 2077 game. Doesn't seem that far off.

The missing component for complete braindance would be integrating physical senses. AFAIK we are pretty far from having anything revolutionary in that domain. Would love to be proven wrong, however.
pard68 over 1 year ago
This seems neat, but I don't understand the use of "4D". It's not four-dimensional; it's 3D with the ability to have an arbitrary perspective.
JansjoFromIkea over 1 year ago
Related: there was a small project that did similar stuff with the Kinect v2 ~7 years ago that was really impressive for the time: https://github.com/MarekKowalski/LiveScan3D

Now that the Kinect v2 can be found for next to nothing and is very easy to mod to use without an expensive adapter, it's a bit of a shame the project was abandoned. From what I've seen, the bigger limitations of the project can be overcome (only one Kinect per PC, mainly).
rlt over 1 year ago
And as usual, the first application of this new technology will be porn.

But seriously, this is killer technology for AR/VR.
coffeebeqn over 1 year ago
Wow, that site really killed my phone for a minute or so.
darknavi over 1 year ago
Always fun to see ImGui used in random projects. What a gift to software engineers everywhere!
r3trohack3r over 1 year ago
I'm skeptical.

The code page leads to a repository that just has a README.md saying the source code is "coming soon".

If it actually works, this is huge. I'd be using it tomorrow.

But something about that first demo GIF strikes me as off: the algorithm isn't picking up on the legs painted on the wall in the background. From the paper, I don't understand how what they've built could differentiate between a picture of someone painted on a wall and the parts of the scene that should be rendered in 3D.
melchebo over 1 year ago
I wonder what kind of rig is needed for recording that. It has to be at least a few different viewpoints.
Hard_Space over 1 year ago
I think there has been some serious misinterpretation of what "real time" means in the context of this paper, and possibly the researchers avoided overt clickbait claims because they knew the term "real time" would do the work for them.

This is not some neural codec that can convert any novel or unseen object live, like a kind of 3D YOLO; the paper mentions that it requires up to 24 hours of training on a per-case basis.

Nothing can be edited, no textures or movements. All you can do is remove people or speed them up or slow them down, and that's been possible with NeRF for a few years now.
mattsan over 1 year ago
The speed of development in this space is incredible.
avrionov over 1 year ago
Red Dwarf predicted this: https://www.youtube.com/watch?v=JMIHNiR3CP8
fullarr over 1 year ago
The effect is cool, but I must be the only person on this website who doesn't see a future for it.

It seems very niche, with massive data-size restrictions making it difficult to broadcast or stream on existing infrastructure.

But even if you solved the infrastructure problem, it feels like a gimmick that would become uninteresting pretty quickly.

Sporting events might benefit a bit from being able to find the right angle for any shot, but honestly they will probably just find the best angle and post that video as a clip.
shultays over 1 year ago
One of my favorite things in VR is Google Maps; I like "walking" around cities without leaving my house. I am longing for the day we can also do this.
MPSimmons over 1 year ago
How would you stream the output of something like this, if you wanted to, so that people could continue to change the viewpoints?

You couldn't possibly stream the full list of voxels generated by capturing the entire image with all of the cameras, right? That would probably exceed PCI bandwidth capabilities.

You'd need the server side to generate models, send those models, and then stream the vectors?
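For illustration, here is a rough bandwidth sketch of the two strategies this comment contrasts. Every constant below is an assumption made for the sketch, except the 30 GiB cache size quoted from the paper earlier in the thread:

```python
# Hypothetical bandwidth sketch: raw per-frame point clouds vs. a
# one-time model transfer plus a per-viewer camera-pose stream.
FPS = 30

# Strategy A: ship the full dynamic point cloud every frame.
BYTES_PER_POINT = 32           # assumed: position + color + attributes
POINTS_PER_FRAME = 5_000_000   # assumed dense capture
raw_gbit_s = POINTS_PER_FRAME * BYTES_PER_POINT * FPS * 8 / 1e9
print(f"raw point stream: ~{raw_gbit_s:.0f} Gbit/s")   # ~38 Gbit/s

# Strategy B: send the trained scene representation once, then stream
# only each viewer's camera pose and render client-side.
MODEL_GIB = 30                            # one-time transfer (cache size above)
pose_kbit_s = (16 * 4) * FPS * 8 / 1e3    # 4x4 float32 view matrix per frame
print(f"model once: {MODEL_GIB} GiB, then ~{pose_kbit_s:.0f} kbit/s of poses")
```

Under those assumptions, only the model-once approach fits on consumer links, which matches the intuition above: generate models server-side, send them, and stream only the per-frame vectors.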
kridsdale1 over 1 year ago
I first heard about Gaussian splatting at the start of this week, and it seems the field has advanced a decade's worth by Wednesday!
pk-protect-ai over 1 year ago
There was a paper on dynamic NeRF scenes yesterday, but the FPS and quality of this one are far superior!
spandextwins over 1 year ago
Soon I'll just put on a headset, sit in my chair with my food tube, and 'bate all the time!
sheepscreek over 1 year ago
I imagine this can be used as an insanely efficient compression scheme. Transitions in videos may not need as many frames using this.
the8472 over 1 year ago
Those samples don't look like 4K to me.
rvz over 1 year ago
Now that is what I call unreal. Literally.
cchance over 1 year ago
Can't sit down to read this now. Anyone know if this uses standard NeRFs or Gaussians?
icyriver2023 over 1 year ago
nice
sosodev over 1 year ago
Does anybody else get the impression that holograms are inevitable? This type of tech seems like the medium; now all we need is a good way of displaying them.