Ask HN: Compute feasibility of dynamic 3D experiences with the Vision Pro?

2 points by hahaxdxd123 almost 2 years ago
From what we've seen, it supports dynamic 2D content in the form of 2D windows that you can interact with and place in 3D space.

It supports pre-rendered 3D content such as memories or recordings captured through special devices; however, neither the presentation nor the press release showed examples of creating these on the fly or interacting with 3D content.

Given what we know from the presentation:

- there's a separate real-time subsystem running on the R1 chip which actually renders the environment (and which, presumably, developers will not be given access to)

- there is an M2 chip but no dedicated graphics card

- and in general it is incredibly expensive to dynamically render anything at 2x 5K resolution

apps like static overlays over environments seem feasible to me, but not necessarily the mixed-reality app where (as a trivial example) you play an FPS inside your home, or even something less demanding where you practice as a surgeon in 3D and slice open a realistic body.

Not very familiar with VR - is this a solvable problem with the current generation of hardware we have?
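For a rough sense of scale on the "2x 5K" point, here is a back-of-envelope sketch of the raw pixel throughput involved. The 5120x2880 per-eye resolution and 90 Hz refresh rate are illustrative assumptions derived from the poster's "2x 5K" framing, not confirmed Vision Pro specifications.

    # Back-of-envelope pixel throughput for "2x 5K" stereo rendering.
    # Per-eye resolution and refresh rate are assumptions for illustration,
    # not confirmed Vision Pro specs.

    PER_EYE_WIDTH = 5120    # assumed "5K" horizontal resolution
    PER_EYE_HEIGHT = 2880   # assumed "5K" vertical resolution
    EYES = 2
    REFRESH_HZ = 90         # assumed headset refresh rate

    pixels_per_frame = PER_EYE_WIDTH * PER_EYE_HEIGHT * EYES
    pixels_per_second = pixels_per_frame * REFRESH_HZ

    print(f"pixels per frame:  {pixels_per_frame / 1e6:.1f} M")   # ~29.5 M
    print(f"pixels per second: {pixels_per_second / 1e9:.2f} G")  # ~2.65 G

    # For comparison, a single 4K display at 60 Hz is ~0.50 G pixels/s,
    # so this is roughly 5x the fill-rate work before any reprojection
    # or compositing overhead is counted.

Under these assumptions the headset would need several times the fill rate of a 4K/60 desktop target; techniques like eye-tracked foveated rendering, which shade far fewer pixels outside the gaze region, are presumably how a mobile-class GPU would be expected to cope.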

No comments yet.