TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

Ask HN: Compute feasibility of dynamic 3D experiences with the Vision Pro?

2 points | by hahaxdxd123 | almost 2 years ago
From what we've seen, it supports dynamic 2D content in the form of being able to interact with 2D windows while placing them in 3D space.

It supports pre-rendered 3D content such as memories or recordings made through special devices; however, neither the presentation nor the press release has examples of creating these on the fly or interacting with 3D content.

Given what we know from the presentation:

- there's a separate real-time subsystem running on the R1 chip which actually renders the environment (which presumably developers will not be allowed access to)

- there is an M2 chip but no dedicated graphics card

- in general it is incredibly expensive to dynamically render anything at 2x 5K resolution

Apps like static overlays over environments seem feasible to me, but not necessarily the mixed-reality app where (as a paltry example) you play an FPS inside your home, or even something less demanding where you practice as a surgeon in 3D and slice open a realistic body.

Not very familiar with VR - is this a solvable problem with the current generation of hardware we have?
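A rough back-of-envelope sketch of why "2x 5K" rendering is so demanding. The figures below are assumptions for illustration (roughly 23 million pixels combined across both displays, a 90 Hz refresh rate typical of headsets), not official specs; the comparison point is an ordinary 4K/60 desktop game.

```python
# Back-of-envelope pixel-throughput comparison.
# ASSUMED figures, not official specs:
# - headset: ~23 million pixels combined across both "5K-class" panels, ~90 Hz
# - reference desktop game: 3840 x 2160 at 60 Hz

HEADSET_PIXELS = 23_000_000      # assumption: combined pixel count of both panels
HEADSET_HZ = 90                  # assumption: typical HMD refresh rate

DESKTOP_4K_PIXELS = 3840 * 2160  # standard UHD resolution
DESKTOP_4K_HZ = 60

# Pixels that must be shaded every second in each scenario
headset_throughput = HEADSET_PIXELS * HEADSET_HZ
desktop_throughput = DESKTOP_4K_PIXELS * DESKTOP_4K_HZ

print(f"Headset:    {headset_throughput / 1e9:.2f} Gpix/s")
print(f"4K desktop: {desktop_throughput / 1e9:.2f} Gpix/s")
print(f"Ratio:      {headset_throughput / desktop_throughput:.1f}x")
```

Under these assumptions the headset has to shade roughly four times as many pixels per second as a 4K/60 game, on an integrated GPU rather than a dedicated graphics card, which is the core of the feasibility concern above (techniques like foveated rendering can reduce the effective load substantially).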

no comments