From what we've seen, it supports dynamic 2D content: you can interact with 2D windows, but place them in 3D space.<p>It supports pre-rendered 3D content, such as memories or recordings captured through special devices; however, neither the presentation nor the press release shows examples of creating these on the fly or of interacting with 3D content.<p>Given what we know from the presentation:<p>- there's a separate real-time subsystem running on the R1 chip that actually renders the environment (which developers presumably won't be given access to)<p>- there is an M2 chip but no dedicated graphics card<p>- and it is, in general, incredibly expensive to dynamically render anything at 2x 5K resolution<p>apps that act as static overlays on the environment seem feasible to me, but not necessarily a full mixed-reality app where (as a simple example) you play an FPS inside your home. Or even something less demanding, like practicing surgery in 3D and slicing open a realistic body.<p>I'm not very familiar with VR - is this a solvable problem with the current generation of hardware?
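<p>To put rough numbers on the fill-rate concern, here's a back-of-envelope sketch. The 5K-per-eye panel size and 90 Hz refresh rate are assumptions for illustration, not confirmed specs:

```python
# Back-of-envelope fill-rate estimate for a stereo headset.
# ASSUMPTIONS: 5K (5120x2880) per eye, 90 Hz refresh -- illustrative
# figures, not confirmed hardware specs.
width, height = 5120, 2880
eyes = 2
refresh_hz = 90

pixels_per_frame = width * height * eyes
pixels_per_second = pixels_per_frame * refresh_hz

print(f"{pixels_per_frame / 1e6:.1f} Mpx per frame")
print(f"{pixels_per_second / 1e9:.2f} Gpx/s to shade")
```

Under those assumptions you'd need to shade on the order of 2.5+ gigapixels per second for a fully dynamic scene, which is why techniques like foveated rendering (shading at full resolution only where the eye is looking) matter so much for this class of hardware.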