Since I'm writing a new client in Rust for Second Life/Open Simulator, I'm very aware of these issues.

A metaverse client for a high-detail virtual world has most of the problems of an MMO client plus many of the problems of a web browser. First, much of what you're doing is time-sensitive. You have a stream of high-priority events in each direction that have to be dealt with quickly but don't carry much data volume. Then you have a lot of stuff that's less time-critical.

In game worlds, the event stream usually runs over UDP, so lost packets are a problem you have to handle.
Most games have "unreliable" packets which, if lost, are simply superseded by later packets. ("Where is avatar now" is a typical use.) You'd like to put that stream on a higher quality-of-service class than the rest, if only ISPs and routers actually paid attention to QoS markings.
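The usual way to handle that on the client side is a per-object sequence number: a packet that arrives carrying an older sequence than one already applied is dropped, since a newer update has superseded it. A minimal sketch in Rust; the message type and field names are hypothetical, not SL's actual wire format:

    use std::collections::HashMap;

    /// Hypothetical "where is avatar now" message; not SL's real wire format.
    struct PositionUpdate {
        object_id: u64,
        sequence: u32, // increases with every update for this object
        position: [f32; 3],
    }

    /// Remembers the newest sequence seen per object, so a packet that
    /// arrives late is discarded instead of moving the object backwards.
    #[derive(Default)]
    struct UnreliableChannel {
        latest: HashMap<u64, u32>,
    }

    impl UnreliableChannel {
        /// Returns true if the update is fresh and should be applied.
        fn accept(&mut self, up: &PositionUpdate) -> bool {
            match self.latest.get(&up.object_id) {
                // Equal or older sequence: a newer packet already won.
                Some(&seen) if up.sequence <= seen => false,
                _ => {
                    self.latest.insert(up.object_id, up.sequence);
                    true
                }
            }
        }
    }

A real implementation also has to cope with the sequence counter wrapping around.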
Then you have the less-critical stuff, which needs reliability. ("Object X enters world" is a typical use.) I'd use TCP for that, but Second Life has its own not-very-good UDP-based protocol with a fixed retransmit timer. Reliable delivery, in-order delivery, no head-of-line blocking: pick two. TCP chooses the first two; SL's protocol chooses the first and third. Out-of-order delivery after a retransmit can cause avatars to lose clothing items, because a child item arrives before its parent. (One fix, buffering orphaned children until the parent shows up, is sketched at the end of this comment.)

Then you have asset fetching. In Second Life/Open Simulator this is straight HTTP/1, but there are some unusual tricks. Textures are stored as progressive JPEG 2000, so it's possible to open a connection and read just a few hundred bytes to get a low-rez version. The client can then stop reading for a while, put the low-rez version on screen, and wait to see whether it needs to keep reading, or just close the connection because a higher-rez version isn't needed. (That trick is also sketched below.) The poor server has to tolerate a large number of stalled connections. Worse, the actual asset servers on AWS are front-ended by Akamai, which is optimized for browser-type behavior. Requesting an asset from an Akamai cache results in fetching the entire asset from AWS, even if only part of it is needed. There's a suspicion that large numbers of partial and stalled reads from clients sometimes cause Akamai's anti-DDOS detection to trip and throttle the data flow.

So those are just some of the issues "the HTTP of VR" must handle. Most are known to MMO designers. The big difference in virtual worlds is that there's far more dynamic asset loading. How well that's managed has a strong influence on how consistent the world looks, and the load queue has to be constantly re-prioritized as the viewpoint moves (last sketch below).

(Demo, from my own work: https://vimeo.com/user28693218 This shows the client frantically trying to load textures from the network before the camera gets close. Not all the tricks to make that look good are in this demo.)

It's not an overwhelmingly hard problem, but botch it and you will be laughed off Steam.
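To illustrate the orphan-buffering fix for the parent-before-child problem: hold children whose parent hasn't arrived yet, and attach them once it does. A sketch only; the update type here is hypothetical and far simpler than SL's real object updates.

    use std::collections::HashMap;

    // Hypothetical object-update type; SL's real message carries much more.
    struct ObjectUpdate {
        id: u64,
        parent: Option<u64>, // None for root objects like an avatar
    }

    #[derive(Default)]
    struct SceneGraph {
        live: HashMap<u64, ObjectUpdate>,
        // Children that arrived before their parent, keyed by missing parent.
        orphans: HashMap<u64, Vec<ObjectUpdate>>,
    }

    impl SceneGraph {
        fn insert(&mut self, update: ObjectUpdate) {
            if let Some(parent) = update.parent {
                if !self.live.contains_key(&parent) {
                    // Parent hasn't arrived yet (out-of-order delivery after
                    // a retransmit). Park the child instead of dropping it.
                    self.orphans.entry(parent).or_default().push(update);
                    return;
                }
            }
            let id = update.id;
            self.live.insert(id, update);
            // The new object may be the missing parent of parked children.
            if let Some(children) = self.orphans.remove(&id) {
                for child in children {
                    self.insert(child);
                }
            }
        }
    }

A real client also needs a timeout, so orphans whose parent never arrives don't accumulate forever.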
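And the progressive-texture trick, reduced to its essentials over raw HTTP/1.1. The host and path are hypothetical, a real client would use a proper HTTP library with TLS, and this reads the response headers and body together without parsing them:

    use std::io::{Read, Write};
    use std::net::TcpStream;

    // Sketch of the partial-read trick. Not how any real SL viewer does it.
    fn fetch_low_rez(host: &str, path: &str) -> std::io::Result<Vec<u8>> {
        let mut stream = TcpStream::connect((host, 80))?;
        write!(
            stream,
            "GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        )?;

        // Read only the first kilobyte or so. For progressive JPEG 2000
        // that's enough to decode a low-rez version of the texture.
        let mut buf = vec![0u8; 1024];
        let mut got = 0;
        while got < buf.len() {
            let n = stream.read(&mut buf[got..])?;
            if n == 0 {
                break;
            }
            got += n;
        }
        buf.truncate(got);

        // At this point the client can render the low-rez texture and stall
        // the connection. If a higher-rez version turns out to be needed,
        // keep reading; otherwise just drop `stream` to close it.
        Ok(buf)
    }

The key point is the dangling stream: rendering the low-rez bytes, stalling, and only later deciding to resume or drop the connection is exactly the behavior that browser-oriented front ends like Akamai aren't tuned for.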
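Finally, the re-prioritization loop in its crudest form: re-sort the pending fetch queue by distance to the camera whenever the viewpoint moves enough to matter. The structure is hypothetical; a real scorer would also weight projected screen size, motion prediction, and so on.

    // Sketch of viewpoint-driven re-prioritization; all names hypothetical.
    struct PendingFetch {
        asset_id: u64,
        position: [f32; 3], // where the asset sits in the world
    }

    fn reprioritize(queue: &mut Vec<PendingFetch>, camera: [f32; 3]) {
        // Nearest assets first. Rerunning this as the camera moves keeps
        // stale priorities from starving newly close objects.
        queue.sort_by(|a, b| {
            let (da, db) = (dist2(a.position, camera), dist2(b.position, camera));
            da.partial_cmp(&db).unwrap() // distances are finite, so this is safe
        });
    }

    fn dist2(a: [f32; 3], b: [f32; 3]) -> f32 {
        (0..3).map(|i| (a[i] - b[i]).powi(2)).sum()
    }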