Relevant: You can actually perceive the rough size of your fovea with this optical illusion: <a href="https://www.shadertoy.com/view/4dsXzM" rel="nofollow">https://www.shadertoy.com/view/4dsXzM</a>
This is a fun experiment, but it reminded me why I didn't like 3D movies:<p>You need to look right where <i>they</i> want you to look - so in this example you need to look right at the center point of the image. If your gaze drifts off to the side, it looks bad.<p>The same goes for 3D movies: if you don't look directly at what the director wanted you to look at, you end up getting seasick.<p>Do VR headsets have gaze tracking these days?
Tangentially related: I've recently been doing some VR work in Unreal Engine, and also reading <i>Blindsight</i> by Peter Watts, and this made me wonder: are saccades truly random? Could they be <i>made</i> predictable through some special way of crafting a headset, or through some stimulation? If so, then perhaps one day we'll be able to render only the pixels the mind can actually perceive at any given moment.
> Unlike normal programming languages, fragment shaders always execute both parts of each branch due to gpu limitations.<p>GPUs do have real branching and even loops these days; it's only <i>divergent</i> branching within a warp/wavefront that ends up executing both sides, with the inactive lanes masked off.<p>I'm not sure how efficient it is to scatter individual pixels like the mask used in the article does, since adjacent pixels will likely need to be evaluated anyway if the shader uses derivatives. Too bad the author didn't include any performance numbers.
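To make the derivative point concrete, here's a rough GLSL sketch of the kind of per-pixel mask branch the article seems to describe (the names uMask, uScene, expensiveShade are mine, not the author's). Fetching with an explicit LOD via textureLod sidesteps implicit derivatives, so the cheap pixels of a 2x2 quad aren't dragged through the expensive path just to supply ddx/ddy:<p>
    #version 330 core
    // Hypothetical sketch, not the article's actual shader.
    uniform sampler2D uMask;    // 1.0 where this frame should be shaded, 0.0 elsewhere
    uniform sampler2D uScene;   // reconstruction / previous-frame buffer
    uniform vec2 uResolution;
    out vec4 fragColor;

    vec4 expensiveShade(vec2 uv) {
        // stand-in for the real per-pixel work (ray march, lighting, etc.)
        return vec4(uv, 0.5, 1.0);
    }

    void main() {
        vec2 uv = gl_FragCoord.xy / uResolution;
        if (textureLod(uMask, uv, 0.0).r > 0.5) {
            fragColor = expensiveShade(uv);          // only masked pixels pay full cost
        } else {
            fragColor = textureLod(uScene, uv, 0.0); // everyone else reuses cheap data
        }
    }
<p>Whether that actually wins anything over just shading everything is exactly the kind of question performance numbers would settle.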
27.5MB, of which 27MB is five GIFs. Please don’t use GIFs like this. Use videos: they use less bandwidth, use less power, look better, and annoy your “I don’t <i>want</i> videos to autoplay!” users less.
It would be cool to also see a demo where the focus follows the face of the bouncing creature, to simulate where the observer would most likely be looking. Maybe add a red dot and tell people to follow it, to simulate gaze tracking.
> <i>How could this scheme improve if we had access to the internals of the 3d scene? For example, could we adjust our sampling pattern based on depth information?</i><p>If we had access to the internals, we could determine the visual salience of every object[1] and move the sampling pattern closer to the most salient one. Since that object is more likely to attract the viewer's attention, it would focus the rendering on those parts of the scene that the viewer actually cares about.<p>[1] <a href="http://doras.dcu.ie/16232/1/A_False_Colouring_Real_Time_Visual_Saliency_Algorithm_for_Reference_Resolution_in_Simulated_3-D_Environments.pdf" rel="nofollow">http://doras.dcu.ie/16232/1/A_False_Colouring_Real_Time_Visu...</a>
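As a rough sketch of what that could look like (all names here are invented; the salience estimate itself would come from the host application, e.g. via the false-colouring pass described in [1]), the host projects the most salient object to screen space each frame and the shader builds its sparse sampling mask around it:<p>
    #version 330 core
    // Hypothetical sketch: bias sample density toward the most salient object.
    uniform vec2 uSalientCenter;  // most salient object's projected position, in uv space
    uniform float uFoveaRadius;   // radius of the fully sampled region
    uniform vec2 uResolution;
    uniform float uSeed;          // per-frame seed for the sampling noise
    out vec4 fragColor;

    // probability that a pixel is included in this frame's sparse sample set
    float sampleProbability(vec2 uv) {
        float d = distance(uv, uSalientCenter);
        // full density near the salient object, smooth falloff to a sparse periphery
        return mix(1.0, 0.1, smoothstep(uFoveaRadius, uFoveaRadius * 3.0, d));
    }

    float hash(vec2 p) {          // cheap per-pixel pseudo-random value
        return fract(sin(dot(p, vec2(12.9898, 78.233)) + uSeed) * 43758.5453);
    }

    void main() {
        vec2 uv = gl_FragCoord.xy / uResolution;
        // 0/1 mask: pixels near the salient object are chosen more often
        fragColor = vec4(step(hash(gl_FragCoord.xy), sampleProbability(uv)));
    }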
Related fun: MediaPipe has an iris tracking demo which can run in browser: <a href="https://viz.mediapipe.dev/demo/iris_tracking" rel="nofollow">https://viz.mediapipe.dev/demo/iris_tracking</a> (top-right run button); blog[1].<p>Maybe "which 3/4 of the laptop screen needn't be rendered/updated fully"? Or "unclutter the desktop - only reveal the clock when it's looked at"? Or "they're looking at the corner - show the burger menu"? Though smoothed face tracking enables far higher precision pointing than ad hoc eye tracking. This[2] fast face tracking went by recently and looked interesting.<p>[1] <a href="https://ai.googleblog.com/2020/08/mediapipe-iris-real-time-iris-tracking.html" rel="nofollow">https://ai.googleblog.com/2020/08/mediapipe-iris-real-time-i...</a>
[2] <a href="https://news.ycombinator.com/item?id=24332939" rel="nofollow">https://news.ycombinator.com/item?id=24332939</a>
I wonder how it would look if the noise were randomized each frame and the samples from previous frames were carried over to cheaply fill in the missing data. I believe a similar technique is used to make real-time ray tracing feasible, since only a small number of rays can be traced each frame.
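Roughly something like this, I'd imagine (uHistory, uCurrent, uMask are made-up names, and a real implementation would reproject the history under camera motion the way real-time ray tracers do):<p>
    #version 330 core
    // Hypothetical sketch of the temporal reuse: where this frame's randomized mask
    // produced a fresh sample, blend it into an accumulation buffer; elsewhere carry
    // the history forward unchanged.
    uniform sampler2D uHistory;   // accumulated result from previous frames
    uniform sampler2D uCurrent;   // this frame's sparse, freshly shaded samples
    uniform sampler2D uMask;      // 1.0 where uCurrent holds a valid sample
    uniform vec2 uResolution;
    out vec4 fragColor;

    void main() {
        vec2 uv = gl_FragCoord.xy / uResolution;
        vec4 history = texture(uHistory, uv);
        vec4 current = texture(uCurrent, uv);
        float valid  = texture(uMask, uv).r;
        // exponential moving average where a new sample exists, plain reuse otherwise
        fragColor = mix(history, mix(history, current, 0.2), valid);
    }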
Given that your eye "wants" to follow the action, it would be cool if the foveal focus moved with the character, no? Then if you really didn't "notice" the de-rezzed background, that would be the proof of the pudding without needing eye tracking. (At least for scenes with a single focus of action.)
Super cool. The problem, of course, is that this technique presumes that the eye remains fixed in the center of the screen. It doesn’t. But combine this with an eye tracking system that can adjust the screen space fovea center in real time, and you’re laughing.
Cool!<p>It would be very interesting to see how changing the sample positions each frame, combined with some sort of temporal-AA-style accumulation, would do.
I don't get the point of it. It could be my screen size (15" laptop) or resolution (4K), but the GIFs looked terrible even when I stared exclusively at the center of the image.