I really don't understand why they couldn't just use traditional blur techniques. They say:

> These fast but inaccurate methods of creating “game blur” ran counter to Half Dome’s mission, which is to faithfully reproduce the way light falls on the human retina.

... but traditional Z-based blur is no less faithful than their overall approach of rendering the whole screen at a single shifting plane of focus depending on gaze. All of computer graphics is "more to do with cinematography than realism" anyway; realism is nice, but if you have to choose between "looks realistic" and "looks good", you go for "looks good" every time.

Also, as others have mentioned, getting sufficient resolution for really high-quality VR basically requires foveated rendering. At that point the bits you're blurring are, by definition, not what you're looking at (since everything outside the fovea is rendered at lower resolution), so a blur algorithm that needs four GPUs to run in realtime is a complete waste of resources.

Edit: Watched the video. Their 'Circle of Confusion' map is literally just 'focus_Z - pixel_Z'. I really don't see what deep learning adds here.
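To make the comparison concrete, here's roughly what the "traditional Z-based blur" I'm talking about looks like: a signed CoC map from the depth buffer plus a per-pixel gather blur. This is a minimal NumPy sketch of the generic game technique, not their code; the function names and the `scale`/`max_coc` parameters are made up for illustration, and real engines use separable or scatter passes instead of this brute-force loop.

    import numpy as np

    def coc_map(pixel_z, focus_z, scale=1.0, max_coc=8.0):
        """Signed circle-of-confusion map from a depth buffer.

        The naive 'focus_Z - pixel_Z' formulation, scaled and clamped
        the way a typical game post-process does it. `scale` and
        `max_coc` (in pixels) are illustrative, not from the paper.
        A slightly more physical variant works in diopters, i.e.
        (1/focus_z - 1/pixel_z) * aperture_scale, since defocus blur
        scales with the dioptric difference rather than raw depth.
        """
        coc = (focus_z - pixel_z) * scale
        return np.clip(coc, -max_coc, max_coc)

    def depth_blur(image, pixel_z, focus_z, scale=1.0, max_coc=8.0):
        """Minimal gather-style depth-of-field blur: average each pixel
        over a square neighborhood whose radius is its own CoC. Real
        implementations handle occlusion edges and run in a couple of
        separable passes; this is just the core idea."""
        coc = np.abs(coc_map(pixel_z, focus_z, scale, max_coc))
        h, w = pixel_z.shape
        out = np.empty_like(image)
        for y in range(h):
            for x in range(w):
                r = int(round(coc[y, x]))
                y0, y1 = max(0, y - r), min(h, y + r + 1)
                x0, x1 = max(0, x - r), min(w, x + r + 1)
                out[y, x] = image[y0:y1, x0:x1].mean(axis=(0, 1))
        return out

This runs in one cheap pass per frame on a single GPU in any shipping engine, which is the point: whatever the network learns beyond this map has to justify a four-GPU budget.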