Fooling Around with Foveated Rendering

175 points by underanalyzer · over 4 years ago

16 comments

modeless · over 4 years ago
Relevant: You can actually perceive the rough size of your fovea with this optical illusion: https://www.shadertoy.com/view/4dsXzM

mattlondon · over 4 years ago
This is a fun experiment, but it reminded me why I didn't like 3D movies:

You need to look right where *they* want you to look - so in this example you need to look right at the center point of the image. If your gaze drifts off to the side, it looks bad.

It's the same with 3D movies: if you don't look directly at what the director wanted you to look at, you end up getting seasick.

Do VR headsets have gaze tracking these days?

TeMPOraL · over 4 years ago
Tangentially related: I've recently been doing some VR work in Unreal Engine, and also reading *Blindsight* by Peter Watts, and this made me wonder: are saccades truly random? Could they be *made* predictable through some special way of crafting a headset, or through some kind of stimulation? If so, then perhaps one day we'll be able to render only the pixels the mind can actually perceive at any given moment.

account42 · over 4 years ago
> Unlike normal programming languages, fragment shaders always execute both parts of each branch due to gpu limitations.

GPUs do have real branching and even loops these days; you just can't have divergent branching within a shader group.

I'm not sure how efficient it is to splatter individual pixels like in the mask used in the article, since adjacent pixels will likely need to be evaluated anyway if the shader uses derivatives. Too bad the author didn't bother to include any performance numbers.
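
To make the divergence point concrete, here is a minimal CUDA sketch (a hypothetical kernel, not the article's WebGL shader): both sides of a branch are serialized only when lanes within the same warp disagree on the condition.

    // Stand-in for an expensive shading computation (hypothetical).
    __device__ float expensiveShade(int i) {
        float acc = 0.0f;
        for (int k = 0; k < 256; ++k) acc += sinf(i * 0.001f + k);
        return acc;
    }

    __global__ void shadeMasked(float* out, const int* mask, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        // Divergence is per 32-thread warp, not per shader: if mask[i]
        // differs between lanes of one warp, both branch bodies execute
        // serially with inactive lanes masked off. If all lanes agree,
        // only the taken side runs at all.
        if (mask[i]) {
            out[i] = expensiveShade(i);  // sampled pixel: full work
        } else {
            out[i] = 0.0f;               // skipped pixel: cheap path
        }
    }

On this model, a scattered per-pixel mask is close to the worst case: almost every warp contains both sampled and skipped pixels, so little work is actually saved.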

chrismorgan · over 4 years ago
27.5 MB, of which 27 MB is five GIFs. Please don't use GIFs like this. Use videos: they use less bandwidth, use less power, look better, and annoy your "I don't *want* videos to autoplay!" users less.

metafunctor · over 4 years ago
It would be cool to also see a demo where the focus follows the face of the bouncing creature, to simulate where the observer would most likely be looking. Maybe add a red dot and tell people to follow it, to simulate gaze tracking.

probably_wrong · over 4 years ago
> How could this scheme improve if we had access to the internals of the 3d scene? For example, could we adjust our sampling pattern based on depth information?

If we had access to the internals, we could determine the visual salience of every object[1] and move the sampling pattern closer to the most salient one. Since that object is more likely to attract the viewer's attention, this would focus the rendering on the parts of the scene the viewer actually cares about.

[1] http://doras.dcu.ie/16232/1/A_False_Colouring_Real_Time_Visual_Saliency_Algorithm_for_Reference_Resolution_in_Simulated_3-D_Environments.pdf
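
A hedged sketch of that idea (all names hypothetical; CUDA host code, using float2 from the runtime headers): score each object, then ease the sampling center toward the winner rather than snapping to it, so the fovea doesn't visibly jump between frames.

    #include <cuda_runtime.h>  // float2 vector type

    struct SceneObject {
        float2 screenPos;  // projected centroid in screen space
        float  saliency;   // e.g. from a false-colouring saliency pass
    };

    // Returns the new sampling-pattern center for this frame.
    float2 updateFoveaCenter(const SceneObject* objs, int n,
                             float2 center, float easing) {
        // Find the object most likely to attract the viewer's attention.
        int best = 0;
        for (int i = 1; i < n; ++i)
            if (objs[i].saliency > objs[best].saliency) best = i;
        // Ease toward it; easing in (0,1] trades responsiveness
        // for stability of the foveated region.
        center.x += easing * (objs[best].screenPos.x - center.x);
        center.y += easing * (objs[best].screenPos.y - center.y);
        return center;
    }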

mncharity · over 4 years ago
Related fun: MediaPipe has an iris tracking demo which can run in the browser: https://viz.mediapipe.dev/demo/iris_tracking (top-right run button); blog[1].

Maybe "which 3/4 of the laptop screen needn't be rendered/updated fully"? Or "unclutter the desktop - only reveal the clock when it's looked at"? Or "they're looking at the corner - show the burger menu"? Though smoothed face tracking enables far higher precision pointing than ad hoc eye tracking. This[2] fast face tracking went by recently and looked interesting.

[1] https://ai.googleblog.com/2020/08/mediapipe-iris-real-time-iris-tracking.html
[2] https://news.ycombinator.com/item?id=24332939

dmos62 · over 4 years ago
That's nice. Using eye tracking to optimize rendering workloads is low-hanging fruit, I think. Hope to see it adopted by video games.

gh123man · over 4 years ago
I wonder how it would look if the noise were randomized each frame and the points from the previous frame were considered in future frames, to cheaply fill in missing data. I believe a similar technique is used to optimize real-time ray tracing, due to the low number of pixels that can be traced each frame.
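
A minimal CUDA sketch of that reuse, assuming a static view and hypothetical buffer names: pixels hit by this frame's random mask blend their fresh sample into a persistent history image, while every other pixel simply carries last frame's value forward.

    #include <cuda_runtime.h>  // float3 vector type

    __global__ void temporalAccumulate(float3* history, const float3* current,
                                       const unsigned char* sampleMask,
                                       int n, float alpha) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if (sampleMask[i]) {
            // Exponential moving average: damps flicker from the per-frame
            // random pattern while still converging on the true image.
            history[i].x = (1.0f - alpha) * history[i].x + alpha * current[i].x;
            history[i].y = (1.0f - alpha) * history[i].y + alpha * current[i].y;
            history[i].z = (1.0f - alpha) * history[i].z + alpha * current[i].z;
        }
        // Unsampled pixels keep their history value, cheaply "filling in"
        // the missing data.
    }

Real-time ray tracers additionally reproject the history through motion vectors before blending; this sketch skips that step.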

thelazydogsback · over 4 years ago
Given that your eye "wants" to follow the action, it would be cool if the foveal focus moved with the character, no? Then, if you really didn't "notice" the de-rezzed background, that would be the proof of the pudding without needing eye tracking. (At least for scenes with a single focus of action.)

putzdown · over 4 years ago
Super cool. The problem, of course, is that this technique presumes that the eye remains fixed on the center of the screen. It doesn't. But combine this with an eye-tracking system that can adjust the screen-space fovea center in real time, and you're laughing.
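
With tracking, the gaze point just becomes a per-frame parameter of the sampling mask. A hedged CUDA sketch (gazeX/gazeY and both radii are hypothetical inputs, and the hash is a stand-in for the article's noise pattern):

    __global__ void foveationMask(unsigned char* mask, int width, int height,
                                  float gazeX, float gazeY,
                                  float innerRadius, float outerRadius,
                                  unsigned int seed) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        // Shading probability: 1 inside the fovea, falling off linearly
        // toward the periphery, with a floor so no region goes fully stale.
        float dx = x - gazeX, dy = y - gazeY;
        float d = sqrtf(dx * dx + dy * dy);
        float keep = 1.0f - (d - innerRadius) / (outerRadius - innerRadius);
        keep = fminf(fmaxf(keep, 0.05f), 1.0f);

        // Cheap per-pixel hash in [0,1), reseeded every frame.
        unsigned int h = x * 1973u + y * 9277u + seed * 26699u;
        h ^= h >> 13; h *= 0x5bd1e995u; h ^= h >> 15;
        float r = (h & 0xFFFFFFu) / 16777216.0f;

        mask[y * width + x] = (r < keep) ? 1 : 0;
    }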

magicalhippo · over 4 years ago
Cool!

Seems like it would be very interesting to see how changing the sample positions per frame, combined with some sort of technique à la temporal AA, would do.
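
One simple way to vary the positions, sketched in CUDA under the same hypothetical setup as above: rotate a 4x4 ordered pattern so that every pixel is refreshed exactly once per 16 frames, which is what a TAA-style accumulation pass needs in order to converge.

    // True if this pixel is scheduled for full shading on this frame.
    __device__ bool sampledThisFrame(int x, int y, unsigned int frame) {
        const int pattern[16] = {  0,  8,  2, 10,
                                  12,  4, 14,  6,
                                   3, 11,  1,  9,
                                  15,  7, 13,  5 };  // 4x4 Bayer order
        return pattern[(y & 3) * 4 + (x & 3)] == (int)(frame & 15u);
    }

Unlike a random mask, this trades noise for a regular scan pattern; jittering the pattern's offset per frame would recover some of the randomness the article's mask relies on.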

jmiskovic · over 4 years ago
Quite interesting. Are the results available somewhere? The GIFs are too compressed to evaluate this kind of content.

_xerces_ · over 4 years ago
I don't get the point of it. It could be my screen size (15" laptop) or resolution (4K), but the GIFs looked terrible even when I stared exclusively at the center of the image.

eps · over 4 years ago
Peter, please reduce the font size. It's absolutely ginormous. The page is completely unreadable without switching to Reader mode.