Increasing 3D Game Rendering by a Factor of 50

2 points by seemann, over 16 years ago
Hi Hackers,

Sounds unreal? (Sorry for the bad pun, I'm German.) Did you know that the human eye perceives only about 2° of its total field of vision (110° × 170°) sharply? Through multiple jumps of the eye per second (saccades), the brain combines the sharp spots (highest bitrate per second per mm²) into an overall sharp and steady picture.

My question is this: given a high-end 3D graphics card and an eye tracker, one could concentrate half of the rendering power on that spot, on a screen that usually spans 20° to 40° of view. Take the worst case: 0.5 × (20°/2°) × (20°/2°) = 50. There is my factor of 50.

So why aren't we using this device? The eye tracker costs about $15,000 to $40,000, and a lot of the rendering power goes to focus-independent rendering (reflections, etc.).

But: mass production could bring the price down to around $100 within a few years, and probably 10% of the rendering power would suffice for the blurry peripheral degrees, so around 40% could be kept for focus-independent rendering.

I came up with this idea while thinking about how the brain recognizes objects and how it could save "rendering power" by checking only parts of a new object, recognizing it as real, and afterwards simulating it in its own semantic 3D-room matrix engine.

Post me if you find the idea cool, have questions, or want to tell me that I am far out of date (I have never heard or read about such an application)!

Regards
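A minimal sketch of the arithmetic behind the factor of 50, assuming the poster's numbers (a 2° fovea, a 20° screen in the worst case, and half the render budget spent on the foveal patch). The function name and parameters are illustrative, not from the post:

```python
def foveated_speedup(screen_fov_deg: float = 20.0,
                     fovea_deg: float = 2.0,
                     fovea_share_of_budget: float = 0.5) -> float:
    """Rough speedup from rendering only a gaze-centred patch at full detail.

    screen_fov_deg        -- angular width of the screen (worst case: 20 degrees)
    fovea_deg             -- angular width of the sharply perceived region (~2 degrees)
    fovea_share_of_budget -- fraction of the render budget spent on the fovea
    """
    # Screen area scales with the square of the angular width, so the foveal
    # patch covers (fovea/screen)^2 of the screen.
    area_ratio = (screen_fov_deg / fovea_deg) ** 2
    return fovea_share_of_budget * area_ratio


print(foveated_speedup())  # 0.5 * (20/2)^2 = 50.0
```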

2 comments

jws, over 16 years ago
I suspect latency will be a problem. As your eye scans around, things will "come into focus", and that will be noticeable.

Another issue is how wide the normal eye jitter is. There is a normal 30–100 Hz jitter in eye aiming that will require your high-quality area to be larger than 2°, but I don't know by how much.

One last trick: the low-resolution areas will need to be the correct average brightness. For many surfaces this is easy, but if you imagine a surface with a tiny, bright reflection... you could miss it until you processed it at full resolution, making a bright spot that only exists when you look at it.
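A small hypothetical illustration of that last point (not from the comment), assuming two ways of filling the peripheral region: point-sampling every Nth pixel, versus box-filtering the full-resolution shading. Point sampling can miss a tiny highlight entirely; averaging preserves the region's mean brightness:

```python
import numpy as np

full = np.zeros((64, 64))          # a dark surface rendered at full resolution
full[31, 31] = 100.0               # one tiny, very bright specular highlight

# Option 1: shade only every 8th pixel for the periphery (cheap, undersampled).
point_sampled = full[::8, ::8]

# Option 2: average 8x8 blocks (keeps the correct average brightness).
box_filtered = full.reshape(8, 8, 8, 8).mean(axis=(1, 3))

print(point_sampled.mean())   # 0.0    -> the highlight vanishes from the periphery
print(box_filtered.mean())    # ~0.024 -> same mean brightness as the full render
```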
corysama, over 16 years ago
Like all great ideas, someone has thought of this before. This idea was researched quite a bit 10 years ago, before hardware vertex transformation took off and while view-dependent LOD was still a hot topic. I haven't heard much about it since then. Here is the only paper I could find in a minute's googling: http://www.svi.cps.utexas.edu/EI466209.pdf