
The Nvidia Turing GPU Architecture Deep Dive: Prelude to GeForce RTX

120 points by kartD over 6 years ago

9 comments

jaytaylor over 6 years ago
So the new top-end Nvidia cards will have dedicated ray tracing cores. However, real-time ray tracing is still so computationally expensive that games can only implement a hybrid form of it, whereby ray tracing is applied for a certain effect or single object, and plain old rasterization is used for everything else.

I applaud NV for stepping up and delivering something in a new direction. Just think: how long has it been since something truly new was introduced in the gfx world?

Reading through the full article, it was no small feat to dream up and build these cards. A very complex project from both product and engineering perspectives. Hardware-wise, the 2080 specs are quite insane, and these babies are thirsty, drawing ~250 watts.

That said, the move definitely seems risky, as it significantly increases the complexity game devs face in coding for this hybrid approach. What if the market doesn't warm up to this, or AMD or someone else comes up with something more novel?

I also wonder if it's just a few years premature. Not feeling compelled to give up the good old GTX 970 yet. Wake me up when full ray tracing is ready :)

---

P.S. I couldn't help but snicker when looking at the table near the end showing which games will support which new imaging modes.

Of course PUBG doesn't support ray tracing. Hardly a surprise, considering they can't patch without hours of downtime, and they frequently deliver patches containing "fixes" that break more than they fix.

(FWIW I've stopped playing PUBG and moved on to Elite: Dangerous, aka space truckers, thanks to it being recommended in an HN thread. Fun game, if you enjoy the solitude and loneliness of endless space! ;)
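To make the hybrid approach concrete, here is a minimal sketch of what such a frame loop might look like. Every type and function name below is hypothetical, standing in for engine- and API-specific calls rather than any real RTX interface:

```cpp
#include <cstdio>

// Hypothetical stand-ins for engine/API objects; names are illustrative only.
struct GBuffer { /* albedo, normals, depth, ... */ };
struct Texture { /* per-pixel reflection results */ };

GBuffer rasterize_scene() { return {}; }                      // classic raster pass
Texture trace_reflection_rays(const GBuffer&) { return {}; }  // RT pass, one effect only
void composite_and_present(const GBuffer&, const Texture&) {}

int main() {
    // Hybrid frame: rasterization still does the bulk of the work, and
    // ray tracing is confined to a single effect (here, reflections).
    for (int frame = 0; frame < 3; ++frame) {
        GBuffer gbuf = rasterize_scene();
        Texture refl = trace_reflection_rays(gbuf);  // only these rays touch the RT cores
        composite_and_present(gbuf, refl);
        std::printf("frame %d done\n", frame);
    }
}
```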
CoolGuySteve over 6 years ago
Why are RT cores so different from normal shader cores? What instruction or memory-fetch pattern does a ray trace operation have that couldn't be implemented as an added instruction set on the shader cores to navigate the volume tree?

From the article, the best I can see is the following, but can't that be solved with microcode or as an extra rendering pipeline stage?

> In comparison, traversing the BVH in shaders would require thousands of instruction slots per ray cast, all for testing against bounding box intersections in the BVH

I ask because having more, slightly larger general-purpose cores seems better for traditional rendering and raytracing than dedicating all that die space to pure single-purpose RT cores.
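For a sense of why BVH traversal eats instruction slots, here is a minimal CPU-side sketch of the loop a shader would have to run per ray: a stack-based walk with a slab-test ray/AABB intersection at every visited node. The node layout is a simplified assumption, not NVIDIA's actual format:

```cpp
#include <algorithm>
#include <vector>
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, inv_dir; };  // inv_dir = 1/direction, precomputed

struct BVHNode {
    Vec3 lo, hi;       // axis-aligned bounding box
    int  left, right;  // child indices; -1 marks a leaf
    int  primitive;    // valid only at leaves
};

// Slab test: multiplies, min/max, and compares on every axis. A shader
// burns these instructions at every single node a ray visits.
static bool hit_aabb(const Ray& r, const Vec3& lo, const Vec3& hi) {
    float t0x = (lo.x - r.origin.x) * r.inv_dir.x, t1x = (hi.x - r.origin.x) * r.inv_dir.x;
    float t0y = (lo.y - r.origin.y) * r.inv_dir.y, t1y = (hi.y - r.origin.y) * r.inv_dir.y;
    float t0z = (lo.z - r.origin.z) * r.inv_dir.z, t1z = (hi.z - r.origin.z) * r.inv_dir.z;
    float tmin = std::max({std::min(t0x, t1x), std::min(t0y, t1y), std::min(t0z, t1z)});
    float tmax = std::min({std::max(t0x, t1x), std::max(t0y, t1y), std::max(t0z, t1z)});
    return tmax >= std::max(tmin, 0.0f);
}

// Stack-based traversal: the data-dependent, divergent loop that dedicated
// RT cores replace with fixed-function hardware.
static int traverse(const std::vector<BVHNode>& nodes, const Ray& r) {
    int stack[64], sp = 0;
    stack[sp++] = 0;  // start at the root
    while (sp > 0) {
        const BVHNode& n = nodes[stack[--sp]];
        if (!hit_aabb(r, n.lo, n.hi)) continue;  // dependent fetch + box test
        if (n.left < 0) return n.primitive;      // leaf: hand off to triangle test
        stack[sp++] = n.left;
        stack[sp++] = n.right;
    }
    return -1;  // ray missed everything
}

int main() {
    std::vector<BVHNode> nodes = {{{-1, -1, -1}, {1, 1, 1}, -1, -1, 42}};  // single leaf
    Ray r{{0, 0, -5}, {10, 10, 1}};  // ray pointing roughly down +z
    std::printf("hit primitive: %d\n", traverse(nodes, r));  // prints 42
}
```

On a real scene this loop runs tens to hundreds of iterations per ray, and the branching diverges across a warp, which is exactly the kind of workload general-purpose shader cores handle worst.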
jaytaylor over 6 years ago
On mobile all I see is the comments section.

Friendlier link: https://www.anandtech.com/print/13282/nvidia-turing-architecture-deep-dive
marvin over 6 years ago
Seems to me like ray-traced rendering provides a feasible path to foveated rendering for VR, meaning much better performance for VR scenes at high resolutions. This would be a big deal for VR developers, since they don't have to do unlikely amounts of magic to implement this. If NVIDIA is able to drag everyone along, they will get the hardware for this without making any huge strategic moves on their own part.
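The reason ray tracing suits foveated rendering is that the ray budget can vary per pixel with distance from the gaze point, whereas rasterization runs its whole pipeline at uniform resolution. A minimal sketch of that idea; the falloff thresholds and budgets are illustrative assumptions, not values from any shipping renderer:

```cpp
#include <cmath>
#include <cstdio>

// Rays per pixel as a function of distance from the gaze point on a
// unit-square screen. Thresholds and budgets are made-up illustrations.
int rays_per_pixel(float gaze_x, float gaze_y, float px, float py) {
    float dx = px - gaze_x, dy = py - gaze_y;
    float dist = std::sqrt(dx * dx + dy * dy);
    if (dist < 0.10f) return 4;  // fovea: supersample
    if (dist < 0.30f) return 1;  // near periphery: one ray per pixel
    return 0;                    // far periphery: skip, reconstruct from neighbors
}

int main() {
    // Gaze fixed at the screen center; sample pixels moving outward.
    for (int i = 0; i <= 4; ++i) {
        float d = 0.15f * i;
        std::printf("dist %.2f -> %d rays/px\n", d,
                    rays_per_pixel(0.5f, 0.5f, 0.5f + d, 0.5f));
    }
}
```

Pixels that receive zero rays would be filled by interpolation or reprojection from earlier frames; the closest rasterization analogue is variable-rate shading over coarse tiles rather than a smooth per-ray budget.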
cjhanks over 6 years ago
Don't forget that NVIDIA is not only a gaming company. They are involved in a lot of computational geometry fields, localization and reconstruction, machine learning, etc.

There are a lot of use cases for ray tracing that are not games. So far NVIDIA has done a great job of changing their GPU architecture in ways that are mutually beneficial to their entire diverse customer base.
npunt over 6 years ago
The thing that strikes me with the RTX announcement is a general point about *how important identifying useful intermediate steps is to bringing about new paradigms*.

Unless a technological breakthrough is just around the corner, or you have the resources to push it forward (Space Race / Manhattan Project), it's better to spend your energy identifying useful intermediate steps that you can offer to the market to fund and bridge yourself to the new paradigm. By having funding all along the way, you can gain a significant advantage over those pursuing the new elegance directly. [1]

A few examples:

STREAMING. People used to go to video stores to rent movies. As the internet emerged we dreamed of a new, more elegant paradigm: streaming. No more driving to a store, no more physical copies, late fees, or damages, etc. But it was the clever discovery of an intermediate step (using the internet to rent DVDs by mail) that created the brand and customer base that established the market leader (Netflix). Once internet infrastructure caught up, the switch was seamless. Meanwhile, many people who pursued streaming directly failed because they skipped the intermediate step (Broadcast.com).

ELECTRIC CARS. Traditional cars have *super* complex drivetrains. As battery tech improved, we dreamed of a new, more elegant paradigm of electric vehicles that improved efficiency and eschewed most moving parts, transmissions, exhaust systems, etc. But there existed a valuable, infrastructure-free intermediate step to get there: hybrids. Ironically they were even more complex, but they employed many new technologies that helped move electric cars forward. Toyota has hugely benefited from being the discoverer of this intermediate step. Obviously we now have Tesla leading the vanguard, but in the context of global development, nobody can predict whether an Elon will show up in your generation.

AUGMENTED REALITY. Our current physical reality is awash in information: street signs, road paint, branding, menus, maps, clocks, games, warnings, nutrition labels, interfaces, etc. These are often completely irrelevant to us at a given time, and certainly not personalized to our needs. We dream of the day we can render overlays on our eyes to deliver personalized versions of these (as well as entirely new things), which would over time make our physical reality simpler, cleaner, and less wasteful. Delivering this elegant solution requires breakthroughs in display technology that are years if not decades away. Bundling SLAM tech into smartphones (looking at you, Apple) and pursuing incremental use cases is an intermediate step that can grow the market until the displays are ready, at which point those who best pursue this are likely to be the market leader.

Ray tracing is now on the same course. It's been known for decades that it is a far more elegant paradigm for reasoning about and generating images (vis-a-vis rasterization), but its compute requirements are so high that there's been a chasm people haven't been able to cross to get to ray tracing. Nvidia has now provided a bridge between these two worlds by allowing raytracing of parts of the rendering pipeline alongside rasterization. Subsequent generations will slowly swallow the remaining parts that rasterization performs today. Basically, the RTX is the graphics card equivalent of a Prius, growing into a full electric.

The addition of ray-tracing cores in the RTX line was a pleasant surprise to me, not only because it speeds the development of ray-tracing hardware, but because it showed intermediate steps existed that I didn't know about before. It showed me we weren't stuck waiting indefinitely for a promise of an elegant future that always seems a decade away. Pretty exciting.

[1] What I mean by paradigm is not just incrementalism or the evolution of one product into another (like iPod -> iPhone), but a wholly different way to solve a problem that is more elegant / higher abstraction than previous ways, but that requires breakthroughs in enabling technologies to get there. Rockets -> space elevators (material science; elegance is in ease of transport), retail -> online shopping (internet; elegance is in personalization + stay-at-home), coal -> solar (energy storage; elegance is in eco footprint, low entry point & simpler tech), driving -> autonomous driving (ML/sensors; elegance is in time savings / one less thing to learn & simplification+density of road infra). This is admittedly a fuzzy definition, and perhaps these examples are not perfect.
rl3 over 6 years ago
The Turing architecture is also used in Quadro RTX cards, and those have a ridiculous amount of VRAM.

Is there any professional/computational use for these RT cores beyond raytracing?

One case that comes to mind is perhaps raytraced acoustics, although, interesting as that is, it's technically still raytracing.

As far as gaming is concerned, I'd personally love it if the RT cores could contribute, however inefficiently, to the rendering workload in non-RTX games. It's annoying that 50% of the die is allocated to hardware that requires feature-specific implementations.
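On the acoustics idea: the expensive part, finding sound paths through scene geometry, is exactly the BVH traversal that RT cores accelerate; once a path is known, turning it into audio is cheap. A minimal sketch under that assumption, with the per-bounce absorption constant being an illustrative guess:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// One traced sound path from source to listener. The path length and bounce
// count would come from BVH traversal, the part RT cores could accelerate.
struct AcousticHit {
    float path_length_m;  // metres travelled along the path
    int   bounces;        // surface reflections along the way
};

// Convert a traced path into one echo tap: an arrival delay plus a gain.
void echo_tap(const AcousticHit& hit, float* delay_s, float* gain) {
    const float speed_of_sound = 343.0f;            // m/s in air at ~20 C
    *delay_s = hit.path_length_m / speed_of_sound;  // arrival time
    *gain = std::pow(0.7f, (float)hit.bounces)      // 0.7 absorption per bounce (assumed)
          / std::max(hit.path_length_m, 1.0f);      // 1/r spherical spreading
}

int main() {
    AcousticHit direct{10.0f, 0}, reflected{24.0f, 1};
    float d, g;
    echo_tap(direct, &d, &g);
    std::printf("direct:    %.1f ms, gain %.3f\n", d * 1000.0f, g);
    echo_tap(reflected, &d, &g);
    std::printf("reflected: %.1f ms, gain %.3f\n", d * 1000.0f, g);
}
```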
beerlord over 6 years ago
We are at the end of the console cycle, which means games' hardware requirements will plateau until the next generation is out. So from now until then, at 1080p, a 1070 is enough to handle *everything* at 60fps. That probably translates to a 2060 this generation, which is the card I'm really interested in.

Other than that, Nvidia cards are severely restricted by their lack of support for Adaptive Sync or HDMI 2.1 VRR.
TwoQ over 6 years ago
Will be interesting to see if this goes the way of PhysX or not.