TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.
Former Nvidia Dev's Thoughts on Vulkan/Mantle

442 points by phoboslab · about 10 years ago

18 comments

pavlov · about 10 years ago

Very interesting post.

> So ... the subtext that a lot of people aren't calling out explicitly is that this round of new APIs has been done in cooperation with the big engines. The Mantle spec is effectively written by Johan Andersson at DICE, and the Khronos Vulkan spec basically pulls Aras P at Unity, Niklas S at Epic, and a couple guys at Valve into the fold.

This raises the question: what about DirectX 12 and Apple's Metal? If they didn't have similar engine-developer involvement, that's clearly a point against DX12/Metal support in the long run.

Also, the end is worth quoting in its entirety to explain why the APIs are taking such a radical turn from the traditional rendering model:

> Personally, my take is that MS and ARB always had the wrong idea. Their idea was to produce a nice, pretty-looking front end and deal with all the awful stuff quietly in the background. Yeah, it's easy to code against, but it was always a bitch and a half to debug or tune. Nobody ever took that side of the equation into account. What has finally been made clear is that it's okay to have difficult-to-code APIs, if the end result just works. And that's been my experience so far in retooling: it's a pain in the ass, requires widespread revisions to engine code, forces you to revisit a lot of assumptions, and generally requires a lot of infrastructure before anything works. But once it's up and running, there are no surprises. It works smoothly, you're always on the fast path, anything that IS slow is in your OWN code, which can be analyzed by common tools. It's worth it.

I wonder if/when web developers will reach a similar point of software-layer implosion. "Easy to code against, but a bitch and a half to debug or tune" is an adequate description of pretty much everything in web development today, yet everyone keeps trying to patch it over with even more "easy" layers on top.
(This applies to both server- and client-side technologies, IMO.)
near · about 10 years ago

The idea that Nvidia and AMD are detecting your games, and then replacing shaders, optimizing around bugs, etc., should be absolutely terrifying. And vice versa, the idea of having to fix every major game's incredibly broken code is equally terrifying. And it's a huge hit to all of us indie devs, who don't get the five-star treatment to maximize performance of our games. So overall, I'd say this is a step in the right direction.

However! Having written OpenGL 3 code before, the idea that Vulkan is going to be a lot *more* complex frankly scares the hell out of me. I'm by no means someone who programs massive 3D engines for major companies. I just wanted to be able to take a bitmap, optionally apply a user-defined shader to the image, stretch it to the size of the screen, and display it. (And even without the shaders, even in 2015, filling a 1600p monitor with software scaling is a *very* painful operation. Filling a 4K monitor in software is likely not even possible at 60fps, even without any game logic added in.)

That took me several weeks to develop just the core OpenGL code. Another few days for each platform interface (WGL/Windows, CGL/OSX, GLX/Xorg). Another few days for each video card's odd quirks. In total, my simple task ended up taking me 44KB of code to write. You may think that's nothing, but tiny code is kind of my forte. My ZIP decompressor is 8KB, my PNG decompressor is another 8KB (it shares the inflate algorithm), and my HTTP/1.1 web server+client+proxy with a bunch of added features (run as a service, pass messages from the command line via shared memory, APIs to manipulate requests, etc.) is 24KB of code.

Now you may say, "use a library!", but there really isn't a library that tries to do just 2D with some filtering and scaling. SDL (1.2 at least) just covers the GL context setup and window creation: you issue your own GL commands to it. And anything more powerful ends up being an entire 3D engine like Unity, which is like using a jackhammer to nail in drywall.

And, uh ... that's kind of the point of what I'm doing. I'm someone trying to *make* said library. But I don't think I'll be able to handle the complexity of all these new APIs. And I'm also not a big player, so few people will use my library anyway.

So the point of this wall of text: I really, really hope they'll consider the use case of people who just want to do simple 2D operations, and have something official like a "Vulkan2D" that we can build off of.

Also, I haven't seen Vulkan yet, but I really hope the vsync situation is better than OpenGL's "set an attribute, call an extension function, and cross your fingers that it works." It would be really nice to be able to poll the current rendering status of the video card, and drive all the fun new adaptive-sync displays, in a portable manner.
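A rough back-of-the-envelope check of the software-scaling claim above (the arithmetic is mine, not the commenter's): just writing every output pixel once per frame, before any filtering work, already approaches memory-bandwidth territory.

```python
# Back-of-the-envelope: bytes per second needed just to WRITE every
# pixel of a framebuffer at 60 fps, before any scaling/filtering math.

def fill_rate_mb_per_s(width, height, bytes_per_pixel=4, fps=60):
    """Raw write bandwidth (MB/s) to touch every output pixel once per frame."""
    return width * height * bytes_per_pixel * fps / 1e6

p1600 = fill_rate_mb_per_s(2560, 1600)  # "1600p" (2560x1600)
uhd   = fill_rate_mb_per_s(3840, 2160)  # 4K UHD

print(f"2560x1600 @ 60fps: {p1600:.0f} MB/s")  # ~983 MB/s
print(f"3840x2160 @ 60fps: {uhd:.0f} MB/s")    # ~1991 MB/s
```

And a software scaler also reads source pixels and does per-pixel filtering arithmetic on top of those raw writes, which is why offloading the stretch to the GPU matters even for "just 2D."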
lambda · about 10 years ago

> These are the vanishingly few people who have actually seen the source to a game, the driver it's running on, and the Windows kernel it's running on, and the full specs for the hardware. Nobody else has that kind of access or engineering ability.

One option is to release the source of the drivers, making it possible for motivated engine developers to do this without explicit access to AMD developers.

If they did this on Linux as well, then developers would have access to the full stack, could learn how the lower levels work, and could track problems down more easily, without a whole lot of trial and error against a black box.
korethr · about 10 years ago

To me, this neatly explains why just about every performance comparison of video drivers on Linux shows the proprietary drivers having an edge, even if only a slight one.

I've never actually dived into the source code for the open-source video drivers, but I'm now curious how much time their devs have to spend anticipating and routing around the brain damage of the programs calling them. Do they similarly try to find a way to do the right thing, despite the app crashing if it finds out it didn't get the wrong thing it asked for? Or is the attitude more akin to "keep your brain damage out of our drivers and go fix your own damned bugs"?
gavanwoolery · about 10 years ago

Performance gains are worth any trouble, IMO. Here is why: while at GDC I saw a DirectX 12 API demo (DX12 is more or less equivalent to Vulkan from an end-goal perspective). On a single GTX 980:

DX11 was doing ~1.5 million draw calls per second. DX12 was doing ~15 million draw calls per second.

This API demo will ship to customers, so I am pretty sure we can easily verify whether these are bunk figures. But a potential 10x speedup, even if under ideal conditions, is notable.
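To put those throughput figures in per-frame terms (the conversion is my own arithmetic, using the rough numbers quoted above):

```python
# Convert draw calls per second into a per-frame budget at 60 fps,
# using the approximate figures quoted for the GTX 980 demo.

def calls_per_frame(calls_per_second, fps=60):
    """How many draw calls fit in one frame at the given call rate."""
    return calls_per_second // fps

dx11 = calls_per_frame(1_500_000)   # ~1.5M calls/s claimed for DX11
dx12 = calls_per_frame(15_000_000)  # ~15M calls/s claimed for DX12

print(f"DX11: {dx11:,} draw calls per 60fps frame")   # 25,000
print(f"DX12: {dx12:,} draw calls per 60fps frame")   # 250,000
```

At ~25,000 calls per frame, aggressive batching is mandatory; at ~250,000, many scenes could afford something closer to one draw call per object.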
pothibo · about 10 years ago

> What has finally been made clear is that it's okay to have difficult-to-code APIs, if the end result just works.

So true. And yet we have all those crazy JavaScript frameworks trying to abstract everything away from developers. There's a lesson in there.
MAGZine · about 10 years ago

I've been discussing this a fair amount with a colleague, as I'm curious to see further uptake of Linux as a gaming platform. I think the biggest barrier is that when a game is written for D3D, it simply cannot talk to any graphics API outside of Windows.

More companies are starting to support OpenGL, but I'm curious why uptake is so slow. It seems like poor API design may be part of it. I'd like to see more games written with OpenGL support, and I think it's happening, slowly but surely. We even see weird hacks adding Linux support at this point... Valve open-sourced a D3D-to-OGL translation layer [0], though it hasn't been maintained since they dumped it from their source.

[0] https://github.com/ValveSoftware/ToGL
vehementi · about 10 years ago

"Former Nvidia dev": s/he did an internship there. Maybe too weighty an HN title.
alricb · about 10 years ago

For reference, here's a tutorial on drawing a triangle in Apple's Metal, which follows some of the same design principles as Vulkan, Mantle, and DX12: http://www.raywenderlich.com/77488/ios-8-metal-tutorial-swift-getting-started

So, essentially: no validation, you have to manage your own buffers (with some help in DX12, I think), and you can shoot yourself in the foot all day long. But if you manage to avoid that, you can reduce overhead and use multithreading.
jwildeboer · about 10 years ago

A former intern who admittedly sucked at his job gets promoted to "NVIDIA Dev" by HN moderators because clickbait. Srsly?
jokoon · about 10 years ago

What a horror show.

I was always interested in game programming, but I was never able to get interested enough in graphics programming. I guess having a messy API is not an excuse, but you can really sense that CPUs and GPUs have very different compatibility stories, and that's maybe why the field isn't attracting enough programmers.

I still hope that one day there might be some unified compute architecture, and CPUs will become obsolete. Maybe a system can be made usable while running on many smaller cores? Computers are used almost exclusively for graphical applications nowadays; I wonder if having a fast single core with fat caches really matters anymore.
higherpurpose · about 10 years ago

From the looks of it, it seems Khronos's API may actually be significantly better and easier to use than DirectX?

I haven't heard of DX12 being overhauled for efficient multi-threading or great multi-GPU support. DX12 probably brings many of the same improvements Mantle brought, but Vulkan seems to go quite a bit beyond that. I also assume DX12 will be stuck with some less-than-pleasant DX10/DX11 legacy code as well.
zurn · about 10 years ago

It's telling about the whole graphics programming scene that this has been such a well-kept secret from the public. The games you see are not really running on open APIs; they are built on back-alley arrangements between insiders, camouflaged as open-API apps. I bet this post is very demoralizing, e.g. to hopeful indie game devs, or to people holding out hope for the driver situation improving on Linux.
vcarl · about 10 years ago

This is interesting; it's good to hear the opinion of somebody who's actually programmed against the APIs. I think the increase in exposed complexity is probably a good thing. The AAA studios have proved that they're able and willing to throw engineering resources at tough problems, so allowing them to interact directly with a lower level of code is probably a good thing.

I'm sure there's a counterargument that it raises the bar for indie game devs, but when's the last time an indie game was programmed directly against DirectX or OpenGL (barring WebGL)? This should let engine developers make better use of their development time.
faragon · about 10 years ago

That's evolution. The new graphics APIs are intended for engine developers, not for application programming. Applications should use higher-level engines, not low-level APIs.
ggchappell · about 10 years ago

> Part of the goal is simply to stop hiding what's actually going on in the software from game programmers. Debugging drivers has never been possible for us, which meant a lot of poking and prodding and experimenting to figure out exactly what it is that is making the render pipeline of a game slow.

This is really sad. Imagine if someone were pushing a new file API with the justification that storage-device drivers were full of bugs.

I get a similar feeling whenever I read one of those "CSS trick that works on all browsers!" articles. Yes, it's nice that you can build a thing of beauty on top of broken abstractions, but ....
backinside · about 10 years ago

How come the open-source drivers did not show much better performance?
anon4 · about 10 years ago

Reminds me a bit of what I recall as a Unix principle: the API should be designed so that the implementation is simple, even if that makes the use complex.