I think the author has good intentions (don't we all), but I don't think he understands enough about graphics programming to make some of the proclamations/requests in the article. Not that I blame him...it's hard to get an understanding of GPU programming outside the scope of a game dev or IHV.<p>> To define an object’s appearance in a 3D scene, real-time graphics applications use shaders...
Eh, the shaders are just one part of the GPU pipeline that transforms your vertices, textures, and state into something interesting on the screen.<p>This is already oversimplifying what GPUs are trying to do for the base case of graphics.<p>> the interface between the CPU and GPU code is needlessly dynamic, so you can’t reason statically about the whole, heterogeneous program.<p>Ok, so what is the proposed solution here? You have a variety of IHVs (NV, AMD, Intel, ImgTec, ARM, Samsung, Qualcomm, etc.). Each vendor has a set of active architectures, each with its own ISA. And even then, there are sub-archs that likely require different accommodations in ISA generation depending on the sub-rev.<p>So even restricted to the author's view of just the shader code, you already have the large problem of unifying the varieties of ISA under some...homogeneous ISA, like an x86. That's a non-trivial problem. What's the motivation here? How will you get vendors to comply?<p>I think right now, SPIR-V, OpenCL, and CUDA aren't doing a _bad_ job of trying to create a common programming model where you can target multiple hardware revs with some intermediate representation, but until all the vendors team up and agree on an ISA, I don't see how to fix this.<p><i>On top of that</i>, shaders aren't even the only important bit of programming that happens on GPUs. GPUs primarily operate on command buffers, of which there is nary a mention in the article. So even if we address the shader cores inside the GPU, what about a common model for programming command buffers directly? Good luck getting vendors to unify on that. Vulkan/DX12/Metal are good (even great) efforts in exposing the command buffer model.
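To make "exposing the command buffer model" concrete: in Vulkan you record commands into a buffer yourself and submit it to a queue explicitly, rather than the driver building one behind your back. A minimal sketch (not a complete program — it assumes `device`, `commandPool`, `pipeline`, `queue`, and `vertexCount` have already been created/chosen, and skips error checking and synchronization):

```c
#include <vulkan/vulkan.h>

/* Allocate one primary command buffer from an existing pool. */
VkCommandBufferAllocateInfo allocInfo = {
    .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
    .commandPool = commandPool,
    .level = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
    .commandBufferCount = 1,
};
VkCommandBuffer cmd;
vkAllocateCommandBuffers(device, &allocInfo, &cmd);

/* Record work into the buffer: bind a pipeline, issue a draw. */
VkCommandBufferBeginInfo beginInfo = {
    .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
};
vkBeginCommandBuffer(cmd, &beginInfo);
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
vkCmdDraw(cmd, vertexCount, 1, 0, 0);
vkEndCommandBuffer(cmd);

/* Submission to the GPU queue is a separate, explicit step. */
VkSubmitInfo submit = {
    .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
    .commandBufferCount = 1,
    .pCommandBuffers = &cmd,
};
vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
```

The point is that recording and submission are distinct, visible operations you control, which is exactly the part of GPU programming the article never touches.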
You couldn't even _see_ this stuff in OpenGL and pre-DX12 (though there were display lists and deferred contexts, which kinda exposed the command buffer programming model).<p>> To use those parameters, the host program’s first step is to look up location handles for each variable...<p>Ok, I don't blame the author for complaining about this model, but this is an introductory complaint. You can bind shader inputs to 'registers', which map to API slots. So with some planning, you don't need to query location handles if you specify them in the shader in advance. I think this functionality existed as far back as Shader Model 1.0, though I can't find any old example code for it (2001?).<p>That being said, I certainly don't blame the author for not knowing this, as I think this is a common mistake made by introductory graphics programmers, because the educational resources are poor. I don't think I ever learned it in school...only in industry was this exposed to me, to my great joy. Though I am certain many smarter engineers figured it out unprompted.<p>> OpenGL’s programming model espouses the simplistic view that heterogeneous software should comprise multiple, loosely coupled, independent programs.<p>Eh, I don't think I want a common programming model across CPUs and GPUs. They are fundamentally different machines, and I don't think it makes sense to try to lump them together. We don't assume we can use the same programming methodologies for single- vs. multi-threaded programs, either. I know that plenty have tried, but I'd advocate that the most effective way to make GPU programming more accessible is education and tools, and I have hope that the architecture of the modern 'explicit' APIs will facilitate that movement.
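To illustrate the explicit-binding point from earlier: in modern OpenGL (GLSL 4.30+, or the ARB_explicit_uniform_location extension) you can assign the location in the shader source itself, so the host never calls glGetUniformLocation. A sketch, assuming `program` is an already-linked program object built from this source (the name `tint` and location 7 are just made up for illustration):

```c
/* Shader side: the location is declared statically in the source,
   instead of being assigned by the linker and queried at runtime. */
const char *fragSrc =
    "#version 430\n"
    "layout(location = 7) uniform vec4 tint;\n"
    "out vec4 color;\n"
    "void main() { color = tint; }\n";

/* Host side: no glGetUniformLocation lookup -- location 7 is the
   agreed-upon contract between the two halves of the program. */
glUseProgram(program);
glUniform4f(7, 1.0f, 0.5f, 0.5f, 1.0f);
```

D3D has had the equivalent via `register()` annotations in HLSL for far longer, which is the 'registers map to API slots' convention mentioned above.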