I'm not an OpenGL expert or anything, but I get the impression the author doesn't really know what he's talking about and sounds a bit amateurish (I'm a bit hesitant to say that given he's a professor at Cornell..)<p>He seems to just hand-wave and say "well just use C you idiots". He criticizes Metal for using a version of C++14 that doesn't allow recursion, but offers no alternative solution.<p>The reason GLSL isn't C is that you can't do everything C does on a low-end cellphone GPU - so obviously you have to restrict the language. The "cognitive load" of knowing the restrictions is overblown but also unavoidable.<p>SPIR-V isn't even mentioned. DSLs are dismissed.<p>I didn't really follow what he didn't like about the predefined input/output variables in the different shader stages. It's a bit ugly.. but that's also pretty much a non-issue.
This author seems pretty clueless about how things actually work, and a lot of the assumptions are just plain wrong. For example, ubershaders are preferred over many small shaders because the overhead of switching between shaders and recompiling them is expensive. They are not something people build because it is convenient (quite the opposite!) and then specialize with fixed parameters.
As someone who's a fan of PL research (and spends a fair amount of time in the GPU space), I'm not sure I buy many of the arguments.<p>The reason that GPU drivers/APIs have few safety checks is that in graphics code, performance is valued above all else. Even simple checks can introduce overhead that's undesirable when you're making thousands of the same type of call.<p>His example of baked shaders doesn't really seem to hold much value, since interactive shader builders (ShaderToy, UE3/4, etc.) are all content-driven anyway, so the extra code generation isn't a limiting constraint.<p>Nice effort, but I don't see it solving actual pain points in production.
John Carmack weighed in on Twitter:<p>> ...some interesting thoughts, but the shading language is the least broken part of OpenGL.<p>> Lots of people consider automating the computation rate determination between fragment and vertex shaders, but it is a terrible idea.<p><a href="https://twitter.com/ID_AA_Carmack/status/851258064909070336" rel="nofollow">https://twitter.com/ID_AA_Carmack/status/851258064909070336</a>
A concrete problem that the author misses is the need for a better understanding of SPMD semantics. GLSL has the notion of "dynamically uniform" values, i.e., values that are the same across all shader threads arising from one draw call, but this notion isn't really properly defined anywhere. It involves an unholy mixture of data flow and control flow that doesn't seem to appear anywhere else in PL theory.<p>Stuff kind of just works because GLSL doesn't have unstructured control flow (i.e., there's no goto), and people have a mental model of what the hardware actually does and use that as the semantics.<p>But a proper study of those semantics, and how to carry them over to unstructured control flow, or to what extent that is possible, would be awesome.
> Potential solutions. Shader languages’ needs are not distinct enough from ordinary imperative
programming languages to warrant ground-up domain-specific designs. They should
instead be implemented as extensions to general-purpose programming languages.
There is a rich literature on language extensibility [27, 36, 39] that could let implementations
add shader-specific functionality, such as vector operations, to ordinary languages.<p>I like this part.
As someone who recently started learning to program GPUs, I enjoyed this read. I find the concept of a linear-algebra-aware type system particularly compelling. I love the idea of the type system statically checking that I'm performing operations in and between the correct vector spaces. Is the fact that Vulkan uses SPIR-V sufficient to support the creation of languages that implement this?
I don't think I fully understand the point about metaprogramming facilities.
Sure, it would be nice to have compile-time ifs that get eliminated from the generated code when the condition isn't met. But I don't think this necessarily solves the problem of the "combinatorial explosion" of different shader variants - you still have to generate a separate chunk of code for each possible combination of compile-time conditions. Unless a corresponding change is proposed at the level of the shader bytecode (SPIR-V), which probably opens a can of worms.
While I agree with the points about type safety for uniforms/attributes, I've found that this class of bug doesn't happen to me all that often in practice. The bug that happens far more often is a calculation in the wrong coordinate system (or mixing two values from different coordinate systems in the same calculation), which the author also points out.