The OpenBSD situation is even worse: over there the driver is bigger than the rest of the kernel.<p>Don't get me wrong, I use the driver every day, AMD is definitely one of the good guys for making an open-source driver, and those who ported it are absolute heroes. However... sometimes I wish AMD had tied down the ISA of their cards a little better. Narrowed the interface, if you will. Because as it is, the driver is this big mostly due to a combinatorial explosion of generated header files.<p><a href="https://flak.tedunangst.com/post/watc" rel="nofollow noreferrer">https://flak.tedunangst.com/post/watc</a>
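For flavor, those generated headers are mostly endless runs of register-offset and bit-field defines, repeated for every IP block and every silicon revision. The names and values below are invented for illustration (not copied from the real amdgpu headers), but the shape is roughly this:<p><pre><code> /* hypothetical gc_11_0_0_offset.h */
 #define regGC_EXAMPLE_CNTL           0x1234
 #define regGC_EXAMPLE_CNTL_BASE_IDX  1

 /* hypothetical gc_11_0_0_sh_mask.h */
 #define GC_EXAMPLE_CNTL__ENABLE__SHIFT  0x0
 #define GC_EXAMPLE_CNTL__ENABLE_MASK    0x00000001L
 #define GC_EXAMPLE_CNTL__MODE__SHIFT    0x1
 #define GC_EXAMPLE_CNTL__MODE_MASK      0x0000000eL
</code></pre>
Multiply that by every register in every block, then again by every hardware generation, and the line count explodes.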
I'm not sure I get why the comparison to the kernel is needed. GPUs are wildly complex. Rendering is wildly complex. Managing memory and data is complex. Managing connected hardware is complex. I'm not sure why anyone would expect a GPU driver to be small while also doing a billion things and running games as well as mature gaming platforms do.
As a comparison:<p><pre><code> FreeBSD: ~9M loc
NetBSD: ~7M loc
OpenBSD: ~3M loc
</code></pre>
And this <i>includes</i> the base userland (not just the kernel).<p><a href="https://www.csoonline.com/article/564373/is-the-bsd-os-dying-some-security-researchers-think-so.html/amp/" rel="nofollow noreferrer">https://www.csoonline.com/article/564373/is-the-bsd-os-dying...</a>
"Of course, much of that is auto-generated header files... A large portion of it with AMD continuing to introduce new auto-generated header files with each new generation/version of a given block. These verbose header files has been AMD's alternative to creating exhaustive public documentation on their GPUs that they were once known for."<p>So what's the point of saying that it's large?
Autogenerated code aside, I find that the vast majority of programmers are simply incapable of writing concise and straightforward code. They instead appear to love complexity, creating tons more abstractions and indirections than necessary. Not too long ago I wanted to figure out how to use the basic 2D acceleration (blitter) feature on Nvidia's GPUs, and looked into the Nouveau driver. Despite the fact that I already had a general idea of the command submission process and queues etc., following the codepath from the top-level "copy this rectangle from here to there" function down to the hardware registers felt ridiculously long-winded, although the ultimate actions were very very simple: write the command and its parameters to a circular queue, and tell the GPU to execute it by updating the queue registers.
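To make it concrete, the whole round trip boils down to something of this shape. This is only a sketch of the idea, not Nouveau's actual code: the opcode, register names and doorbell mechanism are all invented:<p><pre><code> /* Sketch only: hypothetical opcode and register layout, not Nouveau's. */
 #include <stdint.h>

 #define RING_SIZE 1024  /* entries in the circular command queue */

 struct gpu_ring {
     volatile uint32_t *buf;       /* command queue in GPU-visible memory */
     volatile uint32_t *wptr_reg;  /* MMIO write-pointer / doorbell register */
     uint32_t wptr;                /* cached write position */
 };

 static void ring_emit(struct gpu_ring *r, uint32_t word)
 {
     r->buf[r->wptr++ % RING_SIZE] = word;
 }

 /* "copy this rectangle from here to there", boiled down */
 static void blit_rect(struct gpu_ring *r, uint64_t src, uint64_t dst,
                       uint32_t pitch, uint32_t w, uint32_t h)
 {
     ring_emit(r, 0x00c0de01);            /* hypothetical 2D-copy opcode */
     ring_emit(r, (uint32_t)src);         /* source address, low/high */
     ring_emit(r, (uint32_t)(src >> 32));
     ring_emit(r, (uint32_t)dst);         /* destination address, low/high */
     ring_emit(r, (uint32_t)(dst >> 32));
     ring_emit(r, pitch);
     ring_emit(r, (h << 16) | w);         /* packed height/width */

     /* tell the GPU there is new work: bump the queue's write pointer */
     *r->wptr_reg = r->wptr % RING_SIZE;
 }
</code></pre>
In the actual driver, those few writes are buried under layers of object classes, channels and fences, which is where all the line count comes from.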
I am working on the kernel right now, and the code is very pleasant to work with (as far as C code goes).<p>Whereas I worked on Chrome's V8 C++ code for a year and I still couldn't say I understand more than half of it. Its complexity is a factor greater than the Linux kernel's.
It's not completely clear from the article, but: are the files generated 'on-the-fly' during the build process (and therefore not in git), or generated once (by AMD), and then committed?
Corporations don't incentivize good engineering; they incentivize functionality at any cost. This leads to giant codebases, over-engineering, bad engineering, fragility, unmaintainable and useless code, and duplication. The FOSS/FLOSS community must push back against the hot-mess turds corporations want to dump into their source.
So while the AMD driver is open source, the community is basically excluded from contributing?<p>Should someone decide to start working through the code, removing duplicate code and cleaning up headers, functions and abstractions, their work would either be rejected, or undone with the next AMD code dump?
... and it doesn't work right. When you start googling your syslog entries, you find countless reports spanning many kernel versions of identical-looking crashes, likely with different root causes, since all the message basically says is "the GPU hung".
Wouldn't it be possible to move most of this code out of the kernel? I'm not sure what's in it, but my guess is that what you actually need in the kernel is buffer allocation, memory protection and command submission code, plus some modesetting/display-specific bits so you can show basic graphics without the userland.
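For what it's worth, the in-kernel piece already sits behind a fairly narrow ioctl interface on /dev/dri/*, with Mesa doing most of the heavy lifting in userspace; the question is really how much of amdgpu's bulk has to live behind that boundary. A trivial userspace peek at the kernel side (stock libdrm, nothing amdgpu-specific; build against pkg-config's libdrm):<p><pre><code> /* Prints which kernel driver sits behind the DRM device node. */
 #include <stdio.h>
 #include <fcntl.h>
 #include <unistd.h>
 #include <xf86drm.h>

 int main(void)
 {
     int fd = open("/dev/dri/card0", O_RDWR);  /* node name may differ */
     if (fd < 0) {
         perror("open");
         return 1;
     }

     drmVersionPtr v = drmGetVersion(fd);      /* kernel driver name/version */
     if (v) {
         printf("kernel driver: %s %d.%d.%d\n", v->name,
                v->version_major, v->version_minor, v->version_patchlevel);
         drmFreeVersion(v);
     }

     close(fd);
     return 0;
 }
</code></pre>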
Why are GPU drivers baked into the kernel?<p>Wouldn’t it be better to load them in such a way that a crash in the GPU driver can be recovered from, as opposed to crashing the whole system?<p>Other operating systems load their GPU drivers separately.
This could be expressed in a binary format using far less space, but expressing it as code/text, I suppose, makes it more suitable to call it source.
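To sketch what the compact, table-driven alternative might look like (names and values invented purely for illustration):<p><pre><code> /* One small row per register field instead of a pile of #defines. */
 #include <stdint.h>

 struct reg_field {
     uint32_t offset;  /* register offset within the block */
     uint8_t  shift;   /* bit position of the field */
     uint8_t  width;   /* field width in bits */
 };

 /* one table per IP-block revision, generated or shipped as data */
 static const struct reg_field gc_example_fields[] = {
     { 0x1234, 0, 1 },  /* EXAMPLE_CNTL.ENABLE */
     { 0x1234, 1, 3 },  /* EXAMPLE_CNTL.MODE   */
 };

 static inline uint32_t field_mask(const struct reg_field *f)
 {
     return ((1u << f->width) - 1u) << f->shift;
 }
</code></pre>
The trade-off, of course, is that grep-able names are exactly what lets the verbose headers double as de-facto documentation.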
Does a graphical representation of the files in the Linux kernel exist anywhere? Like a graphical file explorer but for the different kernel components.