Keith Packard is a major contributor to Linux graphics; a recent blog post of his notes:<p>> So, you've got a fine head-mounted display and want to explore the delights of virtual reality. Right now, on Linux, that means getting the window system to cooperate because the window system is the DRM master and holds sole access to all display resources. So, you plug in your device, play with RandR to get it displaying bits from the window system and then carefully configure your VR application to use the whole monitor area and hope that the desktop will actually grant you the boon of page flipping so that you will get reasonable performance and maybe not even experience tearing. Results so far have been mixed, and depend on a lot of pieces working in ways that aren't exactly how they were designed to work.<p>His blog has some recent updates; he is consulting for Valve, working on VR.<p><a href="https://keithp.com/blogs/DRM-lease/" rel="nofollow">https://keithp.com/blogs/DRM-lease/</a>
There are few things I hate with more passion than I do the Linux "Graphics Stack." I realize that it inherited a toxic culture from UNIX (remember the X/Motif/xNews/SunTools/etc. wars? I do), and even today there is active warfare in the stack: Wayland vs Xorg, GTK vs Qt, Gnome vs KDE vs *desktop, HW vendors vs FOSS, etc.<p><rant> A great example of this is how broken the simplest of things can be. I've got a machine sitting next to me that refuses to boot into a "good" graphics configuration: it comes up in a 'safe' default 1024 x 800 mode even though the actual screen is a 2K screen connected over HDMI (whose EDID can tell it everything it needs to know). It uses a widely deployed nVidia card and the current non-free, non-FOSS nVidia graphics stack. Re-install the driver package and it works (until reboot). Yes, there is a knob somewhere that screws it up, but there are a billion knobs from a dozen layers. Is it DRM/KMS? Is it LightDM? Is it nVidia? Is it OpenGL? Is it a friggin' option buried in the boot args? It could be anywhere, and it's going to take me a few hours to figure out where. In terms of wasted salary time that's enough for me to buy a Mac Pro or a Windows box that works every time, all the time. </rant>
There is another great resource, mainly about X11, called Xplain[1], with an accompanying repository[2] that contains some hidden (not yet ready, I suppose) chapters. For example, the override-redirect description[3].<p>[1]: <a href="https://magcius.github.io/xplain/article/" rel="nofollow">https://magcius.github.io/xplain/article/</a><p>[2]: <a href="https://github.com/magcius/xplain" rel="nofollow">https://github.com/magcius/xplain</a><p>[3]: <a href="https://magcius.github.io/xplain/article/menu.html" rel="nofollow">https://magcius.github.io/xplain/article/menu.html</a>
I don't like these slides. It's not that they're terribly wrong, but they gloss over some really important aspects, and they make rather simple problems appear harder than they are.<p>Take for example slide 29. It suggests that off-screen redirection of OpenGL applications (as required for composition) is something special that needs to be treated differently from non-OpenGL graphics. This is simply not true. If OpenGL is used in a <i>window system integrated</i> context (WSI is a fairly new term that has been properly defined only recently, but the principle has been the same since the beginning; what's new is that since OpenGL 3 you can use a GL context <i>without</i> WSI), the window framebuffer is <i>not</i> managed by the OpenGL implementation but by whatever windowing system is in use (e.g. X11, Win32 GDI, etc.); the OpenGL implementation just borrows it. And the same mechanism (and, incidentally, the same code paths) that allows a WSI drawable to be used as a rendering destination for OpenGL also allows the flow of data to be turned around, using a WSI drawable as a source for texture access. Somewhere at the bottom it's all just pointers to regions of graphics memory, after all.<p>It's a pretty simple process, actually, and the only complicated things are the weirdly convoluted APIs that have grown around it to expose something that had always been there but had been hidden from applications. Consider how quickly AIGLX was hacked together after Xgl showed up; IIRC there were just a couple of months between them.<p>That's the main insight that led to the Wayland project. 
Do away with the API cruft and expose the one thing that has been possible all along anyway.<p>Oh, and it should maybe also be pointed out that GLX_EXT_texture_from_pixmap is useful for much more than just composition, and can be used without Composite redirection.<p>And then there's slide 13, which is simply wrong in stating that there was a time when "Indirect Rendering (…) didn't allow for hardware acceleration." Indirect rendering implies no such thing. It just implies that there is no fast path between the application process and the graphics hardware, which slows down data transfers. But display lists were a staple of OpenGL back then, and they offered (and, for the legacy code that still uses them, continue to offer) excellent performance; it actually took some time for drivers' buffer-object-based vertex array code paths to catch up with display list performance. One could use display lists over indirect GLX just fine (and the ARB_vertex_buffer_object extension actually defines GLX protocol opcodes, so you can even have buffer objects over indirect contexts, too).