Interesting article.<p>Other than as an exercise, it's not clear why someone would write a massively parallel <i>2D</i> renderer[1] that needs a GPU. Modern GPUs are overkill for 2D.<p>Now, 3D renderers, we need all the help we can get.<p>In this context, a "renderer" is something that takes in meshes, textures, materials, transforms, and objects, and generates images. It's not an entire game development engine, such as Unreal, Unity, or Bevy. Those have several more layers above the renderer. Game engines know what all the objects are and what they are doing. Renderers don't.<p>Vulkan, incidentally, is a level below the renderer: a cross-hardware API for asking a GPU to do all the things a GPU can do. WGPU for Rust is a wrapper that extends that concept across platforms (Mac, Android, browsers, etc.).
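To make that layering concrete, here's a rough sketch of the shape of a renderer-level API. All names here are hypothetical, not any particular crate's:

    // The renderer's whole world: meshes, materials, transforms, lights,
    // handed to it each frame. It sits above WGPU/Vulkan and below the
    // engine, which is the only layer that knows what the objects are.
    #[derive(Clone, Copy)] pub struct MeshId(pub u32);
    #[derive(Clone, Copy)] pub struct MaterialId(pub u32);

    pub struct DrawItem {
        pub mesh: MeshId,
        pub material: MaterialId,
        pub world_from_object: [[f32; 4]; 4], // object-to-world transform
    }

    pub struct PointLight {
        pub pos: [f32; 3],
        pub color: [f32; 3],
    }

    pub trait SceneRenderer {
        /// Turn this frame's draw items and lights into an image.
        /// No game logic, no scene graph; those live a level up.
        fn render_frame(&mut self, items: &[DrawItem], lights: &[PointLight]);
    }

Everything above that interface (which object is the player, what's animating) is engine territory; everything below it (command buffers, pipelines, descriptor sets) is Vulkan/WGPU territory.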
It seems like you should be able to write one general-purpose 3D renderer that works in a wide variety of situations, but in practice that hasn't worked out. I wish Rust had one. I've tried Rend3 (abandoned), and looked at Renderling (in progress), Orbit (abandoned), and Three.rs (abandoned). They all scale up badly as scene complexity increases.<p>There's a friction point in the design here. A renderer needs more information to render efficiently than it needs to just draw everything in a dumb way. Modern GPUs are good enough that a dumb renderer works pretty well, until scene complexity hits some limit. Beyond that point, problems such as lighting, which takes O(lights * objects) time if done naively, start to dominate. The CPU driving the GPU maxes out while the GPU sits at maybe 40% utilization. The operations that can easily be parallelized already have been. Now it gets hard.
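To make the quadratic cost concrete, here's the naive CPU-side light cull. The types are hypothetical stand-ins, not from any of the crates above:

    struct Sphere { center: [f32; 3], radius: f32 }
    struct Light  { pos: [f32; 3], range: f32 }
    struct Object { bounds: Sphere }

    fn dist2(a: [f32; 3], b: [f32; 3]) -> f32 {
        (0..3).map(|i| (a[i] - b[i]).powi(2)).sum()
    }

    /// For each light, collect the objects it can affect. With thousands
    /// of lights and thousands of objects, this inner loop is what maxes
    /// out the CPU while the GPU sits half idle.
    fn affected(lights: &[Light], objects: &[Object]) -> Vec<Vec<usize>> {
        lights.iter().map(|light| {
            objects.iter().enumerate()
                .filter(|(_, obj)| {
                    let reach = light.range + obj.bounds.radius;
                    dist2(light.pos, obj.bounds.center) <= reach * reach
                })
                .map(|(i, _)| i)
                .collect()
        }).collect()
    }

A spatial index (BVH, grid, clustered lighting) cuts this down, but that index naturally lives in the caller's scene graph, not in a general-purpose renderer. That's the friction.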
In Rust 3D land, everybody seems to write My First Renderer, hit this wall, and quit.<p>The big game engines (Unreal, etc.) handle this by using the game's scene-graph info to guide the rendering process. This is visually effective, very complicated, prone to bugs, and takes a huge engine dev team to make work.<p>Nobody has a good solution to this yet. What does the renderer need to know from its caller? A first step I'm looking at: for each light, the caller provides a lambda that iterates over the objects in range of that light. That way, the renderer can pull information out of the caller's spatial data structures without knowing their types. May or may not be a good idea. Too early to tell.
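Roughly the shape I have in mind, as a sketch. All of the names are made up and none of this is final:

    #[derive(Clone, Copy)]
    pub struct ObjectId(pub u32);

    pub struct Light<'a> {
        pub pos: [f32; 3],
        pub range: f32,
        /// Supplied by the caller: invokes the sink once per object in
        /// range, answered from the caller's own BVH/octree/scene graph.
        pub objects_in_range: &'a dyn Fn(&mut dyn FnMut(ObjectId)),
    }

    pub struct Renderer {
        /// (light index, object) pairs feeding the lighting pass.
        bindings: Vec<(usize, ObjectId)>,
    }

    impl Renderer {
        pub fn light_pass(&mut self, lights: &[Light]) {
            self.bindings.clear();
            for (i, light) in lights.iter().enumerate() {
                // Cost is proportional to the objects actually near this
                // light, not O(lights * objects); the caller's spatial
                // index did the narrowing.
                (light.objects_in_range)(&mut |obj| self.bindings.push((i, obj)));
            }
            // ... build per-light draw data from self.bindings ...
        }
    }

The renderer never sees the caller's scene-graph types, just an opaque query per light.<p>[1] <a href="https://github.com/linebender/vello/" rel="nofollow">https://github.com/linebender/vello/</a>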