Author here - I imagine this is a bit too niche to get much traction on HN. There's a bit of discussion on bsky: https://bsky.app/profile/pema99.bsky.social/post/3lotdtgowf22a
This isn't my specialty, and it ultimately doesn't matter to the core point of this good submission about how the GPU chooses which mipmap level to use. However, the article gives the impression that we pre-calculate mipmap levels to reduce distant aliasing, even though the problem it demonstrates is solved by trivial texture filtering.

Mipmaps are a performance optimization[1]. You could just use a 4096x4096 brick texture across your entire game and rely on texture filtering to make it look good both close up and far away, but then rendering a distant wall polygon that fills only a few pixels of the viewport has to filter a 16.7-million-texel texture, redoing the filtering again and again and evicting everything else from the caches for that one texture. If the GPU can instead apply a pre-filtered 32x32 level to loads of distant objects, the performance ramifications are obviously massive. That is why mipmaps are used: they let huge textures carry detail where it is valuable, without destroying performance when the surface is just some distant object.

Modern engines now do the same thing with geometry: ideally there is a hierarchy of level-of-detail meshes, and the engine picks the vertex-heavy version when the object fills the scene and the tiny, highly optimized one when it covers only a few pixels.

[1] As an additional note, all major graphics platforms can automatically generate mipmaps for textures - but only if the root level is uncompressed. Modern texture compression is hugely compute-bound and yields major VRAM savings, so almost all games pre-compute the mip chain and do the onerous compression in advance.
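To make the pre-filtering concrete, here's a minimal sketch of building a mip chain offline with a 2x2 box filter on a square grayscale texture. The names are mine and real tools use better filters (and then compress each level, per [1]), but the shape of the computation is the same: each level is a half-size, pre-averaged copy of the one above it.

    def build_mip_chain(base):
        """Build a mip chain from a square grayscale texture.

        base: list of rows of floats, side length a power of two.
        Returns [base, half-size, quarter-size, ..., 1x1].
        """
        chain = [base]
        while len(chain[-1]) > 1:
            prev = chain[-1]
            half = len(prev) // 2
            # Each texel of the new level is the average of a 2x2
            # block in the previous level - a simple box filter.
            level = [
                [
                    (prev[2 * y][2 * x] + prev[2 * y][2 * x + 1]
                     + prev[2 * y + 1][2 * x] + prev[2 * y + 1][2 * x + 1]) / 4.0
                    for x in range(half)
                ]
                for y in range(half)
            ]
            chain.append(level)
        return chain

    # A 4x4 checkerboard collapses toward flat gray as levels shrink,
    # which is exactly what a distant wall should average out to.
    tex = [[float((x + y) % 2) for x in range(4)] for y in range(4)]
    for i, level in enumerate(build_mip_chain(tex)):
        print("level", i, level)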
    "You couldn’t implement these functions yourself - they are magic intrinsics which are implemented in hardware"
But why?
Insane deep-dive! Framing texture sampling as "Ideally, we’d like to integrate over the projection of the screen pixel onto the texture" was enlightening for me. I particularly enjoyed the explanation of anisotropic filtering because it always seemed like magic to me, and in the context of aligning ellipses on textures it just makes sense :D
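If it helps anyone else, here's a rough numerical sketch of that ellipse framing, assuming the standard derivative-based approximation (names are mine; real hardware differs in the details): the per-pixel UV derivatives span the pixel's footprint on the texture, the footprint's minor axis picks the mip level, and the major/minor ratio says how many anisotropic samples to spread along the long direction.

    import math

    def select_lod(dudx, dvdx, dudy, dvdy, tex_size, max_aniso=16.0):
        """Approximate anisotropic mip selection from screen-space UV derivatives.

        (dudx, dvdx) and (dudy, dvdy) are how UV changes per screen pixel
        in x and y; together they span the pixel's footprint (an ellipse,
        approximated here by its two axis vectors) on the texture.
        """
        # Lengths of the two footprint axes, in texels.
        len_x = math.hypot(dudx, dvdx) * tex_size
        len_y = math.hypot(dudy, dvdy) * tex_size
        major, minor = max(len_x, len_y), max(min(len_x, len_y), 1e-6)

        # Clamp how elongated the footprint may get; the sampler takes
        # multiple taps along the major axis instead of blurring it away.
        ratio = min(major / minor, max_aniso)

        # Derive the mip level from the minor axis so detail along the
        # major axis survives; isotropic filtering would have to use the
        # major axis here and over-blur.
        lod = max(0.0, math.log2(major / ratio))
        return lod, ratio

    # Grazing-angle example: the footprint is 8 texels long but 1 texel
    # wide, so we keep a sharp mip (lod 0) and take ~8 anisotropic taps.
    print(select_lod(8 / 256, 0.0, 0.0, 1 / 256, tex_size=256))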
I didn't even read what these circle images mean, but it's fun to see that AMD and Adreno look the same... because Adreno descends from AMD/ATI's old mobile graphics architecture, which was sold off to Qualcomm a long time ago ("Adreno" is an anagram of "Radeon").