There's an interesting side point here: there exist (and there will be more of these in the future) data compression algorithms which are, in general, *too slow* for specific software use cases (related: https://superuser.com/questions/263335/compression-software-with-slow-rates-and-very-high-compression).

The thing is -- they typically run too slowly for their intended applications on current, consumer-grade CPUs...

But could some of them be optimized to take advantage of GPUs (as Brotli is here)? And would that raise their performance to a level where applications which previously couldn't use them -- because the algorithm took too long -- now can, IF the software end user has the proper GPU?

I think there's a huge amount of possibility here...

Especially when you get to compression algorithms that include somewhat esoteric stuff, like Fourier transforms, wavelet transforms -- and other weird and esoteric math, both known and yet-to-be-discovered... (a rough sketch of the wavelet idea is at the end of this comment).

In other words, we've gone far beyond Huffman and Lempel-Ziv when we're in this territory...

(In fact, there should be a field of study... the confluence/intersection of GPUs and compression algorithms... yes, I know... something like that probably already exists(!)... but I'm just thinking aloud here! <g>)

In conclusion: I expect a lot of interesting developments in this area...
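To make the wavelet angle a bit more concrete, here's a minimal sketch (mine, not from the article) of one level of a 1-D Haar wavelet transform plus coefficient thresholding -- the transform+quantize front half of a transform coder, before any entropy coding. The function names and the threshold value are purely illustrative. The point to notice: every sum/difference pair is independent of all the others, which is exactly the data-parallel shape a GPU eats up.

    # Minimal sketch: one level of a 1-D Haar wavelet transform, then
    # thresholding of the detail coefficients. Names and the threshold
    # are illustrative, not from any particular library.
    import numpy as np

    def haar_1d(signal):
        # Pair up neighbors; each pair is processed independently of
        # every other pair -- an embarrassingly parallel, GPU-friendly
        # access pattern.
        pairs = signal.reshape(-1, 2)
        averages = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
        details = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
        return averages, details

    def compress(signal, threshold=0.1):
        # Zero out small detail coefficients; an entropy coder
        # (Huffman, range coding, ...) would then squeeze the result.
        # This sketch stops at the transform+quantize stage.
        averages, details = haar_1d(signal)
        details[np.abs(details) < threshold] = 0.0
        return averages, details

    # A smooth signal compresses well: its detail coefficients are tiny.
    x = np.sin(np.linspace(0.0, 2.0 * np.pi, 1024))
    avg, det = compress(x)
    print(np.count_nonzero(det), "of", det.size, "detail coefficients survive")

On a smooth input like that sine wave, nearly all the detail coefficients fall below the threshold and get zeroed -- that's where the compression comes from -- and at scale, those independent pairwise ops are the part you'd hand off to the GPU.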