This used to be considered cutting edge for image upscaling. Now, the results look hilariously bad compared to what deep learning upscalers produce.

The fractal-based methods had a unique painting-like look, and edges remained relatively crisp. This is what it looked like: https://images-na.ssl-images-amazon.com/images/G/01/software/detail-page/gf-comparison.jpg
I know I'm in deep on Wikipedia when random links submitted to HN already show up as visited, but I barely recall seeing this article, and I definitely didn't get to it via HN the first time.
Not sure why they seemed to stop at IFS as the generating formula... I guess finding coefficients for it is easier? Now (as a sibling comment noted) neural nets have totally owned this type of thing, but I reckon there would now be ways to make neural nets produce coefficients for more complex generating formulas, such that the compression ratio could be utterly insane. (Could be like the demoscene "4k intro" of the future?)
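To make the "a few coefficients are the whole image" idea concrete, here's a rough sketch of rendering an IFS with the chaos game. The four affine maps are the well-known Barnsley fern coefficients; the grid size and iteration count are arbitrary choices, and this only illustrates how little data a generating formula needs, not how the fractal image codecs discussed in the article actually encode a photo:

    # Minimal chaos-game sketch: 4 affine maps (28 numbers) generate a detailed fern.
    import random

    # Each map is (a, b, c, d, e, f, p): x' = a*x + b*y + e, y' = c*x + d*y + f,
    # chosen with probability p. These are the classic Barnsley fern coefficients.
    MAPS = [
        ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
        ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
        ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
        (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
    ]
    WEIGHTS = [m[6] for m in MAPS]

    W, H = 80, 40                      # character-cell "image" size (arbitrary)
    grid = [[' '] * W for _ in range(H)]
    x, y = 0.0, 0.0

    for _ in range(100_000):
        a, b, c, d, e, f, _p = random.choices(MAPS, weights=WEIGHTS)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        # Map the attractor's rough bounding box (x in [-2.2, 2.7], y in [0, 10]) to the grid.
        col = int((x + 2.2) / 4.9 * (W - 1))
        row = H - 1 - int(y / 10.0 * (H - 1))
        if 0 <= col < W and 0 <= row < H:
            grid[row][col] = '*'

    print('\n'.join(''.join(r) for r in grid))

Running it draws a recognizable fern from 28 numbers. Fractal compression runs the same machinery in reverse, searching for maps whose attractor approximates a given image, which is why encoding was always far slower than decoding.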
Twitter image coding challenge used this technique a while ago: https://stackoverflow.com/a/929360