I'm struggling to understand what the point is. Video codecs don't store data in "pixel" format either; they store transform coefficients (DCT-style transforms in most codecs, wavelets in a few), i.e. data in the frequency domain.

From the article, the source data is in pixel format. No codec format can be better than the source data, and the output is displayed on a screen made of pixels. Pixels in, pixels out: the intermediate format can only make things worse, not better. It's also not surprising that they're able to do better than current formats, since it sounds like they're spending a huge amount of CPU on encoding. You could get better compression out of a new MPEG-4 profile that required more CPU, too.

The intermediate format may be tweaked to give better results when scaling, but you could probably add scaling hints to more conventional codecs too. And I'm not sure what the point would be anyway unless the source data had a lot more pixels than the output format...
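
(For anyone unfamiliar with what "frequency domain" means here, this is a rough Python sketch of the transform-plus-quantization step that block-based codecs are built on. The quantization matrix is made up purely for illustration and isn't taken from any real standard; it just shows where the "not pixels" representation comes from.)

    # Minimal sketch: one 8x8 "pixel" block -> frequency-domain coefficients -> back.
    # Assumes scipy is available; the quantization matrix below is illustrative only.
    import numpy as np
    from scipy.fft import dctn, idctn

    # A smooth gradient block, roughly like a patch of real image content.
    block = np.add.outer(np.arange(8), np.arange(8)) * 10.0 + 50

    # Forward 2D DCT: pixels -> frequency-domain coefficients.
    coeffs = dctn(block - 128, norm='ortho')

    # Quantization is where the lossy part happens: high-frequency coefficients
    # get larger step sizes and mostly round to zero, so little is stored for them.
    q = 16 + 4 * (np.arange(8)[:, None] + np.arange(8)[None, :])
    quantized = np.round(coeffs / q)

    # Decoder side: dequantize and inverse-transform back to pixels.
    reconstructed = idctn(quantized * q, norm='ortho') + 128

    print("nonzero coefficients stored:", np.count_nonzero(quantized), "of 64")
    print("max pixel error after round trip:", np.abs(block - reconstructed).max())

The stored bitstream is (roughly) those quantized coefficients after entropy coding, which is why "codecs already don't store pixels" is the crux of the comment above.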