Psychoacoustic masking was the premise of Sony's ATRAC compression scheme for audio (used almost exclusively on the MiniDisc format).<p>Today it'd no doubt get a press release describing it as AI...
In the example image the compression seems more noticeable on some parts of the image than others, though that's probably due in part to my imagination. It gives me a wacky (far-future) idea, though: what about combining an NN-based compression system with eye-tracking and attention data, varying the level of detail in an image depending on salience?
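To make the salience idea concrete, here's a minimal sketch of what the allocation step might look like: quantize 8x8 blocks coarsely where a saliency map says attention is unlikely, and finely where it's likely. The function name, the linear saliency-to-step mapping, and the step range are all my own hypothetical choices, not anything from an actual codec.

```python
import numpy as np

def salience_weighted_quant(image, saliency, q_min=2, q_max=32, block=8):
    """Quantize 8x8 blocks more coarsely where saliency is low.

    `image` and `saliency` are 2D float arrays in [0, 1]. The linear
    mapping from mean block saliency to quantization step is a
    hypothetical rule chosen just for illustration.
    """
    h, w = image.shape
    out = np.empty_like(image)
    for y in range(0, h, block):
        for x in range(0, w, block):
            s = saliency[y:y + block, x:x + block].mean()
            # larger step (fewer levels) where attention is unlikely
            step = q_max - s * (q_max - q_min)
            levels = np.round(image[y:y + block, x:x + block] * 255 / step)
            out[y:y + block, x:x + block] = levels * step / 255
    return out
```

A real system would presumably learn the bit allocation end-to-end rather than use a hand-written rule like this, but the principle (spend bits where the viewer is looking) is the same.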
This is super-interesting as a facet of the "Intelligence is Compression" model. It's tempting to anthropomorphize and say that the system is building an opinion about what an image, fundamentally, _is_. I'm inclined to believe that these kinds of compressive abstractions are integral to higher-level reasoning. Could you build a system with even better behavior, for example, if you included text snippets describing the images, and a multi-modal model?<p>I'd be interested to see an analysis of the behaviors of this system compared to more generative efforts, like autoencoders or Deep Dream.
It's also been noted that they're using WebP instead of the older JPEG / PNG formats in the Chrome Web Store / Hangouts / etc.<p><a href="https://techcrunch.com/2013/02/07/google-now-uses-its-own-webp-format-instead-of-pngs-in-the-chrome-web-store/" rel="nofollow">https://techcrunch.com/2013/02/07/google-now-uses-its-own-we...</a>