I see a number of paths for neural network compression.<p>The simplest is a network with inputs of [X, Y] and outputs of [R, G, B], where the image is encoded into the network weights. You have to train the network per image. My guess is it would need large, complex images before you could get compression rates comparable to simpler techniques.
An example of this can be seen at <a href="http://cs.stanford.edu/people/karpathy/convnetjs/demo/image_regression.html" rel="nofollow">http://cs.stanford.edu/people/karpathy/convnetjs/demo/image_...</a><p>In the same vein, you could encode video as a network of [X, Y, T] --> [R, G, B]. I suspect that would get into lifetime-of-the-universe scales of training time to reach high quality.<p>The other way to go is a neural net decoder. The network is trained to generate images from input data. You could theoretically train a network to do an IDCT, so it is also within the bounds of possibility that you could train a better transform with better quality/compressibility characteristics. This is one network for all possible images.<p>You can also do hybrids of the above techniques, where you train a decoder to handle a class of images and then provide an input bundle.<p>I think the place where neural networks would excel is as a predictive+delta compression method.
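The IDCT point is easy to make concrete, because the inverse DCT is linear: a single fully connected layer with no activation can represent it exactly, so a trained decoder can at worst match the IDCT and at best learn something better. A sketch (assuming a 1-D 8-point orthonormal DCT rather than JPEG's full 2-D 8x8 version):

```python
import numpy as np

N = 8
n = np.arange(N)
# Orthonormal DCT-II basis matrix: row k, column j is cos(pi*(2j+1)*k / 2N).
basis = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
basis[0] *= np.sqrt(1 / N)
basis[1:] *= np.sqrt(2 / N)

# A "decoder layer" whose weights implement the exact inverse transform.
# Because the basis is orthonormal, the inverse is simply the transpose.
W_decoder = basis.T

signal = np.sin(n * 0.7) + 0.3
coeffs = basis @ signal        # "encode": forward DCT
decoded = W_decoder @ coeffs   # "decode": one linear layer, no activation
```

Since gradient descent can recover W_decoder from (coeffs, signal) pairs, nothing stops it from instead learning a transform tuned to real image statistics.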
Neural networks should be able to predict based upon the context of the parts of the image that have already been decoded.<p>Imagine a neural network image upscaler that doubles the size of a lower-resolution image. If you store a delta map to correct any areas where the upscaler guesses excessively wrong, then you have a method to store arbitrary images. Ideally you can roll the delta encoding into the network as well. Rather than just correcting poor guesses, the network could rank possible outputs by likelihood. The delta map then just picks the correct guess, which, if the predictor is good, should result in an extremely compressible delta map.<p>The principle is broadly similar to wavelet compression, only with a neural network the network can potentially go "That's an eye/frog/egg/box, I know how this is going to look scaled up."
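A sketch of the predict+delta idea (assumptions mine: a trivial nearest-neighbour upscaler stands in for the neural predictor, and zlib's compressed size stands in for a real entropy coder). Store the half-size image plus a residual map; the better the predictor, the smaller the residual:

```python
import zlib
import numpy as np

H = W = 64
yy, xx = np.mgrid[0:H, 0:W]
# Smooth synthetic grayscale image (uint8) standing in for a photo.
img = (127 + 100 * np.sin(xx / 9.0) * np.cos(yy / 7.0)).astype(np.uint8)

small = img[::2, ::2]                              # stored low-res version
pred = np.repeat(np.repeat(small, 2, 0), 2, 1)     # predictor: 2x upscale
delta = img.astype(np.int16) - pred                # correction ("delta map")

# Lossless round trip: prediction plus delta reproduces the original exactly.
# For this smooth image the residual fits int8, so compare compressed sizes.
raw_size = len(zlib.compress(img.tobytes(), 9))
delta_size = len(zlib.compress(delta.astype(np.int8).tobytes(), 9))
```

The residual is dominated by values near zero, so it entropy-codes far better than the raw pixels; a learned upscaler would shrink it further by actually recognising image content.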