Take the following as coming from a dilettante... I'm still trying to understand the remainder of the paper, but I felt like writing about the basics of the encoder/decoder/quantizer setup they mention.

I found this part particularly interesting: "To compress an image x ∈ X, we follow the formulation of [20, 8] where one learns an encoder E, a decoder G, and a finite quantizer q."

I feel like this is related to some standard human memorization/learning techniques. Example: I'm learning guitar fretboard note placement in E standard. It's difficult for me to visualize the first 4 frets of a 6-string guitar with the notes on each fret.

To help me memorize the note placement, I develop various mnemonic devices (both lossy and lossless). I know I've memorized the fretboard sufficiently when I can visualize it.

Attempting to translate my reading of the paper, I believe the following analogy is apt. My "encoder" operates on the short-term mental image I have when I close my eyes after looking at a fret diagram. It produces semantic objects, i.e., an ordered sequence of "letters" or pairs of letters (letters that are horizontally, vertically, or diagonally aligned). The quantizer takes these objects and looks at their order/distribution. It places more importance on some of the semantic objects than others (the fourth fret has 4 natural notes before an accidental). My decoder interprets the stored/compressed note information to try to reproduce the image. It may be off substantially, so I correct and repeat the process.

The process of optimizing what the semantic objects are, the weight each gets, and how I use them to derive the original image seems like a fairly good representation of what I do (though at least some of that appears to be fixed in the learning algorithm, typically). Of course, analogies are just that, and mine doesn't take into account the discriminator or the remaining "heart" of the paper.

I think the heart of the paper is that they're using GANs to learn a good way to both store the image and recover it, reducing bits per pixel while increasing the quality of the reproduction. In classical terms, the GAN training thus tweaks the compressor, the stored data format, and the decompressor, optimizing what should be "hard-coded" in the compression/decompression program versus what gets stored as the output of compressing a particular image.

Very hand-wavy, but I think the general idea is right?
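
For anyone who wants the E/q/G formulation in concrete terms, here's a minimal sketch in PyTorch of how the pipeline fits together. To be clear, this is my own illustrative toy, not the paper's architecture: the layer sizes, the number of quantization centers, and the straight-through gradient trick are all assumptions on my part.

    # A toy E / q / G pipeline. Layer sizes, number of quantization
    # centers, and the dummy input are illustrative, not the paper's values.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Encoder(nn.Module):
        """E: maps an image x to a latent feature map y = E(x)."""
        def __init__(self, channels=64, latent_channels=8):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, channels, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(channels, latent_channels, 3, padding=1),
            )
        def forward(self, x):
            return self.net(x)

    class FiniteQuantizer(nn.Module):
        """q: snaps each latent value to the nearest of a small, fixed set of
        centers, so the compressed representation is a grid of discrete symbols."""
        def __init__(self, levels=5):
            super().__init__()
            self.register_buffer("centers", torch.linspace(-2.0, 2.0, levels))
        def forward(self, y):
            # Hard assignment to the nearest center...
            dist = (y.unsqueeze(-1) - self.centers) ** 2
            y_hard = self.centers[dist.argmin(dim=-1)]
            # ...with a straight-through estimator so gradients still reach E.
            return y + (y_hard - y).detach()

    class Decoder(nn.Module):
        """G: reconstructs an image x_hat from the quantized latent."""
        def __init__(self, channels=64, latent_channels=8):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(latent_channels, channels, 3, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1),
            )
        def forward(self, y_hat):
            return self.net(y_hat)

    E, q, G = Encoder(), FiniteQuantizer(), Decoder()
    x = torch.rand(1, 3, 64, 64)   # a dummy image batch
    y_hat = q(E(x))                # "stored" representation: finite symbols per location
    x_hat = G(y_hat)               # reconstruction from the stored symbols
    distortion = F.mse_loss(x_hat, x)
    # In the GAN setup, this distortion term gets combined with a rate term
    # (how many bits the symbols in y_hat cost to store) and an adversarial
    # term from a discriminator looking at x_hat; E and G are trained jointly
    # against that combined objective.

The finite quantizer is what makes this a compressor rather than just an autoencoder: once E's output is forced onto a small set of symbols you can actually count bits, and the question of what gets "hard-coded" in E/G versus what gets stored per image falls out of that.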