Initially I assumed they were not including the Huffman coding step, but no:<p><i>The bytes in the files do not have consistent meanings and would depend on their context and the implicit Huffman tables. [...]</i><p><i>However, we observe that conventional, vanilla language modeling surprisingly conquers these challenges without special designs as training goes (e.g., JPEG-LM generates realistic images barely with any corrupted JPEG patches).</i><p>That surprised me, but then I'm not in the field.
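<p>To make the context-dependence concrete, here's a minimal sketch (my own illustration, not from the paper) that walks a baseline JPEG's marker segments using only the standard library: everything after the SOS header is entropy-coded data whose byte values are meaningless without the Huffman tables defined in the earlier DHT segments. The filename is a placeholder, and it ignores edge cases like 0xFF fill bytes, restart markers, and progressive scans:<p>
    # A sketch: walk a baseline JPEG's marker segments (stdlib only).
    def jpeg_segments(data):
        """Yield (marker, payload_offset, payload_len) for each segment."""
        assert data[:2] == b"\xff\xd8", "missing SOI marker"
        i = 2
        while i + 4 <= len(data):
            assert data[i] == 0xFF, "expected a marker byte"
            marker = data[i + 1]
            length = int.from_bytes(data[i + 2:i + 4], "big")  # includes itself
            yield marker, i + 4, length - 2
            if marker == 0xDA:  # SOS: the rest of the file is the scan
                scan_start = i + 2 + length
                # These bytes are Huffman-coded; their meaning depends
                # entirely on the tables defined in the DHT segments above.
                yield None, scan_start, len(data) - scan_start - 2  # minus EOI
                return
            i += 2 + length

    names = {0xC4: "DHT (Huffman tables)", 0xDA: "SOS header",
             None: "entropy-coded scan"}
    with open("example.jpg", "rb") as f:  # placeholder filename
        data = f.read()
    for marker, off, n in jpeg_segments(data):
        label = names.get(marker, "marker 0x%02X" % marker)
        print(f"{label:22} {n:8} bytes at offset {off}")
<p>The upshot: a byte-level LM sees the DHT bytes and the scan bytes in one flat sequence and has to learn the table-dependent decoding implicitly, which is exactly the challenge the quoted passage says vanilla language modeling ends up handling.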