> <i>Huffman coding tries to compress text one letter at a time on the assumption that each letter comes from some fixed and known probability distribution. If the algorithm is successful then we'd expect the compressed text to look like a uniformly distributed sequence of bits. If it didn't then there'd be patterns that could be used for further compression.</i><p>This can be genuinely confusing when you're comparing different compression systems (bit-oriented vs byte-oriented).<p>(<a href="https://groups.google.com/d/topic/lz4c/DcN5SgFywwk/discussion" rel="nofollow">https://groups.google.com/d/topic/lz4c/DcN5SgFywwk/discussio...</a>)<p>Someone was compressing very large log files with LZ4. When they then compressed the output, they got further reductions in size.<p>> <i>The fundamental reason is that these highly repetitive byte sequences, with very small and regular differences, produce repetitive compressed sequences, which can therefore be compressed further.</i> - Yann Collet
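<p>You can see both effects with a small sketch. This isn't LZ4; a toy byte-aligned run-length encoder stands in for a compressor without an entropy-coding stage, and zlib (which does end with Huffman coding) plays the other role. The RLE output keeps visible byte-level structure and compresses again; the zlib output is already near-random bits, so recompressing it gains little or nothing:

```python
import zlib

def rle_compress(data: bytes) -> bytes:
    """Toy byte-oriented run-length encoding: (count, byte) pairs, count <= 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        run = 1
        while i + run < len(data) and run < 255 and data[i + run] == b:
            run += 1
        out += bytes([run, b])
        i += run
    return bytes(out)

# Highly repetitive input with small, regular differences:
# runs of each byte value, with the whole pattern repeated.
data = b"".join(bytes([c]) * 100 for c in range(256)) * 4

rle = rle_compress(data)         # byte-aligned output, still structured
double = zlib.compress(rle, 9)   # so it compresses substantially further

direct = zlib.compress(data, 9)  # entropy-coded output looks uniform,
redone = zlib.compress(direct, 9)  # so a second pass barely helps

print(len(data), len(rle), len(double), len(direct), len(redone))
```

The RLE stream here is a regular sequence of (count, byte) pairs, exactly the kind of "repetitive compressed sequence" the quote describes, which is why the second compressor finds so much to remove.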