I am currently participating in the specification of a binary format that includes variable-length integers.<p>I am wondering why VLQ/LEB128 [1] uses base 128 instead of the more usual base 64. Is this related to the specific needs of the domains these formats come from?<p>Wouldn't base 64 be simpler to implement?<p>[1] https://en.wikipedia.org/wiki/LEB128
What is usual about base 64? Many variable-length schemes use the first bit of each byte to signal whether this is the last byte, then use the remaining 7 bits to encode the number.<p>The counterexample is schemes like UTF-8, where a byte's prefix is 0, 10, 110, etc., and the number of leading 1s in the first byte tells how many bytes the sequence uses.
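To illustrate the continuation-bit scheme the parent describes: a minimal sketch of unsigned LEB128 in Python (function names are my own, not from any particular library). Each output byte carries 7 payload bits; the high bit is set on every byte except the last.

```python
def encode_uleb128(n: int) -> bytes:
    """Encode a non-negative integer as unsigned LEB128."""
    out = bytearray()
    while True:
        byte = n & 0x7F              # low 7 bits of the remaining value
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit set: more bytes follow
        else:
            out.append(byte)         # high bit clear marks the last byte
            return bytes(out)

def decode_uleb128(data: bytes) -> int:
    """Decode an unsigned LEB128 byte sequence back to an integer."""
    result = 0
    for shift, byte in enumerate(data):
        result |= (byte & 0x7F) << (7 * shift)  # little-endian 7-bit groups
        if not byte & 0x80:          # continuation bit clear: done
            break
    return result
```

For example, 624485 encodes to the three bytes E5 8E 26 (the worked example on the Wikipedia page). Note that a base-64 variant would be no simpler: the loop is identical, only with a 6-bit mask, and it would waste one bit per byte.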