It reminds me of schemes like Elias delta coding (https://en.wikipedia.org/wiki/Elias_delta_coding) and Elias omega coding (https://en.wikipedia.org/wiki/Elias_omega_coding).
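For reference, here is a minimal sketch of Elias delta coding (the comparison point, not the article's scheme): the bit length of the value is itself gamma-coded, so the prefix overhead grows roughly like log(log(x)) rather than log(x).

    def elias_delta_encode(x: int) -> str:
        """Encode a positive integer as an Elias delta bit string."""
        assert x >= 1
        n = x.bit_length() - 1            # floor(log2 x)
        l_bits = bin(n + 1)[2:]           # binary of the bit length of x
        # gamma-code the length: (len-1) zeros, then the length itself,
        # then the bits of x with its leading 1 dropped
        return "0" * (len(l_bits) - 1) + l_bits + bin(x)[3:]

    def elias_delta_decode(bits: str) -> int:
        """Decode a single Elias delta code word from a bit string."""
        zeros = 0
        while bits[zeros] == "0":         # count the gamma prefix
            zeros += 1
        length = int(bits[zeros:2 * zeros + 1], 2)   # bits in x, incl. leading 1
        rest = bits[2 * zeros + 1:2 * zeros + length]
        return int("1" + rest, 2)         # restore the implicit leading 1

    if __name__ == "__main__":
        for x in (1, 2, 17, 1000):
            code = elias_delta_encode(x)
            print(x, code, elias_delta_decode(code))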
It's a neat encoding, but the writeup is needlessly confusing. The introduction would really benefit from a graph or two, or even just a textual sequence showing how the value evolves as you iterate on it.

Similarly, while the end result is quite elegant, the consequences of the choices made to get there aren't really explained. It takes quite a bit of extra effort to reason out what would have happened if things had been done differently.
It feels like you could use e (or, I guess, any number) as the logarithm base rather than 2. I wonder if that would make it a bit more uniform in some sense.
> Any value representable in n bits is representable in n+1 bits

> [1 1 1 1 1 1 1] 2.004e+19728

Does that mean the 8-bit version has numbers larger than this? That doesn't seem very useful, since 10^100 is already infinity for all practical purposes.
Holy Father, this is amazing. The things a human mind can do! Ask an LLM for this and it'll give you ten more complicated and unmaintainable ways to do it.