Various corrections:<p>The article says that ternary uses exponentially fewer symbols (actually, in further nitpicking, it says "bits" rather than "symbols") than binary, but that's not correct. The decrease is only linear — a constant factor of log_2(3) ≈ 1.585 fewer digits — not exponential.<p>The comment about subtraction being a lot easier doesn't hold up in light of 2's-complement notation; yes, it's a <i>tiny</i> bit easier, but not by much (the comments about sign bits also seem out of place in this light).
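To put a number on that, here's a quick Python sketch (mine, just illustrative) comparing digit counts in base 2 and base 3 — the ratio hovers around log_2(3) ≈ 1.585 rather than growing:

    import math

    def digits(n, base):
        # number of digits needed to write n in the given base
        # (floating-point rounding can be off by one at exact powers; fine for a sketch)
        return 1 if n == 0 else math.floor(math.log(n, base)) + 1

    for n in (10**3, 10**12, 10**100):
        b2, b3 = digits(n, 2), digits(n, 3)
        print(n, b2, b3, round(b2 / b3, 3))  # ratio approaches ~1.585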
The 2 of 5 encoding reminded me of "8 to 10" codes [1], which (as far as I know) are still in use by disk and tape drives. It's for a slightly different reason than the 2 of 5 code, but it's roughly the same idea.<p>[1] <a href="https://en.wikipedia.org/wiki/8b/10b_encoding" rel="nofollow">https://en.wikipedia.org/wiki/8b/10b_encoding</a>
> (FYI I’m going to reverse the conventional order so that the 2⁰ is the left most throughout this post)<p>Um, why would you break (or, worse, <i>invert</i>) such a common convention?<p>> So 11 is 9+3–1.<p>...and not follow through, just two sentences later?<p>> [...] at which point the number 11 would be 2-0-1 (1+1+9).<p>Surely you mean 2+0+9 or 9+0+2?
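For what it's worth, a tiny Python sketch (my own illustration, not from the article) of the reversed, least-significant-digit-first order the author seems to be using, which does give 2-0-1 for 11:

    def base3_lsd_first(n):
        # base-3 digits with the 3^0 digit leftmost (the article's reversed order)
        out = []
        while n:
            n, r = divmod(n, 3)
            out.append(r)
        return out or [0]

    print(base3_lsd_first(11))  # [2, 0, 1]  ->  2*1 + 0*3 + 1*9 = 11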
> POSTNET (the old barcode system the Post Office used to route mail up until a few years ago)<p>Oh, are these not in use anymore?<p>> The first group had two bits, one representing the number 0 and the other representing the number 5. The second group had five bits representing the numbers 0–4.<p>Sounds like an abacus to me…
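For anyone curious, here's a rough sketch of the two-group scheme described in that quote (Python; the bit layout is my own assumption, not necessarily how POSTNET or the article lays it out) — exactly one bit set in each group, very much like a one-bead/five-bead abacus column:

    def biquinary(d):
        # encode a decimal digit as two "bi" bits (0 or 5) plus five "quinary" bits (0-4)
        assert 0 <= d <= 9
        bi = [1, 0] if d < 5 else [0, 1]                   # which of 0 or 5
        qui = [1 if i == d % 5 else 0 for i in range(5)]   # which of 0..4
        return bi + qui

    print(biquinary(7))  # [0, 1, 0, 0, 1, 0, 0]  ->  5 + 2 = 7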
The Harwell WITCH used Dekatrons, devices with ten states that can therefore each store one decimal digit.<p><a href="https://www.youtube.com/watch?v=vVgc8ksstyg" rel="nofollow">https://www.youtube.com/watch?v=vVgc8ksstyg</a>
I wonder if bi-quinary decimal influenced the later Packed Decimal (known as COMP-3 in COBOL), which stored numbers in just over half the space of the text equivalent (e.g. a 7-digit number would need 4 bytes).
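If it helps, a minimal sketch of packed decimal in Python (sign-nibble values 0xC/0xD assumed, as in common COMP-3 usage), showing why 7 digits fit in 4 bytes:

    def pack_decimal(digits_str, positive=True):
        # two BCD digits per byte, with the sign in the final low nibble
        nibbles = [int(c) for c in digits_str] + [0xC if positive else 0xD]
        if len(nibbles) % 2:                 # pad a leading zero nibble if needed
            nibbles.insert(0, 0)
        return bytes(hi << 4 | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

    packed = pack_decimal("1234567")
    print(packed.hex(), len(packed))  # 1234567c 4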