Gustafson gives a compelling argument, especially for low precision like 8-bit and 16-bit. At 8 bits, operations can be implemented as table lookups on FPGAs. Unums are better than IEEE, but couldn't an application-specific choice do even better, given that we're only talking about choosing 256 numbers?
This was more interesting than I expected. Who knew that many architectures have a flag indicating that a float value is inexact, but that almost no language exposes it to programmers. Or that IEEE 754 isn't really a standard so much as a set of guidelines.

He makes a compelling argument for why his ubit/posit proposal makes mathematically truthful statements, while floating point lies to you. The trade-offs make a lot of sense: no more overflow/underflow, and better closure under arithmetic operations.

“Floating point numbers are like piles of sand; every time you move them around, you lose a little sand and pick up a little dirt.” -- Brian Kernighan
What's really needed is an error term that can be computed in parallel with your normal floating-point operation. Floating-point code is nearly impossible to write correctly in practice if you don't account for the accumulated error.