One thing I'd like for floating-point numbers is two separate types: one where NaN and the two infinities are allowed, and one where producing them raises an error instead. The former would be used by the few who actually need them (numerical analysts etc.), and the rest of us could use the latter. The upside would be error handling closer to the source of the problem, and better optimization, since the non-finite values throw a wrench into optimizing math.
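The idea above can be sketched in Python with a hypothetical wrapper type (the name `CheckedFloat` and its interface are my invention, not an existing library): every arithmetic result is checked, so NaN or infinity raises right where it is produced instead of propagating silently.

```python
import math

class CheckedFloat:
    """Hypothetical 'finite-only' float: any operation whose result
    is NaN or infinite raises immediately, at the source."""
    __slots__ = ("value",)

    def __init__(self, value):
        value = float(value)
        if not math.isfinite(value):
            raise ArithmeticError(f"non-finite result: {value!r}")
        self.value = value

    def __add__(self, other):
        return CheckedFloat(self.value + float(other))

    def __sub__(self, other):
        return CheckedFloat(self.value - float(other))

    def __mul__(self, other):
        return CheckedFloat(self.value * float(other))

# Overflow to infinity is caught at the multiplication,
# not thousands of instructions later.
try:
    CheckedFloat(1e308) * 10
except ArithmeticError as e:
    print("caught:", e)
```

A real implementation would more likely use hardware trap flags (e.g. enabling FP exceptions) rather than a per-operation check, which is exactly why a dedicated type could also unlock optimizations: the compiler could assume all values are finite.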
Posits (<a href="https://posithub.org/docs/Posits4.pdf" rel="nofollow">https://posithub.org/docs/Posits4.pdf</a>) offer an interesting alternative perspective to IEEE floats.
Not sure why the article doesn't reference the following paper, which is a must-read for anyone working with floating point: <a href="https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html" rel="nofollow">https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html</a> (original: <a href="https://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf" rel="nofollow">https://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf</a>).
A really interesting review. Relative error makes sense in most cases, but when we subtract nearby values and the difference itself is what matters, absolute error may actually be the more useful measure.
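A minimal sketch of this point (the specific values are illustrative, not from the article): subtracting nearby quantities keeps the absolute error roughly the same size as the inputs' error, but the relative error of the difference can explode, the classic catastrophic cancellation.

```python
# Inputs: a carries a tiny relative error (~1e-9); b is exact.
a_true, b_true = 1.0, 1.0 - 1e-12
a = a_true * (1 + 1e-9)
b = b_true

diff_true = a_true - b_true        # true difference: 1e-12
diff = a - b                       # computed difference

abs_err = abs(diff - diff_true)    # still small in absolute terms (~1e-9)
rel_err = abs_err / abs(diff_true) # huge relative to the tiny true difference

print(f"absolute error: {abs_err:.2e}")
print(f"relative error: {rel_err:.2e}")
```

Here an input relative error of about 1e-9 becomes a relative error of roughly 1e3 in the difference, while the absolute error never grows, which is why absolute error is the more honest metric for this case.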