Rather than just adding more bits (doubles, double-doubles), I like the idea of moving the boundary between the exponent and the mantissa as the application requires, as well as adding the ability to track how uncertain the number actually is.<p><a href="http://sites.ieee.org/scv-cs/files/2013/03/Right-SizingPrecision1.pdf" rel="nofollow">http://sites.ieee.org/scv-cs/files/2013/03/Right-SizingPreci...</a><p>www.amazon.com/The-End-Error-Computing-Computational/dp/1482239868
Still, there has been nearly no progress on fast decimals, which are extremely important in financial applications.<p>I'd even say that the only place where floating point is necessary is in simulations (physics, 3D, analog signals — all of which should properly be done on GPUs.) Everything else (2D layouts, finance, data processing) is better served by either rationals or decimals.<p>We should remove floating point support from general-purpose CPUs and leave it to GPUs, where it belongs.
Is most of what you do numerical calculations? How much complexity are you willing to spend (in terms of development and maintenance programmer time) to buy a factor-of-2 speedup in your numerically heavy routines? Have you already optimised the hell out of those routines? Unless the answers are "yes", "a lot", and "yes", mixed precision arithmetic is not the answer for you.
> Arbitrary precision floating point arithmetic is available through [...] the core data type BigFloat in the new language Julia<p>Go 1.5 added a big.Float type.