This is a terrible idea. This is a catastrophically bad idea. How do you compare numbers? How do you figure out that 2.2 is less than exp(65)? You have to represent these as numbers at some point in the calculation, and in order to do that, you're probably going to be using either floating point or fixed point, which means it's still trivially possible to construct an error case that suffers the exact same problems normal floating point numbers have. Observe:<p><pre><code>double x = 1.0;              /* needs <math.h>; n is any iteration count */
for (int i = 0; i < n; ++i)
    x = 0.01 + sqrt(x);      /* each step nests another sqrt; no closed form */
</code></pre>
You can't simplify this, so it will simply loop until it hits the datatype boundary, and then get rounded into oblivion, because the underlying floating point representation will break in the exact same way. The only way to get rid of this would be to use an arbitrary number of bits for precision, in which case... just use an arbitrary precision library instead! This is <i>EXACTLY WHY WE HAVE THEM</i>.<p>Most equations that aren't trivial cannot be simplified. This datatype would only be useful for spoon-fed high school math problems. Furthermore, it costs so much CPU time to evaluate that you might as well just use an arbitrary precision library, which will probably be faster and just as effective at preventing edge cases.<p><a href="https://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic" rel="nofollow">https://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic</a>
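For what it's worth, a minimal sketch of that suggestion (Python's standard decimal module; the iteration count n=50 is arbitrary): an arbitrary precision library still rounds, it just lets you choose where:<p><pre><code># The same loop in hardware doubles vs. arbitrary precision (Python stdlib decimal).
from decimal import Decimal, getcontext
import math

n = 50                                 # arbitrary iteration count for the demo

x = 1.0                                # hardware double: rounds at ~16 significant digits
for i in range(n):
    x = 0.01 + math.sqrt(x)

getcontext().prec = 80                 # arbitrary precision: rounds at 80 digits instead
y = Decimal(1)
for i in range(n):
    y = Decimal("0.01") + y.sqrt()

print(x)
print(y)
</code></pre>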
> Floating point datatype is the de-facto standard for real world quantities. Whether it is a banking account [...]<p>Is there any bank which uses floating point for accounting?
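Hopefully not. A two-line illustration of why (the amounts are made up): binary floats can't represent most decimal fractions exactly, which is exactly what account balances are made of:<p><pre><code>from decimal import Decimal

# 0.1 and 0.2 have no exact binary representation, so the sum drifts.
print(0.1 + 0.2 == 0.3)                                       # False
print(0.1 + 0.2)                                              # 0.30000000000000004

# The usual fixes: integer minor units, or a decimal type.
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))   # True
</code></pre>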
related: there is a short series of exercises in SICP that explore the idea of building an interval arithmetic library. i.e. numerical values are represented by intervals [a, b] which encode their uncertainty/error:<p><a href="http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-14.html#%_sec_2.1.4" rel="nofollow">http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-14.html...</a><p>Exercise 2.14 and onwards point out that two expressions that are algebraically equivalent for perfectly accurate values stop being equivalent once we introduce uncertainty. This is only for a toy example expression with 2 variables. Suppose we want to solve a linear system of equations in 100s of thousands of variables. Is it tractable to track the uncertainty for all of those variables at once? Will their uncertainties be dependent? ...
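A rough sketch of the effect those exercises point at (toy Python rather than the book's Scheme; the resistor values are just illustrative tolerances): two algebraically identical formulas for parallel resistance produce different intervals:<p><pre><code># Toy interval arithmetic: a value is a (lo, hi) pair.
def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def mul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

def div(a, b):                     # assumes the divisor interval does not span zero
    return mul(a, (1.0 / b[1], 1.0 / b[0]))

one = (1.0, 1.0)
r1 = (6.12, 7.48)                  # e.g. 6.8 ohms +/- 10%
r2 = (4.465, 4.935)                # e.g. 4.7 ohms +/- 5%

par1 = div(mul(r1, r2), add(r1, r2))                  # r1*r2 / (r1 + r2)
par2 = div(one, add(div(one, r1), div(one, r2)))      # 1 / (1/r1 + 1/r2)

print(par1)    # the wider interval
print(par2)    # the tighter one -- same algebra, different uncertainty
</code></pre>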
Possibly of interest: the Frink programming language has arbitrary precision math and unit of measurement tracking, and has been around for more than a decade.<p><a href="http://futureboy.us/frinkdocs/index.html" rel="nofollow">http://futureboy.us/frinkdocs/index.html</a>
Interesting take on the floating point problem, but it seems as if the writer isn't well versed in the centuries-old solutions to this problem, namely refinement calculations for linear algebra and, more generally, iterated numerical methods for nonlinear systems -- and those are the places where precision matters, where you are trying to calculate a figure to a given accuracy.<p>Note, however, that solutions to many problems may be, in a sense, 'non-analytic': there may be no finite composition of elementary functions on a given rational number which yields the solution.<p>Also, iterative answers are usually the only viable way to reach solutions; they're usually much faster than the exact solution (or the floating-point-precision-limited solution), and you can always control how good your solution is.<p>Observation: so in a sense what is used in practice may indeed be very close to the Kolmogorov complexity of the solutions - the representation R=IterativeProblemSolve(Problem,ClosestSolution), where we publish the problem and the desired solution! (assuming we are efficiently describing the problem)
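To make "refinement" concrete, here is a sketch of classical iterative refinement for Ax=b (numpy; real implementations typically accumulate the residual in higher precision, which this toy version skips):<p><pre><code>import numpy as np

def refine(A, b, iters=3):
    x = np.linalg.solve(A, b)          # initial, rounded solution
    for _ in range(iters):
        r = b - A @ x                  # residual of the current estimate
        x = x + np.linalg.solve(A, r)  # solve for a correction and apply it
    return x

A = np.array([[1e4, 1.0], [1.0, 2e-4]])
b = np.array([1.0, 2.0])
x = refine(A, b)
print(np.linalg.norm(A @ x - b))       # residual driven down toward rounding level
</code></pre>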
Floating point is not so bad. Yes, when you need fixed precision, such as in accounting, you should avoid it. But with numerical computations, if you run into problems because of float limitations, you're doing it wrong.<p>The first rule of numerics is not to combine variables of very different magnitude. His first example has coefficients that differ by almost 200 orders of magnitude. This example is totally outrageous, but still, what any reasonable person would do is introduce some scaling into the equation.<p>Yes, you have to think about scales, but you already do that. You never write programs directly in meters and seconds; you choose scaled, dimensionless quantities. And unless you choose the scales very badly, you won't get any problems from floats.
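A small made-up illustration of that kind of scaling: the naive formula overflows long before the answer does, while factoring out a scale keeps everything in range:<p><pre><code>import math

x, y = 3e200, 4e200                     # exaggerated magnitudes, like the article's example

naive = math.sqrt(x*x + y*y)            # inf: x*x overflows even though the answer fits

m = max(abs(x), abs(y))                 # pick a scale, work with dimensionless ratios
scaled = m * math.sqrt((x/m)**2 + (y/m)**2)

print(naive)                            # inf
print(scaled)                           # 5e+200
</code></pre>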
Hard to comment without seeing more concrete examples of his approach. But it makes me wonder whether processors are now so fast that, for a large class of numerical problems, the default should be to give up calculational speed in return for eliminating some of the trickiness involved with floating point.
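One existing version of that trade-off, sketched with Python's standard fractions module (the example sum is arbitrary): exact rational arithmetic is far slower than hardware floats, but this class of drift simply disappears:<p><pre><code>from fractions import Fraction

f = sum(0.1 for _ in range(10))              # hardware floats: ten dimes don't make a dollar
r = sum(Fraction(1, 10) for _ in range(10))  # exact rationals: they do

print(f == 1.0, f)                           # False 0.9999999999999999
print(r == 1, r)                             # True 1
</code></pre>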
How about continued fractions? Unlike floating point, they can represent any rational number in finite memory. Using generators, you can even represent many irrational numbers, like square roots and pi, in finite memory.<p>Richard Schroeppel and Bill Gosper explored continued fractions in the '70s:<p><a href="http://www.inwap.com/pdp10/hbaker/hakmem/cf.html" rel="nofollow">http://www.inwap.com/pdp10/hbaker/hakmem/cf.html</a>
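A tiny sketch of both claims in plain Python (nothing like Gosper's full machinery): the Euclidean algorithm yields the finite expansion of any rational, and a generator stands in for sqrt(2) = [1; 2, 2, 2, ...]:<p><pre><code>from fractions import Fraction
from itertools import islice, repeat, chain

def cf_of_rational(p, q):
    """Finite continued fraction of p/q via the Euclidean algorithm."""
    terms = []
    while q:
        a, r = divmod(p, q)
        terms.append(a)
        p, q = q, r
    return terms

def sqrt2_terms():
    """sqrt(2) = [1; 2, 2, 2, ...] -- infinite, but generated lazily."""
    return chain([1], repeat(2))

def convergent(terms):
    """Collapse a finite prefix of terms back into a single exact fraction."""
    x = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        x = a + 1 / x
    return x

print(cf_of_rational(415, 93))                              # [4, 2, 6, 7]
print(float(convergent(list(islice(sqrt2_terms(), 12)))))   # ~1.41421356...
</code></pre>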
Maybe it's just me, but calling something in current use in billions of computers around the globe "obsolete" is . . . somewhat of a stretch of the word.
Floating point will never be obsolete: it is a log-scale datatype, and log-scale datatypes represent most natural quantities well.<p>The only place the shoe doesn't fit is where you need a minimum accuracy. In that case what you should be doing is using integers to represent your minimum quantifiable unit.<p>For example, you could represent currency in millicents to give you respectably accurate rounding. Not accurate enough? Microcents, then. Now you don't have enough range? Good, you're thinking about your requirements now; floating point DEFINITELY wouldn't have worked.
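A sketch of that "minimum quantifiable unit" approach (the helper names and the half-up rounding policy are illustrative, not any standard): prices as integer millicents, with rounding confined to one deliberate spot:<p><pre><code>MILLICENTS_PER_DOLLAR = 100_000            # 1 dollar = 100 cents = 100,000 millicents

def to_millicents(amount: str) -> int:
    dollars, _, frac = amount.partition(".")
    frac = (frac + "00000")[:5]            # pad/truncate to 5 fractional digits
    return int(dollars) * MILLICENTS_PER_DOLLAR + int(frac)

def add_tax(amount_mc: int, rate_ppm: int) -> int:
    # Tax rate in parts per million; round half up at this single, chosen point.
    return amount_mc + (amount_mc * rate_ppm + 500_000) // 1_000_000

price = to_millicents("19.99")             # 1_999_000 millicents
total = add_tax(price, 82_500)             # 8.25% tax
print(total)                               # 2_163_918 millicents, i.e. exactly $21.63918
</code></pre>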