Because, of course, floating point addition and multiplication are not associative. This turns out to be <i>surprisingly</i> easy to demonstrate:<p><pre><code> 0.1 + (0.2 + 0.3) = 0.6
0.1 + 0.2 + 0.3 = 0.6000000000000001
</code></pre>
and the same for multiplication:<p><pre><code> 0.1 * 0.2 * 0.3 = 6.000000000000001e-3
0.1 * (0.2 * 0.3) = 6.0e-3
</code></pre>
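<p>For what it's worth, you can see where the difference creeps in by printing the exact values the intermediate doubles actually hold. A quick Python sketch (Decimal(x) shows the full expansion; any IEEE-754 double behaves the same way):<p><pre><code> from decimal import Decimal

 print(Decimal(0.2 + 0.3))   # 0.5 -- the rounding errors in 0.2 and 0.3 happen to cancel
 print(Decimal(0.1 + 0.2))   # 0.3000000000000000444... -- already rounded up by one ulp
 print(0.1 + (0.2 + 0.3))    # 0.6
 print((0.1 + 0.2) + 0.3)    # 0.6000000000000001
</code></pre>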
It actually isn't "surprising" if you understand how the format works. It essentially uses scientific notation but in binary, with a set number of bits for both the mantissa and the exponent as well as a few changes thrown in for better behavior at its limits (like denormalization). This means that it can't directly express numbers which are very easy to write in decimal form, like 0.1, just like we can't express 1/3 as a finite decimal. It's designed to manage this as well as possible with the small number of bits at its disposal, but we still inevitably run into these issues.<p>Of course, most programmers only have a vague idea of how floating point numbers work. (I'm certainly among them!) It's very easy to run into problems. And even with a better understanding of the format, it's still very difficult to predict exactly what will happen in more complex expressions.<p>A really cool aside is that there are some relatively new toys we can use to model floating point numbers in interesting ways. In particular, several SMT solvers including Z3[1] now support a "theory of floating point" which lets us exhaustively verify and analyze programs that use floating point numbers. I haven't seen any applications taking advantage of this directly, but I personally find it very exciting and will probably try using it for debugging the next time I have to work with numeric code.<p>A little while back, there was an article about how you can test floating point functions by enumerating every single 32-bit float. This is a great way of thinking! However, people were right to point out that this does not really scale when you have more than one float input or if you want to talk about doubles. This is why SMT solvers supporting floating point numbers is so exciting: it makes this sort of approach practical even for programs that use lots of doubles. So you <i>can</i> test every single double or every single pair of doubles or more, just by being clever about how you do it.<p>I haven't tried using the floating point theories, so I have no idea how they scale. However, I suspect they are not significantly worse than normal bitvectors (ie signed/unsigned integers). And those scale really well to larger sizes or multiple variables. Assuming the FP support scales even a fraction as well, this should be enough to practically verify pretty non-trivial functions!<p>[1]: <a href="http://z3.codeplex.com/" rel="nofollow">http://z3.codeplex.com/</a>
Always a great read: "What Every Computer Scientist Should Know About Floating-Point Arithmetic"<p>PDF rendering: <a href="http://www.cse.msu.edu/~cse320/Documents/FloatingPoint.pdf" rel="nofollow">http://www.cse.msu.edu/~cse320/Documents/FloatingPoint.pdf</a><p>HTML rendering: <a href="http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html" rel="nofollow">http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.ht...</a>
One thing I've always wondered about is <i>why</i> we use floating point arithmetic at all, instead of fixed point math with explicitly specified ranges (say, "here I need 20 bits for the integer part and 44 for the fractional part"). What is the practical value of floating point that justifies dealing with all the complexity and conceptual problems it introduces?
> Um... you know that a*a*a*a*a*a and (a*a*a)*(a*a*a) are not the same with floating point numbers, don't you?<p>Why would so many people upvote such a condescending comment?
The non-associativity of addition is obvious, but for multiplication, while I understand why it does not always give the same answer, I don't see why a change of order would change the accuracy.
There is a discussion from today and yesterday on llvm-dev that deals with floating point guarantees and optimizations: <a href="http://thread.gmane.org/gmane.comp.compilers.clang.devel/35858/" rel="nofollow">http://thread.gmane.org/gmane.comp.compilers.clang.devel/358...</a>
floating point in short: computer representation of scientific notation(1) with sign bit, exponent(2) and coefficient(3) crammed in the same word.<p>1. <a href="http://en.wikipedia.org/wiki/Scientific_notation" rel="nofollow">http://en.wikipedia.org/wiki/Scientific_notation</a><p>2. base 2, biased by 127, 8 bits (IEEE float)<p>3. base 2, implicit leading 1, 23 bits (IEEE float)<p><a href="http://en.wikipedia.org/wiki/Single-precision_floating-point_format" rel="nofollow">http://en.wikipedia.org/wiki/Single-precision_floating-point...</a>
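<p>For example, here is a small Python sketch that pulls those three fields out of a single-precision 0.1 (bit positions follow the Wikipedia layout above):<p><pre><code> import struct

 # Reinterpret the 4 bytes of a float32 as an unsigned integer
 bits = struct.unpack('>I', struct.pack('>f', 0.1))[0]
 sign     = bits >> 31            # 1 bit
 exponent = (bits >> 23) & 0xFF   # 8 bits, biased by 127
 fraction = bits & 0x7FFFFF       # 23 bits, implicit leading 1 not stored
 print(sign, exponent - 127, hex(fraction))   # 0 -4 0x4ccccd, i.e. +1.6 * 2**-4 (rounded)
</code></pre>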