Because, of course, floating point addition and multiplication are not associative. This turns out to be <i>surprisingly</i> easy to demonstrate:<p><pre><code> 0.1 + (0.2 + 0.3) = 0.6
0.1 + 0.2 + 0.3 = 0.6000000000000001
</code></pre>
and the same for multiplication:<p><pre><code> 0.1 * 0.2 * 0.3 = 6.000000000000001e-3
0.1 * (0.2 * 0.3) = 6.0e-3
</code></pre>
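If you want to poke at this yourself, the addition example is easy to reproduce in a Python REPL (Python floats are IEEE 754 doubles; the digits printed differ slightly from the snippets above only because every language formats floats a little differently):<p><pre><code> >>> 0.2 + 0.3            # happens to land exactly on 0.5
 0.5
 >>> 0.1 + (0.2 + 0.3)    # rounds to the double nearest 0.6
 0.6
 >>> 0.1 + 0.2            # already picks up rounding error...
 0.30000000000000004
 >>> (0.1 + 0.2) + 0.3    # ...which carries into the final sum
 0.6000000000000001
</code></pre>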
It actually isn't "surprising" if you understand how the format works. It essentially uses scientific notation, but in binary, with a fixed number of bits for the mantissa and the exponent, plus a few tweaks for better behavior at the limits (like denormal numbers for gradual underflow). This means it can't exactly represent numbers that are trivial to write in decimal, like 0.1, for the same reason we can't write 1/3 as a finite decimal: 1/10 has no finite binary expansion. The format is designed to manage this as well as possible with the fixed number of bits at its disposal, but we still inevitably run into these issues.<p>Of course, most programmers only have a vague idea of how floating point numbers work. (I'm certainly among them!) It's very easy to run into problems. And even with a better understanding of the format, it's still very difficult to predict exactly what will happen in more complex expressions.<p>A really cool aside is that there are some relatively new toys we can use to model floating point numbers in interesting ways. In particular, several SMT solvers, including Z3[1], now support a "theory of floating point" which lets us exhaustively verify and analyze programs that use floating point numbers. I haven't seen any applications taking advantage of this directly, but I personally find it very exciting and will probably try it for debugging the next time I have to work with numeric code.<p>A little while back, there was an article about how you can test floating point functions by enumerating every single 32-bit float. This is a great way of thinking! However, people were right to point out that it doesn't really scale once you have more than one float input, or if you want to talk about doubles. This is why SMT solver support for floating point is so exciting: it makes that sort of approach practical even for programs that use lots of doubles. So you <i>can</i> test every single double, or every single pair of doubles, or more, just by being clever about how you do it. (Rough sketches of both the brute-force enumeration and a Z3 query are below, after the footnote.)<p>I haven't tried the floating point theories myself, so I have no idea how well they scale. I suspect they're not significantly worse than ordinary bitvectors (i.e. signed/unsigned integers), and those scale really well to larger sizes and multiple variables. Assuming the FP support scales even a fraction as well, that should be enough to verify some pretty non-trivial functions in practice!<p>[1]: <a href="http://z3.codeplex.com/" rel="nofollow">http://z3.codeplex.com/</a>
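To make the brute-force idea concrete, here's a minimal sketch in Python of enumerating every single 32-bit float by reinterpreting each of the 2^32 bit patterns (the helper name is just something I made up for illustration):<p><pre><code> import math
 import struct

 def every_float32():
     # All 2**32 single-precision bit patterns, reinterpreted as floats.
     # Includes NaNs, infinities, and both zeros; slow in pure Python,
     # but it really does cover every possible input.
     for bits in range(2**32):
         yield struct.unpack('=f', struct.pack('=I', bits))[0]

 # e.g. check a property over every finite 32-bit float:
 # for x in every_float32():
 #     if math.isfinite(x):
 #         assert abs(x) >= 0.0
</code></pre>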
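And here's roughly what handing the associativity question to Z3 looks like through its Python bindings (z3py). Treat it as a sketch: the variable names and the choice of single-precision floats are just for illustration, not the only way to set it up.<p><pre><code> from z3 import (FP, Float32, RNE, Solver, fpAdd,
                 fpIsInf, fpIsNaN, Not, sat)

 x, y, z = FP('x', Float32()), FP('y', Float32()), FP('z', Float32())
 rm = RNE()  # round to nearest, ties to even

 s = Solver()
 # Keep the counterexample interesting: no NaNs or infinities.
 for v in (x, y, z):
     s.add(Not(fpIsNaN(v)), Not(fpIsInf(v)))
 # Ask for inputs where addition fails to be associative.
 s.add(fpAdd(rm, fpAdd(rm, x, y), z) != fpAdd(rm, x, fpAdd(rm, y, z)))

 if s.check() == sat:
     print(s.model())
</code></pre>Asking for a counterexample like this is the easy direction; the more interesting use is asserting the negation of a property you actually care about and getting <i>unsat</i> back, which amounts to checking it for every possible input without enumerating them one by one.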