> does not interpret the 0.1 as the real number<p>The focus should be on _rational_ numbers. This particular example is all about representation error - precision is implicated, but it is not the cause.<p>Ignore precision for a second: the inputs 0.1 and 0.2 are intended to be _rational_. That means they can be represented finitely and accurately (unlike an irrational number like pi). And when written as <i>fractions</i>, rationals can _always_ be represented finitely and accurately in <i>any</i> base:<p><pre><code> 1/10=
base 10: 1/10
base 2: 1/1010
2/10=
base 10: 2/10
base 2: 10/1010
</code></pre>
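This is easy to check with Python's stdlib <i>fractions</i> module, which stores values as exact fractions and so has no representation error at all - a minimal sketch:<p><pre><code> from fractions import Fraction

a = Fraction(1, 10)              # exactly 1/10
b = Fraction(2, 10)              # exactly 2/10 (normalized to 1/5)
print(a + b)                     # 3/10 - exact, no representation error
print(a + b == Fraction(3, 10))  # True
</code></pre>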
The neat thing about rationals is that under the four basic arithmetic operations, two rational inputs always produce a rational output :) This is relevant here: 1/10 and 2/10 are both rational, so there is no fundamental reason their sum cannot be exactly 3/10. In a format with no representation error (i.e. fractions), the output is rational for all rational inputs (given enough precision, which is not a realistic concern in this case). When we add these particular numbers in our heads, however, almost everyone uses decimals (base-10 floating point), and in <i>this particular case</i> that doesn't cause a problem - but what about 1/3?<p>This is the key: rationals cannot always be represented finitely in floating-point formats, but that is merely an artifact of the format and the base. Different bases have different capabilities:<p><pre><code> 1/10=
base 10: 0.1
base 2: 0.00011001100110011r
2/10=
base 10: 0.2
base 2: 0.00110011001100110r
1/3=
base 10: 0.33333333333333333r
base 2: 0.01010101010101010r
</code></pre>
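Those expansions are just long division carried out in the target base. A tiny Python sketch reproduces them - the <i>expand</i> helper is mine, purely for illustration:<p><pre><code> from fractions import Fraction

def expand(frac, base, digits):
    # long division: emit `digits` digits of `frac` after the radix point
    frac -= int(frac)
    out = []
    for _ in range(digits):
        frac *= base
        d = int(frac)
        out.append(str(d))
        frac -= d
    return "0." + "".join(out)

print(expand(Fraction(1, 10), 2, 17))   # 0.00011001100110011...
print(expand(Fraction(2, 10), 2, 17))   # 0.00110011001100110...
print(expand(Fraction(1, 3), 10, 17))   # 0.33333333333333333...
print(expand(Fraction(1, 3), 2, 17))    # 0.01010101010101010...
</code></pre>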
IEEE 754 format is a bit more complicated than the above, but this is sufficient to make the point.<p>If you can grok that key point (representation error), here's the real understanding of this problem:<p>Deception 1: The parser has to convert '0.1' from decimal into base 2, which produces the periodic significand '1001100110011...' (not accurately stored at any precision)... yet when you ask for it back, the formatter magically converts it to '0.1'. Why? Because the parser and formatter have symmetrical error :) This is kinda deceptive, because it makes it look like storage is accurate if you don't know what's going on under the hood.<p>Deception 2: Many combinations of arithmetic on simple rational decimal inputs also come back from the formatter looking like tidy rational decimals, which furthers the illusion. For example, neither 0.1 nor 0.3 is representable in base 2, yet 0.1 + 0.3 will be <i>formatted</i> as '0.4'. Why? It just happens that the arithmetic on those inaccurate representations added up to the same error that the parser produces when parsing '0.4', and since the parser and formatter have symmetrical error, the output is a tidy rational decimal.<p>Deception 3: Most of us grew up with calculators, or software calculator programs. These usually round displayed values to 10 significant decimal digits by default, which is quite a bit less than the maximum decimal output of a double. This conceals the small representation errors the formatter would otherwise reveal after arithmetic on rational decimal inputs - which makes calculators look infallible when doing simple math.
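Deception 1 is easy to see in Python, where the stdlib <i>decimal.Decimal</i> constructor will show the value that actually got stored:<p><pre><code> from decimal import Decimal

x = 0.1
print(x)           # 0.1 - the formatter's error mirrors the parser's
print(Decimal(x))  # 0.1000000000000000055511151231257827021181583404541015625
</code></pre>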
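And Deception 2: the errors happen to cancel for 0.1 + 0.3, but not for the famous 0.1 + 0.2:<p><pre><code> print(0.1 + 0.3 == 0.4)  # True  - the sum lands on the same bits as parsing '0.4'
print(0.1 + 0.2 == 0.3)  # False - here the errors don't cancel
print(0.1 + 0.2)         # 0.30000000000000004
</code></pre>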
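Deception 3 is just output rounding; one line of Python mimics a 10-digit calculator display:<p><pre><code> x = 0.1 + 0.2
print(x)            # 0.30000000000000004 - full round-trip output
print(f"{x:.10g}")  # 0.3 - rounded to 10 significant digits, calculator-style
</code></pre>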