The big issue here is what you're going to use your numbers for. If you're doing a lot of fast floating point operations for something like graphics or neural networks, these errors are fine: speed matters more than exact accuracy.<p>If you're handling money, or numbers representing some other real concern where accuracy matters (most likely, any number you intend to show to the user as a number), floats are not what you need.<p>Back when I started using Groovy, I was very pleased to discover that Groovy's default decimal number literal was translated to a BigDecimal rather than a float. For any sort of website, 9 times out of 10, that's what you need.<p>I'd really appreciate it if JavaScript had a native decimal number type like that.
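For illustration, Python's standard decimal module gives the same exact decimal behaviour as Groovy's BigDecimal default; a minimal sketch (the prices and tax rate here are made up):<p><pre><code> from decimal import ROUND_HALF_UP, Decimal

# Construct from strings, not floats, so the values stay exact.
subtotal = Decimal("19.99") * 3      # 59.97, exactly
tax = subtotal * Decimal("0.0825")   # 4.947525, exactly
total = (subtotal + tax).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(total)                         # 64.92</code></pre>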
MS Excel tries to be clever and disguise the most common places this is noticed.<p>Give it =0.1+0.2-0.3 and it will see what you are trying to do and return 0.<p>Give it anything even slightly more complicated, such as =(0.1+0.2-0.3), and the cleverness doesn't kick in; in this example it displays 5.55112E-17 or similar.
I remember in college when we learned about this and I had the thought, "Why don't we just store the numerator and denominator?", and threw together a little C++ class complete with (then novel, to me) operator-overloads, which implemented the concept. I felt very proud of myself. Then years later I learned that it's a thing people actually use: <a href="https://en.wikipedia.org/wiki/Rational_data_type" rel="nofollow">https://en.wikipedia.org/wiki/Rational_data_type</a>
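The same idea ships in Python's standard library as fractions.Fraction; a quick sketch:<p><pre><code> from fractions import Fraction

# Numerator and denominator are stored exactly, so arithmetic is exact.
x = Fraction(1, 10) + Fraction(2, 10)
print(x)                     # 3/10
print(x == Fraction(3, 10))  # True
print(float(x))              # 0.3 (approximate only once you leave the rationals)</code></pre>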
A thread from 2017.00000000000: <a href="https://news.ycombinator.com/item?id=14018450" rel="nofollow">https://news.ycombinator.com/item?id=14018450</a><p>2015.000000000000: <a href="https://news.ycombinator.com/item?id=10558871" rel="nofollow">https://news.ycombinator.com/item?id=10558871</a>
Also the subject of one of the most popular questions on StackOverflow: <a href="https://stackoverflow.com/q/588004/5987" rel="nofollow">https://stackoverflow.com/q/588004/5987</a>
While it's true that floating point has its limitations, this stuff about not using it for money seems overblown to me. I've worked in finance for many years, and it really doesn't matter that much. There are de minimis clauses in contracts that basically say "forget about the fractions of a cent". Of course it might still trip up your position checking code, but that's easily fixed with a tiny tolerance.
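A sketch of that kind of tolerance check in Python; the threshold here is an illustrative assumption, not a real de minimis figure:<p><pre><code> # Treat differences below the tolerance as zero.
TOLERANCE = 1e-9

def positions_match(booked: float, computed: float) -> bool:
    return abs(booked - computed) < TOLERANCE

print(0.1 + 0.2 == 0.3)                 # False
print(positions_match(0.1 + 0.2, 0.3))  # True</code></pre>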
That's one of the worst domain names ever. When the topic comes up, I always remember "that single-serving website with a domain name that looks like a number" and then take a surprisingly long time to find it.<p>I have written a test framework, so I am quite familiar with these problems, and comparing floating point numbers is a PITA. I had users complaining that 0.3 is not 0.3.<p>The code managing these comparisons turned out to be more complex than expected. The idea is that values are represented as ranges: for example, the IEEE-754 "0.3" is represented as ]0.299~, 0.300~[, which makes it equal to a true 0.3, because 0.3 falls within that range.
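A minimal sketch of that range idea in Python (assuming Python 3.9+ for math.nextafter; this models the approach described above, not the actual framework code). Each float stands for the half-ulp interval of reals that round to it, and two values match when their intervals overlap:<p><pre><code> import math

def interval(x: float):
    # The range of real numbers that round to the double x
    # (half an ulp on either side).
    lo = (x + math.nextafter(x, -math.inf)) / 2
    hi = (x + math.nextafter(x, math.inf)) / 2
    return lo, hi

def matches(a: float, b: float) -> bool:
    a_lo, a_hi = interval(a)
    b_lo, b_hi = interval(b)
    return a_lo <= b_hi and b_lo <= a_hi

print(matches(0.1 + 0.2, 0.3))  # True: the intervals touch</code></pre>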
This is a good thing to be aware of.<p>Also, floating point addition is not associative (you can run this in a JS console):<p>x=0;for (let i=0; i<10000; i++) { x+=0.0000000000000000001; }; x+=1<p>--> 1.000000000000001<p>x=1;for (let i=0; i<10000; i++) { x+=0.0000000000000000001; };<p>--> 1<p>Although most of the time (a+b)+c === a+(b+c) can be relied on. And for most of the stuff we do on the web it's fine!
I feel like it should really be emphasised that this occurs because of a mismatch between base-2 and base-10 representation.<p><i>0.1 = 1 × 10^-1</i>, but there is no integer significand <i>s</i> and integer exponent <i>e</i> such that <i>0.1 = s × 2^e</i>.<p>When this issue comes up, people often talk about fixing it with decimal floats or fixed-point numbers (using some <i>10^x</i> divisor). Changing the base solves the problem of representing <i>0.1</i>, but whatever base you choose, you will have unrepresentable rationals. Base 2 fails to represent <i>1/10</i> just as base 10 fails to represent <i>1/3</i>. All you're doing by using something based around the number <i>10</i> is supporting the numbers we expect to be able to write on paper, not solving some fundamental issue of number representation.<p>Also, binary-coded decimal is irrelevant here: the thing you want to change is <i>which</i> base is used, not how integers are represented in memory.
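You can see the mismatch directly in Python, where Fraction recovers the exact rational value of the double nearest to 0.1:<p><pre><code> from fractions import Fraction

# The stored value is a significand times a power of two, not 1/10:
print(Fraction(0.1))
# 3602879701896397/36028797018963968, i.e. 3602879701896397 * 2**-55</code></pre>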
One small tip about printf for floating point numbers. In addition to "%f", you can also print them using "%g". While the precision specifier in %f refers to digits after the decimal point, in %g the precision refers to the number of significant digits. The %g version is also allowed to use exponential notation, which often results in more pleasant-looking output than %f.<p><pre><code> printf("%.4g", 1.125e10) --> 1.125e+10
printf("%.4f", 1.125e10) --> 11250000000.0000</code></pre>
One of my favorite things about Perl 6 is that decimal-looking literals are stored as rationals. If you actually want a float, you have to use scientific notation.<p>Edit: Oh wait, it's listed in the main article under Raku. Forgot about the name change.
That’s only formatting.<p>The other, and more important, matter (which is not even mentioned) is comparison. E.g. in languages where such literals are rational by default (Perl 6):<p><pre><code> > 0.1+0.2==0.3
True
</code></pre>
Or APL (there the numbers really are floats, but comparison is special):<p><pre><code> 0.1+0.2
0.3
⎕PP←20 ⋄ 0.1+0.2
0.30000000000000004
(0.1+0.2) ≡ 0.3
1</code></pre>
The runner-up for length is FORTRAN with: 0.300000000000000000000000000000000039<p>And the length (but not value) winner is Go with: 0.299999999999999988897769753748434595763683319091796875
> It's actually pretty simple<p>The explanation then goes on to be very complex, e.g. "it can only express fractions that use a prime factor of the base".<p>Please don't say things like this when explaining things to people; it makes them feel stupid if the first explanation doesn't click.<p>I suggest instead "It's actually rather interesting".
PostgreSQL figured this out many years ago with its Decimal/Numeric type. It can handle numbers of any size and performs decimal fractional arithmetic exactly. How amazing for the 21st century! It is comically tragic to me that the mainstream programming languages are still so far behind, so primitive that they have no native exact number type that can handle fractions.
I still remember when I encountered this and nobody else in the office knew about it either. We speculated about broken CPUs and compilers until somebody found a newsgroup post that explained everything. It makes me wonder why we haven't switched to a better floating point model in the decades since. It would probably be slower, but a lot of problems could be avoided.
In JavaScript, you could use a library like decimal.js. For simple situations, could you not just convert the final result to a precision of 15 or less?<p><pre><code> > 0.1 + 0.2;
< 0.30000000000000004
> (0.1 + 0.2).toPrecision(15);
< "0.300000000000000"
</code></pre>
From Wikipedia: "If a decimal string with at most 15 significant digits is converted to IEEE 754 double-precision representation, and then converted back to a decimal string with the same number of digits, the final result should match the original string." --- <a href="https://en.wikipedia.org/wiki/Double-precision_floating-point_format" rel="nofollow">https://en.wikipedia.org/wiki/Double-precision_floating-poin...</a>
That is why I only used base 2310 for my floating point numbers :-). FWIW there are some really interesting decimal-format floating point libraries out there (see <a href="http://speleotrove.com/decimal/" rel="nofollow">http://speleotrove.com/decimal/</a> and <a href="https://github.com/MARTIMM/Decimal" rel="nofollow">https://github.com/MARTIMM/Decimal</a>), and many early computers had decimal as a native type (<a href="https://en.wikipedia.org/wiki/Decimal_computer#Early_computers" rel="nofollow">https://en.wikipedia.org/wiki/Decimal_computer#Early_compute...</a>)
This is part of the motivation for Swift Numerics, which is helping to make numerical computing in Swift much nicer.<p><a href="https://swift.org/blog/numerics/" rel="nofollow">https://swift.org/blog/numerics/</a>
This is a great shibboleth for identifying mature programmers who understand the complexity of computers, vs arrogant people who wonder aloud how systems developers and language designers could get such a "simple" thing wrong.
Interesting, I searched for "1.2-1.0" on google. The calculator comes up and it briefly flashes 0.19999999999999996 (and no calculator buttons) before changing to 0.2. This happens inconsistently on reload.
SWI-Prolog (listed in the article) also supports rationals:<p><pre><code> ?- A is rationalize(0.1 + 0.2), format('~50f~n', [A]).
0.30000000000000000000000000000000000000000000000000
A = 3 rdiv 10.</code></pre>
This specific issue nearly drove me insane years ago while debugging a SQL -> C++/Scala/OCaml transpiler. We were using the TPC-H benchmark as part of our test suite, and (unbeknownst to me) the validation parameters for one of the queries (Q6) triggered this behavior (0.6+0.1 != 0.7), but only in the C++/Scala targets. OCaml (around which we had built most of our debugging infrastructure) handled the math correctly...<p>Fun times.
I wish high level languages (specifically python) would default to using decimal, and only use a float when cast specifically. From what I understand that would make things slower, but as a higher level language you're already making the trade of running things slower to be easier to understand.<p>That said, it's one of my favorite trivia gotchas.
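What that wish amounts to, sketched with the explicit opt-in Python has today:<p><pre><code> from decimal import Decimal

print(0.1 + 0.2 == 0.3)                                   # False: literals are floats
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True: explicit Decimal
print(float(Decimal("0.1") + Decimal("0.2")))             # 0.3, cast back on demand</code></pre>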
Fixed-point calculation seems to be somewhat of a lost art these days.<p>It used to be widespread because floating point processors were rare and any floating point computation was costly.<p>That's no longer the case, and everyone seems to reach for floating point arithmetic immediately, without being fully aware of the limitations and/or without considering the precision needed.
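A small fixed-point sketch in Python, assuming non-negative amounts (the helper names are made up): keep integer cents and place the decimal point only when formatting.<p><pre><code> def to_cents(s: str) -> int:
    # "12.3" -> 1230; integers only, so the arithmetic is exact.
    dollars, _, frac = s.partition(".")
    return int(dollars) * 100 + int((frac + "00")[:2])

def fmt(cents: int) -> str:
    return f"{cents // 100}.{cents % 100:02d}"

print(fmt(to_cents("0.10") + to_cents("0.20")))  # 0.30, exactly</code></pre>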
As soon as I started developing real-life business apps, I started to dream about the POWER architecture, which is said to have hardware decimal type support. Java's BigDecimal solves the problem on x86, but it is at least an order of magnitude slower than FPU-accelerated types.
Not surprisingly, Common Lisp gets it right. I don’t mean this as snark (I don’t mean to imply you are a weenie if you don’t use Lisp), but just to show that it picked a different region of the language design domain.
Computer languages should default to fixed-precision decimals and offer floats via special syntax (e.g. “0.1f32”).<p>The status quo is that even Excel defaults to floats, and wrong calculations with dollars and cents are widespread.
The thing that surprised me the most (because I never learned any of this in school) was not just that some numbers can't be represented exactly, but that absolute precision falls off a cliff for very large numbers.
TL;DR: 0.1 in base 2 (binary) is the equivalent of 1/3 in base 10: a repeating fraction (0.333333...) that causes rounding issues.<p>This is why you should never test “X == 0.1” directly, because it might not evaluate the way you expect.
This has been posted here many times before.
It even got mocked on n-gate in 2017:
<a href="http://n-gate.com/hackernews/2017/04/07/" rel="nofollow">http://n-gate.com/hackernews/2017/04/07/</a>
Please check some of the online papers on Posit numbers and Unum computing, especially by John Gustafson. In general, Unums can represent more numbers, with less rounding, and fewer exceptions than floating points. Many software and hardware vendors are starting to do interesting work with Posits.
IEEE floating-point is disgusting. The cross-platform non-determinism and the illusion of accuracy are just wrong.<p>I use integer or fixed-point decimal if at all possible. If an algorithm needs floats, I convert it to work with integer or fixed-point decimal instead. (Or, where possible, I treat the decimal point as a "rendering concern": do the math in integers and leave it to the view to place the decimal point at whatever precision I've selected.)