I have to question the value of CS educations when a post of this nature pops up every couple of weeks, as if it is news that floating-point arithmetic, as implemented today, is by its nature an approximation, that this becomes more noticeable the fewer bits you have to play with, and that it bites whenever you work with values that can't be represented neatly.<p>Financial arithmetic? Convert to the smallest unit and use integers, or the currency data type du jour in your language, and don't act surprised when operations on 32-bit floating point don't yield the intuitively correct values.<p>If you understood the representation format, you'd understand why.
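The integer-cents approach in a minimal Python sketch (the amounts are illustrative):

```python
# Summing $0.10 ten times with binary floats accumulates rounding error,
# while summing the same amounts as integer cents stays exact.
prices_dollars = [0.1] * 10
prices_cents = [10] * 10

total_float = sum(prices_dollars)   # not exactly 1.0
total_cents = sum(prices_cents)     # exactly 100 cents

print(total_float == 1.0)           # False
print(total_cents == 100)           # True
print(total_cents / 100)            # format back to dollars only at the end
```

Only convert back to a fractional display form at the last moment, for output.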
Apple's Calculator app used to have an odd floating point rounding behavior (I filed a bug report and they fixed it):<p><pre><code> 14479.14
-
152.36
=
(result is 14326.78)
1143
/
78
=
(result is 14.6538461538461)
14479.14
-
152.36
=
(result is 14326.7799999999884)
</code></pre>
Note that the first and third calculations are the same, yet they displayed different results!<p>I never understood this bug. I understand floating point, so I understand that some numbers are not exactly representable. However, it should at least be consistent: the same calculation should give the same result every time.
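For what it's worth, IEEE 754 arithmetic itself is deterministic: the same operation on the same operands always produces the same bits, so the inconsistency must have come from the app (e.g. different display-rounding or internal-precision paths), not from the floats. A quick Python sanity check:

```python
# The same double-precision subtraction always yields an identical result.
a = 14479.14 - 152.36
b = 14479.14 - 152.36
assert a == b          # bit-for-bit the same every time
print(repr(a))         # the shortest decimal string that round-trips
```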
Welcome to the best lectures of your life:<p><a href="http://webcast.berkeley.edu/course_details_new.php?seriesid=2010-B-26353&semesterid=2010-B" rel="nofollow">http://webcast.berkeley.edu/course_details_new.php?seriesid=...</a><p>also available via iTunes U. I'm currently listening to them on my commute. Note, if you do go through the whole course, you'll need to listen to a different year for lecture 24 or so -- that one is skipped. One of the highlights of my day is coming home and looking up what he's been talking about.<p>(Oh, and of course you can just listen to the two floating-point lectures. It has to do with the non-uniform -- or at least non-linearly uniform -- mapping of floating-point numbers, represented with a significand/mantissa and a base-2 exponent in hardware, onto the set of real numbers. The gap between adjacent representable numbers, as you tick up the odometer with each bit, varies depending on where you are on the number line: with big numbers it's much larger, with small numbers it's pretty minimal, but not unnoticeable, as seen with this example. Does that make sense? Maybe I'm off about this... Anyway, I still obviously recommend the lectures. And now, I'm going to read up more on ALUs and MUXes.)
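That non-uniform spacing can be observed directly; Python 3.9+ exposes the gap to the next representable double as math.ulp (a small sketch):

```python
import math

# The gap to the next representable double grows with magnitude.
print(math.ulp(1.0))     # 2**-52, about 2.22e-16
print(math.ulp(1e6))     # noticeably larger gap
print(math.ulp(1e16))    # gap of about 2.0: odd integers get skipped here

assert math.ulp(1.0) == 2**-52
assert math.ulp(1e16) > 1.0
```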
<a href="http://0.30000000000000004.com/" rel="nofollow">http://0.30000000000000004.com/</a><p>Because why not? I've populated it with some languages that I have convenient access to an interpreter for. If you post/send me .1 + .2 in any other language, I'll try to put it up.
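For the record, here is Python's entry:

```python
# Python 3 prints the shortest decimal string that round-trips to the float,
# which exposes the error in 0.1 + 0.2 rather than hiding it.
print(0.1 + 0.2)             # 0.30000000000000004
print(0.1 + 0.2 == 0.3)      # False
```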
This is the type of thing that makes mathematicians flying-tackle computer scientists.<p>The CS guy: "Hey, it's round-off error. Get used to it."<p>The mathematician: "Fix IT!!"
From Slashdot: <a href="http://developers.slashdot.org/story/10/05/02/1427214/What-Every-Programmer-Should-Know-About-Floating-Point-Arithmetic?from=rss&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+Slashdot/slashdot+(Slashdot)" rel="nofollow">http://developers.slashdot.org/story/10/05/02/1427214/What-E...</a><p>Every programmer forum gets a steady stream of novice questions about numbers not 'adding up.'...
Leaving egos aside: while the reasons for this are obvious to anyone who learned programming with binary and other old-school fundamentals, there is a whole class of programmer that has learned what is needed to build web sites, and unless you have large-scale issues (which you solve by hiring someone with old-school CS skills) you can be competent and successful without ever knowing how languages implement floating-point math. I think it is wrong to belittle these people; I have worked with them, and I have found that their skills are often more influential in the success of a product than those of the CS guy in the back room tweaking the slab allocator. Times have changed. It's no longer crucial for everyone who deserves the title 'developer' to know about these kinds of language nuances.
Brendan Eich (creator of JS) discussed this here: <a href="http://www.aminutewithbrendan.com/pages/20101025" rel="nofollow">http://www.aminutewithbrendan.com/pages/20101025</a> as it pertains to JS, but it is similar for other languages that use IEEE double-precision numbers.
People often say "Just use (x) decimal arithmetic system for important stuff like finances."<p>Out of curiosity, I'm wondering how much trade you would have to be doing for floating-point imprecision to cause an actual problem.<p>Taking 0.2+0.1 as an example and figuring an imprecision of $0.00000000000000004 per $0.30, figuring a loss of one cent as being significant, I have 0.01/(0.00000000000000004 / 0.3) = 7.5e13, or... seventy-five trillion dollars?<p>Never mind that you're as likely to get 0.6+0.1 = 0.69999999999999996, which should roughly cancel out the error over time.<p>This is basically just an aesthetic problem in finance, yes?
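The back-of-the-envelope figure checks out (a sketch; the 4e-17-of-error-per-$0.30 rate and the one-cent threshold are taken straight from the comment above):

```python
# Error of ~4e-17 dollars per $0.30 of turnover; how much trade before
# the accumulated error reaches one cent (ignoring cancellation)?
error_per_dollar = 4e-17 / 0.30
turnover_to_lose_a_cent = 0.01 / error_per_dollar
print(turnover_to_lose_a_cent)   # roughly 7.5e13, i.e. $75 trillion
```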
If you really want precision and you're dealing only with rational numbers, it's better to maintain a struct rational { u64 numerator, denominator; }; and do all calculations with it.
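Python ships this idea as fractions.Fraction; unlike the fixed u64 fields above, its numerator and denominator are arbitrary-precision integers, so it trades speed for freedom from overflow. A sketch:

```python
from fractions import Fraction

# Exact rational arithmetic: no representation error at all.
a = Fraction(1, 10) + Fraction(2, 10)
print(a)                         # 3/10
print(a == Fraction(3, 10))      # True

# Contrast with binary floating point:
print(0.1 + 0.2 == 0.3)          # False
```

The usual caveat applies to any rational type: denominators can grow without bound under repeated arithmetic, so it only pays off when your inputs really are rationals.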
I remember the discussion of floating point and of BCD (we had assembler on an IBM/370), but I get the feeling BCDs are not talked about much anymore, given some of the discussions I've had over the years. A related thing to watch out for is any arithmetic with units and how you deal with fractional conversions. Handled poorly, this could cost you a lot of money or mess up an inventory.
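BCD hardware is rare now, but software decimal arithmetic plays the same role; Python's decimal module (based on IBM's General Decimal Arithmetic Specification) is one example. A sketch:

```python
from decimal import Decimal

# Decimal represents 0.1 exactly, so the classic surprise disappears.
print(Decimal('0.1') + Decimal('0.2'))                     # 0.3
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))   # True

# But construct from strings: Decimal(0.1) faithfully copies the
# binary double's error instead of the decimal value you meant.
print(Decimal(0.1))
```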
<a href="http://docs.sun.com/source/806-3568/ncg_goldberg.html" rel="nofollow">http://docs.sun.com/source/806-3568/ncg_goldberg.html</a><p>However, my problem is that modern languages hide these things from you.