What's the whole reason behind distinguishing between integers and float/doubles?<p>units = 2<p>price = 3.17<p>2 * 3.17 = ERROR<p>It blows my mind that so much effort has been put into things like functions taking functions as arguments, and the characteristics of classes - yet a computer can't handle basic calculator math out of the box. PHP, Ruby, Swift, OCaml.. what gives?<p>What is the complexity behind this?
Most programming languages don't even handle integers well: their integers are more like the ring of integers modulo 2^n.[1] That's not true for every language, but it is for languages like C and C++. To implement mathematical integers, you have to let them grow in size, more like strings, but that engineering decision hurts program speed and raises memory usage. Some languages do it, though. Common Lisp, for example, uses arbitrary-precision integers, so you can produce very large numbers in a calculation without any special effort such as pulling in an external arbitrary-precision library like GMP.[2]<p>Computer numbers are not math numbers, and it's better to always remember that fact.<p>[1] <a href="https://en.wikipedia.org/wiki/Multiplicative_group_of_integers_modulo_n" rel="nofollow">https://en.wikipedia.org/wiki/Multiplicative_group_of_intege...</a><p>[2] <a href="https://gmplib.org/" rel="nofollow">https://gmplib.org/</a>
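To make the modulo-2^n point concrete, here's a minimal sketch in Python (chosen only for illustration, since the thread spans several languages). The `to_int32` helper is a hypothetical stand-in for what a C-style fixed-width integer does implicitly, while Python's own `int` behaves like a Lisp bignum and just grows.

```python
def to_int32(n):
    """Reduce n the way a C-style signed 32-bit int would: modulo 2^32."""
    n &= 0xFFFFFFFF                    # keep only the low 32 bits
    return n - 2**32 if n >= 2**31 else n

print(to_int32(2**31 - 1))             # 2147483647 (INT32_MAX)
print(to_int32(2**31))                 # -2147483648: wrapped around the ring mod 2^32

# Python's built-in int is arbitrary precision, like Common Lisp bignums:
print(2**31)                           # 2147483648 -- it simply grows
print(2**200)                          # a 61-digit number, no external library needed
```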
This has been answered pretty extensively on Q&A sites like StackOverflow and Quora, so I'd recommend Googling it for the most thorough, specific, and well-vetted answers.<p>Here are links to get you started:<p>- <a href="https://stackoverflow.com/questions/1089018/why-cant-decimal-numbers-be-represented-exactly-in-binary" rel="nofollow">https://stackoverflow.com/questions/1089018/why-cant-decimal...</a><p>- <a href="https://stackoverflow.com/questions/5098558/float-vs-double-precision" rel="nofollow">https://stackoverflow.com/questions/5098558/float-vs-double-...</a><p>> <i>It blows my mind that so much effort has been put into things like functions taking functions as arguments, and the characteristics of classes - yet a computer can't handle basic calculator math out of the box.</i><p>Much effort <i>was</i> put into it, but no amount of work allows a computer to violate the laws of math. The fact that computers use binary and have limited memory isn't something that can be hand-waved away.<p>You're also talking about two totally different types of problems. Classes make programs easier to organize (at least theoretically) -- it's easy to change the syntax of a language, but hard to know exactly how it should look to be the easiest to use.
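To give a taste of what those links explain, here's a quick illustration (Python used only as a stand-in for any language with binary floating point): the literal 3.17 is silently replaced by the nearest IEEE 754 double, so exactness is lost before any arithmetic even happens.

```python
from decimal import Decimal

# Decimal(float) shows the exact decimal expansion of the double the machine stores.
print(Decimal(3.17))       # 3.16999999999999992894..., not exactly 3.17
print(0.1 + 0.2)           # 0.30000000000000004
print(0.1 + 0.2 == 0.3)    # False: the classic symptom of binary representation error
```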
> What's the whole reason behind distinguishing between integers and float/doubles?<p>Because historically CPUs worked efficiently natively on integers, and everything else was software. (Modern CPUs also work efficiently natively on floats, but with a different representation and different operations.)<p>Note that not all languages (even on your list) have the problem you describe; plenty of them do automatic coercion with numeric operators, so, e.g., float times int works (performing a float operation). Ruby, for instance, does this.<p>And many also do the calculation you present exactly (not merely without errors) because they treat decimal literals as exact numbers, using either a decimal or a rational type rather than binary floating point. Perl 6 and Scheme, for instance. I think Haskell can as well, if the right default for numeric literal types is set.<p>> What is the complexity behind this?<p>It's not really complex; it's a question of whether you prioritize making performance simple or making accuracy and generality simple, with a dash of programming history shaping expectations.
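As a rough sketch of what "exact" numeric types look like in practice (Python here purely for illustration; Perl 6 and Scheme do this for literals automatically), the int coerces into the decimal or rational type and units * price comes out exact:

```python
from decimal import Decimal
from fractions import Fraction

units = 2
price = Decimal("3.17")         # a true decimal value, not a binary float
print(units * price)            # 6.34, exactly -- the int coerces into Decimal

price_rat = Fraction(317, 100)  # the same value as an exact rational
print(units * price_rat)        # 317/50, i.e. exactly 6.34
```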
Short answer: there's no exact way to write most decimals in binary, and math is hard. :) Here are a couple of handy videos explaining it:<p><a href="https://www.youtube.com/watch?v=PZRI1IfStY0" rel="nofollow">https://www.youtube.com/watch?v=PZRI1IfStY0</a><p><a href="https://www.youtube.com/watch?v=Zzx7HN4wo3A&t=59s" rel="nofollow">https://www.youtube.com/watch?v=Zzx7HN4wo3A&t=59s</a>
Simply put: our compilers and interpreters force us humans to think more like the underlying machines, rather than vice versa. It is ass-backwards. We should absolutely demand that our tools help the computers understand what we want and not the other way around.