Kind of confused about what the point is. Is this a complaint that most languages use IEEE 754 for non-integer numbers and he thinks they shouldn't, or a veiled dig at how many programmers don't know this, or...?

The color coding of the results suggests the author thinks that 2 is wrong and 1 is right, but he's going out of his way to specify floating point numbers, and when subtracting those two floating point numbers, 2 is the IEEE 754-correct result, not 1.

E.g., Ruby thinks 9999999999999999.0 - 9999999999999998.0 = 2, but 9999999999999999 - 9999999999999998 = 1. Which is...correct. Right? Unless you don't think IEEE 754 should be the default?

I feel like the author is trying to make a clever point, but if so, I'm not getting it.
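
For anyone who wants to see it concretely, here's a quick sketch in Ruby (assuming Ruby's Float is a 64-bit IEEE 754 double, which it is in MRI): integers are only exactly representable up to 2**53, and above that doubles can only land on even integers, so 9999999999999999.0 rounds to 1.0e16 the moment it's parsed, before any subtraction happens.

    puts 2**53                                    # 9007199254740992; above this, consecutive doubles are 2 apart
    puts 9999999999999999.0                       # 1.0e+16 (already rounded during parsing)
    puts 9999999999999999.0 - 9999999999999998.0  # 2.0, the correctly rounded IEEE 754 result
    puts 9999999999999999 - 9999999999999998      # 1, because Ruby Integers are arbitrary precision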