Bruce Dawson always does an excellent job of explaining subtle floating-point behaviors simply, but there is one sentence in this particular post that I do not agree with:<p>> However, for any rounding rule that you might come up with there is a case where the double rounding will give you a different answer from a true FMA.<p>For every “directed” rounding mode (up, down, towards zero), rounding the result of an operation first to a higher precision and then to the intended precision gives the same value as rounding directly to the intended precision. For this reason, computing an FMA as “first compute the multiplication in a higher precision, wide enough that no rounding happens in this step, then add the third operand at that same precision, then round to the nominal precision” does not suffer from double-rounding issues in any of these rounding modes (which are exactly the rounding modes defined by IEEE 754 other than “round to nearest”).<p>So you do not even need to “come up with” such rules. They already exist: they are all the standardized rounding modes other than “round to nearest”.<p>Note: the reasoning above assumes the result of the multiplication is representable as a normal number in the higher-precision format. It is a property of the IEEE 754 binary formats that the next more precise one can always represent the product of any two finite numbers from the format below it as a normal number.
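<p>A small sketch of the claim, using Python’s exact Fraction arithmetic in place of real hardware formats. The toy 4-bit and 8-bit precisions and the helper round_to are illustrative only (unbounded exponent, not actual IEEE 754 formats): for round-to-nearest there is a case where rounding through the wider precision changes the answer, while for the directed modes double rounding always agrees with single rounding.

```python
from fractions import Fraction

def round_to(x: Fraction, p: int, mode: str) -> Fraction:
    """Round a positive rational x to a binary float with a p-bit
    significand and unbounded exponent, under the given rounding mode."""
    assert x > 0
    # e = floor(log2(x)): bit lengths give it to within 1, then adjust down.
    e = x.numerator.bit_length() - x.denominator.bit_length()
    if Fraction(2) ** e > x:
        e -= 1
    # Scale so the significand lands in [2^(p-1), 2^p).
    scaled = x * Fraction(2) ** (p - 1 - e)
    floor_s = scaled.numerator // scaled.denominator
    if mode == "down":      # round towards zero / down (x > 0: truncate)
        m = floor_s
    elif mode == "up":      # round up (towards +infinity)
        m = floor_s if scaled == floor_s else floor_s + 1
    else:                   # round to nearest, ties to even
        rem = scaled - floor_s
        half = Fraction(1, 2)
        m = floor_s if rem < half or (rem == half and floor_s % 2 == 0) \
            else floor_s + 1
    return m * Fraction(2) ** (e - p + 1)

# Round-to-nearest: rounding 1 + 2^-4 + 2^-9 through 8 bits of precision
# and then to 4 bits disagrees with rounding once to 4 bits.
x = Fraction(545, 512)  # 1 + 2^-4 + 2^-9
single = round_to(x, 4, "nearest")                          # -> 9/8
double = round_to(round_to(x, 8, "nearest"), 4, "nearest")  # -> 1
print(single, double, single == double)

# Directed modes: double rounding never changes the result.
for num in range(1, 500):
    y = Fraction(num, 13)
    for mode in ("down", "up"):
        assert round_to(round_to(y, 8, mode), 4, mode) == round_to(y, 4, mode)
print("directed modes: no double-rounding mismatch")
```

The directed cases work because every 4-bit value is also an 8-bit value and directed rounding is monotone, so the nearest 4-bit value in the chosen direction is unchanged by the intermediate step; round-to-nearest breaks because the first rounding can land exactly on a tie of the narrower format.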