A couple of thoughts I've always had about floating-point arithmetic:<p>1. IMO it's unfortunate that most languages default to floating-point. Most programmers, most of the time, would be better served by slightly slower but less confusing alternatives (it's nice that Raku uses rational numbers by default; similarly for integers, it's great that Python uses arbitrary-precision integers by default). At any rate, programmers would be a lot less confused about floating-point arithmetic if they had to opt in to it explicitly, e.g. if instead of 0.1 + 0.2 they had to say something super-explicit like (just exaggerating a bit for effect, this is probably impractical anyway; see the sketch at the end of this comment for a practical version):<p><pre><code> NearestRepresentableSum(NearestRepresentable("0.1"), NearestRepresentable("0.2"))
</code></pre>
till they got the hang of it.<p>2. IMO when explaining floating-point arithmetic it helps to add a picture, such as this one (added to Wikipedia by a user named Joeleoj123 in Nov 2020): <a href="https://upload.wikimedia.org/wikipedia/commons/b/b6/FloatingPointPrecisionAugmented.png" rel="nofollow">https://upload.wikimedia.org/wikipedia/commons/b/b6/Floating...</a><p>With this picture (or a better version of it), one can communicate several main ideas:<p>- There are only finitely many representable values (the green points on the number line),<p>- Any literal like "0.1" or "0.2" or "0.3" is interpreted as the closest representable value (the closest green point),<p>- Arithmetic operations like addition and multiplication give the closest green point to the true result,<p>- There are more representable values near 0 and they get sparser farther away (the "floating-point" part),<p>etc.<p>Further, with this picture and the right words, one can infer (or explain) many of the important properties of floating-point arithmetic: why addition is commutative but not associative, why it is a good idea to add the small numbers first, maybe even the ideas behind Kahan summation and what not. The sketches below demonstrate a few of these.
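<p>To make point 1 concrete, here's a minimal Python sketch (Python is just my choice here, since its standard library ships both Fraction and Decimal) contrasting exact rational arithmetic with the "nearest representable" behavior the super-explicit spelling would expose:<p><pre><code>from fractions import Fraction
from decimal import Decimal

# Exact rational arithmetic: 1/10 + 2/10 == 3/10, no surprises.
print(Fraction("0.1") + Fraction("0.2") == Fraction("0.3"))  # True

# Binary floating point: each literal denotes the nearest
# representable double, and + returns the nearest double to
# the true sum of those two doubles.
print(0.1 + 0.2 == 0.3)     # False
print(f"{0.1 + 0.2:.17f}")  # 0.30000000000000004

# Decimal(float) shows exactly which value a literal denotes,
# i.e. what a hypothetical NearestRepresentable("0.1") would return:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
</code></pre>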
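<p>And for point 2, a sketch of the properties one can read off the picture (again Python; kahan_sum is my own spelling-out of the standard compensated-summation algorithm, included purely as an illustration):<p><pre><code># Commutative but not associative: each operation rounds to the
# nearest green point, and where you round matters.
print(0.1 + 0.2 == 0.2 + 0.1)                  # True
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False
print((0.1 + 0.2) + 0.3, 0.1 + (0.2 + 0.3))    # 0.6000000000000001 0.6

# Adding the small numbers first loses less: next to 1e16 the
# green points are 2.0 apart, so a lone 1.0 is rounded away.
xs = [1e16] + [1.0] * 10
print(sum(xs) - 1e16)          # 0.0  (each 1.0 swallowed in turn)
print(sum(sorted(xs)) - 1e16)  # 10.0 (small numbers added first)

# Kahan summation: track the rounding error of each addition in a
# compensation term and feed it back in on the next step.
def kahan_sum(values):
    total, comp = 0.0, 0.0
    for x in values:
        y = x - comp            # re-inject the error from last step
        t = total + y           # low-order bits of y may be lost...
        comp = (t - total) - y  # ...this recovers them
        total = t
    return total

print(kahan_sum(xs) - 1e16)    # 10.0, even without sorting
</code></pre>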