For another visual explanation in words: floating point numbers (ignoring subnormals) are just a piecewise linear approximation of 2^x [1], with one piece for each integer interval (x = 4 to x = 5, etc.). As an example, draw a straight line between 2^4 (16) and 2^5 (32). The floating point numbers in that range are evenly spaced along that line.

Another explanation, using the window + offset terminology from the post, is that the offset is a percentage of the way through the window. So, for a window of 2^x, the difference between an offset of y and an offset of y + 1 is 2^(x-23), i.e. 2^(-23) of 2^x. Put another way, floating point numbers do not have absolute error like integers (where each number is within 1 of a representable value) but relative error: each number is within 2^(-23) of a representable value, as a fraction of its own magnitude. Essentially, floating point numbers use % error bars instead of absolute error bars.

Using this model you can even see how to create your own floating point format. Pick the % precision you want (for single precision that is 2^(-23), for double precision 2^(-52)); that fixes the size of your mantissa (offset). Then pick the range of x values you want to represent; that fixes the size of your exponent (window).

As an aside, subnormal numbers do not respect this principle. They extend the representable range toward very small numbers by sacrificing % precision there. In the very worst case, at the smallest subnormal number, you can get 25% error (it might actually be 50%). As you might imagine, this plays havoc with error propagation: if you ever multiply by a number that happens to land near the smallest subnormal, that multiply can suddenly be off by up to 25% instead of the usual 100 * 2^(-23) percent (about 0.000012%). That is roughly 2,000,000 times the relative error, which is quite a bit harder to compensate for. This is why many people consider subnormals a blemish. (Quick Python sketches of each of these points follow the footnote.)

[1] The approximation is actually offset in the x direction by the bias. If you want to be more accurate, you are actually graphing 2^(x - 127).
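To make the evenly-spaced-window picture concrete, here is a minimal Python sketch (the helper name next_float is made up for illustration; it steps to the adjacent single-precision bit pattern):

    import struct

    def next_float(x):
        # Reinterpret the float32's bits as an integer, add 1, convert back.
        # For positive finite values, adjacent bit patterns are adjacent floats.
        bits = struct.unpack('<I', struct.pack('<f', x))[0]
        return struct.unpack('<f', struct.pack('<I', bits + 1))[0]

    # Within the window [16, 32) the gap between consecutive single-precision
    # floats is constant: 2^(4-23) = 2^-19, about 1.907e-06.
    x = 16.0
    for _ in range(3):
        nxt = next_float(x)
        print(x, '->', nxt, 'gap =', nxt - x)
        x = nxt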
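The % error bars can be checked the same way. In Python 3.9+, math.ulp(x) gives the gap from x to the next double; the gap grows with the value, but gap/value stays pinned near 2^(-52) for all normal numbers:

    import math

    # Absolute spacing grows with magnitude, relative spacing does not.
    for x in [1.0, 16.0, 1e10, 1e300]:
        print(x, 'ulp =', math.ulp(x), 'relative =', math.ulp(x) / x)
    # 'relative' stays within a factor of 2 of 2^-52 (~2.22e-16) every time.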
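The "roll your own format" recipe is then just two logarithms. A toy sketch with a made-up target precision and range (it ignores the exponent codes IEEE reserves for infinities, NaNs, and subnormals):

    import math

    target_rel = 1e-3                  # want % error bars no worse than 0.1%
    lo_exp, hi_exp = -8, 8             # want windows from 2^-8 up to 2^8

    mantissa_bits = math.ceil(-math.log2(target_rel))   # 2^-m <= 1e-3 -> m = 10
    exponent_bits = math.ceil(math.log2(hi_exp - lo_exp + 1))
    print(mantissa_bits, 'mantissa bits,', exponent_bits, 'exponent bits')
    # -> 10 mantissa bits, 5 exponent bits (plus a sign bit: a 16-bit format)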
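Finally, the subnormal blow-up is easy to trigger: math.ulp(0.0) is the smallest positive subnormal double, and a single multiply near it already carries tens of percent of error:

    import math

    tiny = math.ulp(0.0)     # smallest positive subnormal double, 2^-1074
    # Near tiny, the gap between representable values is tiny itself,
    # so rounding can wipe out a large fraction of the value.
    result = tiny * 1.4      # the true answer 1.4*tiny rounds to the
                             # nearest multiple of tiny, which is 1*tiny
    rel_error = abs(result / tiny - 1.4) / 1.4
    print(rel_error)         # ~0.286: about 29% error from one multiply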