# Short version

Muller's Recurrence is a mathematical problem that converges to 5 only under exact arithmetic. This has nothing to do with COBOL, and nothing to do with floating point versus fixed point arithmetic as such. The more precision your arithmetic has, the closer you get to 5 before departing and eventually converging to 100. Python's Decimal package has 28 significant digits of precision by default, whereas ordinary 64-bit floating point has about 16 decimal digits. If you increase your precision, you can linger longer near 5, but eventually you will diverge and then converge to 100.

# Long version

What's going on here is that someone has tried to find the roots of the polynomial

```
x^3 - 108 x^2 + 815 x - 1500,
```
which is equal to

```
(x - 3)(x - 5)(x - 100).
```
So the roots are 3, 5 and 100. We can derive a two-point iteration method by repeatedly dividing by x and replacing the x's that land in the denominators with earlier iterates:

```
x^3 = 108 x^2 - 815 x + 1500
x^2 = 108 x - 815 + 1500/z
x   = 108 - (815 - 1500/z)/y
```
where y = x_{n-1} and z = x_{n-2}. But at this point we don't yet know whether this method converges, and if so, to which roots.

This iteration method can be seen as a map F from R^2 to R^2:

```
F(y, z) = (108 - (815 - 1500/z)/y, y).
```
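Before analyzing F, it is worth watching the iteration run. A minimal sketch in Python (standard library only; the helper name is mine, and the starting pair 4 and 4.25 sits on the special curve toward 5 that comes up below):

```python
# Iterate x_n = 108 - (815 - 1500/x_{n-2})/x_{n-1} from the pair (4, 4.25).
# With 64-bit floats, rounding errors take over and the sequence lands on
# 100; with exact rational arithmetic it keeps converging to 5.
from fractions import Fraction

def iterate(x0, x1, steps):
    z, y = x0, x1
    for _ in range(steps):
        z, y = y, 108 - (815 - 1500 / z) / y
    return y

print(iterate(4.0, 4.25, 30))                            # floats: ~100.0
print(float(iterate(Fraction(4), Fraction(17, 4), 30)))  # exact:  ~5.0
```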
The roots of the polynomial are 3, 5 and 100, so we know that this map F has fixed points (3,3), (5,5) and (100,100). Looking at the derivative of F (meaning the Jacobian matrix), we can see that the eigenvalues of the Jacobian at these fixed points are 100/3 and 5/3, 20 and 3/5, and 1/20 and 3/100, respectively.
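These eigenvalues can be checked mechanically. A sketch with sympy, assuming nothing beyond the map F defined above:

```python
# Jacobian of F(y, z) = (108 - (815 - 1500/z)/y, y) and its eigenvalues
# at the fixed points (3,3), (5,5) and (100,100).
import sympy as sp

y, z = sp.symbols('y z', positive=True)
F = sp.Matrix([108 - (815 - 1500/z)/y, y])
J = F.jacobian([y, z])

for r in (3, 5, 100):
    print(r, J.subs({y: r, z: r}).eigenvals())
# 3:   {100/3: 1, 5/3: 1}
# 5:   {20: 1, 3/5: 1}
# 100: {1/20: 1, 3/100: 1}
```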
So (3,3) is a repulsive fixed point (both eigenvalues > 1): any small deviation from it is amplified when the map F is applied iteratively. (100,100) is an attracting fixed point (both eigenvalues < 1). And (5,5) has one eigenvalue much larger than 1 and one slightly less than 1, so it is attracting only when approached from a specific direction.

Kahan [1, page 3] outlines a method for finding sequences that converge to 5. We can choose beta and gamma freely in his method (Kahan uses different values for the coefficients of the polynomial, though), and with a lot of algebra (it took me two pages with pen and paper) we can eliminate beta and gamma and get to the bottom of it. What it comes down to is: for any 3 < z < 5, choose y = 8 - 15/z, and the pair (z, y) starts a sequence that converges to 5 (this is the curve that the pair (4, 4.25) in the sketch above sits on). But only if you have exact arithmetic with no rounding errors.

For the big picture, we have this map F; you can plot a 2D vector field of F, or rather of F(y,z) - (y,z), to see the steps. Almost every point in the plane starts a trajectory that converges to (100,100). The exceptions are (3,3) and (5,5), which are fixed points themselves, and then there is this peculiar small segment of a curve from (3,3) to (5,5): if we start exactly on that curve and use exact arithmetic, we converge to (5,5).

Now that we understand the mathematics, we can conclude:

Any iteration with finite precision will, at every step, accrue rounding errors and step by step end up further and further away from the mathematical curve, inevitably converging to (100,100) in the end. Using higher precision arithmetic, we can initially get the semblance of closing in on (5,5), but eventually we reach the limit of our precision, and further steps take us away from (5,5) and toward (100,100).

The blog post is maybe a little misleading. This has nothing to do with COBOL and nothing to do with fixed point arithmetic. It just happens that by default Python's Decimal package has more precision (28 significant digits) than 64-bit floating point (53 significand bits, so around 16 decimal digits). Any iteration, at any finite precision no matter how high, run long enough, will eventually diverge away from 5 and then converge to 100.

Specifically, if you were to choose a floating point arithmetic with higher precision than the fixed point arithmetic, the floating point would "outperform" the fixed point, in the sense of getting nearer to 5 before going astray.
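To put numbers on "linger longer near 5", here is a sketch that counts, at a few Decimal precisions, how many iterations stay within an arbitrarily chosen 0.1 of 5, starting from the same pair (4, 4.25) as above:

```python
# Count how long the iteration stays near 5 at various Decimal precisions.
# More digits buy more iterations near 5; the final value is 100 regardless.
from decimal import Decimal, getcontext

def steps_near_five(prec, max_steps=200):
    getcontext().prec = prec
    z, y = Decimal(4), Decimal("4.25")
    last = 0
    for n in range(1, max_steps + 1):
        z, y = y, 108 - (815 - 1500 / z) / y
        if abs(y - 5) < Decimal("0.1"):
            last = n
    return last, y

for prec in (10, 28, 50):
    near, final = steps_near_five(prec)
    print(prec, near, final)
# The count of steps near 5 grows roughly linearly with the number of
# digits; the final value is 100 in every case.
```

[1] https://people.eecs.berkeley.edu/~wkahan/Math128/M128Bsoln09Feb04.pdf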