This is my favorite example of the Monte Carlo method. I first learned it as a story about a farmer who needs to estimate the size of a circular pond surrounded by a square fence he can't see over, given only an unlimited supply of pebbles to toss over it. It's a wonderful learning tool. One question that comes up involves the finite precision of the variables you're using. In this post, it would be this part:<p><pre><code> (x - 0.5)**2 + (y - 0.5)**2 <= 0.25
</code></pre>
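For context, here's a minimal sketch of the kind of estimator that check sits inside (the function name, sample count, and use of np.random.default_rng are my own choices, not necessarily what the post does):<p><pre><code>import numpy as np

def estimate_pi(n_samples=1_000_000, seed=0):
    # Draw points uniformly in the unit square [0, 1) x [0, 1)
    rng = np.random.default_rng(seed)
    x = rng.random(n_samples)
    y = rng.random(n_samples)
    # A point is inside the inscribed circle (center (0.5, 0.5), radius 0.5)
    # when (x - 0.5)^2 + (y - 0.5)^2 <= 0.25
    inside = (x - 0.5)**2 + (y - 0.5)**2 <= 0.25
    # Area ratio of circle to square is pi/4, so scale the hit rate by 4
    return 4 * inside.mean()

print(estimate_pi())
</code></pre>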
In C, it's easy to state the limit on how precise the final estimate for pi can be, because you have to state the type of the variable you're using to store it, and you know how many bits that type is allocated. In Python, this doesn't seem so simple (<a href="https://docs.python.org/3/tutorial/floatingpoint.html#representation-error" rel="nofollow">https://docs.python.org/3/tutorial/floatingpoint.html#repres...</a>).<p>I question whether Python internally uses heap-allocated, arbitrarily sized numbers so the digits in the pi estimate can keep growing as the Monte Carlo run goes on, or whether it's limited to pre-defined sizes that cap the precision. Also, it isn't readily obvious to me how to parameterize the np.random call to keep getting more and more digits in the values for x and y, so that the comparison above can keep getting more precise. It'd be cool to see a Python implementation that really can scale until an arbitrary amount of available memory is used, and to see the relationship between memory used and pi digits calculated.
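Purely as a sketch of one way that could go (nothing below is from the post; the function name, bit width, and use of random.getrandbits are all my own assumptions): you can sidestep float precision entirely by drawing integer coordinates and doing the circle test in exact integer arithmetic, since Python ints are heap-allocated and grow as needed.<p><pre><code>import random
from fractions import Fraction

def estimate_pi_exact(n_samples=100_000, bits=256, seed=0):
    # Treat the unit square as a (2**bits) x (2**bits) integer grid and do
    # the circle test in exact integer arithmetic; Python ints live on the
    # heap and grow to whatever size the squared terms need.
    rng = random.Random(seed)
    half = 1 << (bits - 1)      # center coordinate and circle radius
    r_squared = half * half
    inside = 0
    for _ in range(n_samples):
        x = rng.getrandbits(bits)
        y = rng.getrandbits(bits)
        if (x - half)**2 + (y - half)**2 <= r_squared:
            inside += 1
    # Return the estimate as an exact rational; rounding only happens
    # if you choose to convert it to a float afterwards
    return Fraction(4 * inside, n_samples)

print(float(estimate_pi_exact()))
</code></pre>
Here the memory per coordinate scales with the bits parameter, though as I understand it the number of correct digits you actually get is eventually governed by the sample count rather than the coordinate precision.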