Statisticians call this <i>inverse transform sampling</i> [0]. The dual result, that Y = F(X) is uniformly distributed for F the cdf of any continuous random variable X, is called the <i>probability integral transform</i> [1].<p>0. <a href="https://en.wikipedia.org/wiki/Inverse_transform_sampling" rel="nofollow">https://en.wikipedia.org/wiki/Inverse_transform_sampling</a><p>1. <a href="https://en.wikipedia.org/wiki/Probability_integral_transform" rel="nofollow">https://en.wikipedia.org/wiki/Probability_integral_transform</a>
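<p>A minimal sketch of the idea in Python (the exponential distribution is just an illustrative choice, since its inverse CDF has a closed form):

  import math
  import random

  def sample_exponential(rate):
      # Inverse transform sampling: draw U ~ Uniform(0, 1) and apply the
      # inverse CDF of Exponential(rate), which is -ln(1 - u) / rate.
      u = random.random()
      return -math.log(1.0 - u) / rate

  samples = [sample_exponential(2.0) for _ in range(100_000)]
  print(sum(samples) / len(samples))  # should be close to 1 / rate = 0.5

The same recipe works for any distribution whose inverse CDF, or a good numerical approximation of it, is available.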
For those needing more detail on generating numbers from a specific distribution there is Knuth's <i>The Art of Computer Programming</i>, Vol. 2, <i>Seminumerical Algorithms</i>; see section 3.4, <i>Other types of random quantities</i>. Special treatment is given to many distributions: normal, exponential, gamma, chi-squared, F, binomial, and Poisson, along with a discussion of general methods. Although there have been more recent advances in high-quality, high-speed generation of uniformly distributed random numbers, Knuth covers a whole range of issues related to pseudorandom number generation.
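<p>For instance, here is a rough sketch of one of the distribution-specific algorithms treated in that section, the polar method for normal deviates: accept a uniformly distributed point inside the unit disk, then transform it into two independent standard normal variates.

  import math
  import random

  def two_normals():
      # Rejection step: keep only points strictly inside the unit disk.
      while True:
          v1 = random.uniform(-1.0, 1.0)
          v2 = random.uniform(-1.0, 1.0)
          s = v1 * v1 + v2 * v2
          if 0.0 < s < 1.0:
              factor = math.sqrt(-2.0 * math.log(s) / s)
              return v1 * factor, v2 * factor

  print(two_normals())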
I used inverse transform sampling extensively in my thesis, both for fitting distributions to data (new methods) and for similarity analysis of CDFs. In the software engineering field I am frequently appalled by how often basic probability techniques are ignored in favor of just the "mean" and "standard deviation".
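<p>As a generic illustration of the CDF-similarity idea (not the specific methods from my thesis): the two-sample Kolmogorov-Smirnov distance is simply the largest vertical gap between two empirical CDFs. A small sketch, assuming NumPy is available:

  import numpy as np

  def ks_distance(a, b):
      # Evaluate both empirical CDFs on the pooled sample and take the max gap.
      grid = np.sort(np.concatenate([a, b]))
      ecdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
      ecdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
      return np.abs(ecdf_a - ecdf_b).max()

  rng = np.random.default_rng(1)
  print(ks_distance(rng.normal(0.0, 1.0, 5000), rng.normal(0.2, 1.0, 5000)))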
I'm sorry, but this post is not a very good explanation. Here's its justification for inverting the CDF:<p>> The issue is that if we flip x and y’s in a PDF, there would be multiple y values corresponding to the same x. This isn’t true in a CDF.<p>Surely there must be a better reason to invert the CDF than that it's possible? The standard one-line argument: if U is uniform on [0, 1], then P(F^-1(U) ≤ x) = P(U ≤ F(x)) = F(x), so F^-1(U) has exactly the CDF F.<p>Any textbook will explain this much, much better.