Very nice, much faster than simulating it. I guess if you're using an informative prior, instead of adding 1 to the number of successes and failures, you add the corresponding parameters of your beta prior?<p>A pretty good shortcut (if you don't have a log-beta function, for example) is to approximate both A and B with normal distributions. The difference of two normals is also normal, so you can just check the probability that a normal random variable is less than zero.<p>Specifically, the mean μ of a beta distribution is α/(α+β) and the variance σ^2 is αβ/((α+β)^2(α+β+1)). Use these as the parameters for your normal approximation, and we have the difference D ~ N(μ_A-μ_B, σ_A^2+σ_B^2). The probability that B beats A is just the CDF of D evaluated at 0.<p>In Python:<p><pre><code> from scipy.stats import norm
 def beta_mean(a, b):
     return a / (a + b)

 def beta_var(a, b):
     return a * b / ((a + b)**2 * (a + b + 1))

 def probability_B_beats_A(α_A, β_A, α_B, β_B):
     mu = beta_mean(α_A, β_A) - beta_mean(α_B, β_B)
     sigma = (beta_var(α_A, β_A) + beta_var(α_B, β_B))**.5
     return norm.cdf(0, mu, sigma)</code></pre>
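<p>As a sanity check, the normal approximation can be compared against direct Monte Carlo sampling from the two beta posteriors. This is a sketch: the counts below (30/70 vs 40/60) are made-up example data, and the NumPy sampling is my own addition, not part of the shortcut itself.

```python
import numpy as np
from scipy.stats import norm

def beta_mean(a, b):
    return a / (a + b)

def beta_var(a, b):
    return a * b / ((a + b)**2 * (a + b + 1))

def probability_B_beats_A(a_A, b_A, a_B, b_B):
    # Normal approximation to P(B > A) for independent beta posteriors
    mu = beta_mean(a_A, b_A) - beta_mean(a_B, b_B)
    sigma = (beta_var(a_A, b_A) + beta_var(a_B, b_B)) ** 0.5
    return norm.cdf(0, mu, sigma)

# Hypothetical posteriors: Beta(30, 70) for A, Beta(40, 60) for B
rng = np.random.default_rng(0)
n = 200_000
samples_A = rng.beta(30, 70, size=n)
samples_B = rng.beta(40, 60, size=n)
mc_estimate = (samples_B > samples_A).mean()
approx = probability_B_beats_A(30, 70, 40, 60)
print(mc_estimate, approx)  # both ≈ 0.93
```

With counts this large the betas are close to normal, so the two answers agree to a couple of decimal places; for very small counts or parameters near 0/1 the betas are skewed and the approximation degrades.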