> If you aim to make inferences about which ideas work best, you should pick a sample size prior to the experiment and run the experiment until the sample size is reached.

That's not a very Bayesian thing to say. It shouldn't matter what sample size you decided to pick at the beginning: a Bayesian method yields reasonable results at every step of the experiment, and it lets you keep testing until you feel comfortable with the posterior probability distributions.

If 10 customers have converted so far and 30 haven't, you would expect the conversion rate to lie roughly between 10% and 40%, as this plot of the Beta(10, 30) distribution shows:

http://www.wolframalpha.com/input/?i=plot+BetaDistribution+10+30

You then do the same with method B, and stop testing once the overlap between the two posterior distributions looks small enough.

Anscombe's rule is interesting, but it seems to depend rather critically on the number of future customers, which is hard to estimate. The advantage of the visual approach outlined above is that it is more intuitive, and people can use their best judgment to decide whether to keep testing or not.

Disclaimer: I am not an A/B tester.
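
For what it's worth, here is a rough sketch of the approach described above in Python, assuming numpy and scipy are available. The counts for method B are made up purely for illustration. It computes a Beta posterior for each conversion rate (with a uniform Beta(1, 1) prior, so the 10-of-40 example gives Beta(11, 31), essentially the Beta(10, 30) curve linked above), prints 95% credible intervals for the eyeball-the-overlap step, and also estimates P(B beats A) by sampling, which is a slightly more direct stopping criterion.

```python
import numpy as np
from scipy import stats

# Hypothetical counts (made up for illustration): conversions / non-conversions so far.
a_conv, a_fail = 10, 30   # method A, the 10-of-40 example above
b_conv, b_fail = 18, 32   # method B

# Posterior over each conversion rate, using a uniform Beta(1, 1) prior.
post_a = stats.beta(a_conv + 1, a_fail + 1)
post_b = stats.beta(b_conv + 1, b_fail + 1)

# 95% credible intervals -- the "does the overlap look small enough?" check.
print("A 95% credible interval:", post_a.interval(0.95))
print("B 95% credible interval:", post_b.interval(0.95))

# Monte Carlo estimate of P(rate_B > rate_A), a more direct stopping criterion.
rng = np.random.default_rng(0)
samples_a = post_a.rvs(size=100_000, random_state=rng)
samples_b = post_b.rvs(size=100_000, random_state=rng)
print("P(B beats A) ≈", (samples_b > samples_a).mean())
```

If P(B beats A) is near 0.5, the posteriors still overlap heavily and it is probably worth collecting more data; values close to 0 or 1 correspond to the "overlap looks small enough" judgment made visually above.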