TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

A/B Testing: How Much Data Do You Need?: Blog: Fuel Interactive

5 points by RexDixon, over 15 years ago

2 comments

aresant, over 15 years ago
If you're interested in this post: the real key to knowing how much data you need to call a test a valid winner is the standard deviation.

Detailed explanations in the posts below:

http://www.conversionvoodoo.com/blog/what-is-ab-and-multivariable-testing/

http://blog.joshbaker.com/2009/01/21/standard-deviation-and-marketing-how-why/

http://snaphawk.blogspot.com/2009/07/how-does-google-website-optimizer.html
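The linked posts don't include code, but the standard-deviation idea the comment points at is usually applied as a two-proportion z-test: the observed difference between variants is compared against its standard error. A minimal sketch (my framing, not taken from the linked posts; the example counts are made up):

```python
import math

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the observed difference between
    variants A and B large relative to sampling noise (standard error)?"""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test: 1,000 visitors per arm, 50 vs 65 conversions
z = z_score(50, 1000, 65, 1000)
print(round(z, 2))  # |z| > 1.96 would be significant at the 95% level
```

Here the 1.5-point lift is not yet significant at 95%, which is exactly the "how much data" question: more traffic shrinks the standard error until a real difference stands out.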
jfarmer, over 15 years ago
Not much content in the article, and they don't even answer the question (which involves math).

How much data do you need? That depends on two things: the expected effect size and your confidence interval.

With enough data you will ALWAYS reach statistical significance. The tighter your confidence interval, the more data you need; the smaller your expected effect size, the more data you need.

For example, these are two different questions:

1. What is the likelihood that the observed difference between the test and control candidates is real?
2. What is the likelihood that the test candidate is at least a 20% improvement over the control candidate?

If you're swinging for the fences and need 10-50% improvements in your metrics, you can shut down tests early once they prove unlikely to generate those kinds of returns.

The "usual" way of doing things is to let the A/B test run until you reach statistical significance, regardless of effect size. But unless you're Google or Facebook, spending 100,000 impressions to get your 1% improvement is probably not worth it.
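The comment's point about effect size and confidence can be made concrete with the standard two-proportion sample-size approximation (a textbook formula, not something from the article; baseline and lift values below are illustrative):

```python
import math

def sample_size_per_arm(p_base, p_target, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per arm to detect a lift from
    p_base to p_target at ~95% confidence and ~80% power.
    z_alpha: two-sided critical value; z_beta: power critical value."""
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    effect = p_target - p_base
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# A 20% relative lift on a 5% baseline is relatively cheap to detect:
print(sample_size_per_arm(0.05, 0.06))
# A 1% relative lift on the same baseline needs vastly more traffic:
print(sample_size_per_arm(0.05, 0.0505))
```

The second call returns a figure in the millions per arm, which is the comment's closing point in numbers: chasing a 1% improvement is only economical at Google/Facebook scale.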