Determining A/B test sample size

25 points by noahnoahnoah over 13 years ago

4 comments

equark over 13 years ago
Somebody really needs to write a Bayesian takedown of all these A/B testing articles. A/B testing is a Bayesian decision problem; there's really no other way to think about it. Determining sample size and frequentist confidence intervals are only relevant insofar as they approximate Bayesian concepts.

The issue is the proper tradeoff between exploration and exploitation. What drives the decision is the outstanding uncertainty conditional on the data observed (not conditional on the null hypothesis of zero effect and some non-sequential iid sampling process), the discount rate (which is totally absent in this article), and the reward structure (which is not captured by Type I and Type II errors).

The absurdity of the frequentist approach is clear from the admonition not to look at the results of the tests too often.
Comment #3019771 not loaded
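As an illustration of the Bayesian framing described in the comment above, here is a minimal sketch (not from the thread) using Beta-Binomial posteriors and Monte Carlo sampling to get the posterior probability that one variant beats the other, plus the expected loss of committing to each. The conversion counts and the flat Beta(1, 1) prior are invented, and the discount rate and sequential aspects the comment mentions are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data: conversions / visitors for each variant
conv_a, n_a = 120, 1000
conv_b, n_b = 140, 1000

# With a flat Beta(1, 1) prior, the posterior for each conversion rate is
# Beta(conversions + 1, non-conversions + 1).
samples_a = rng.beta(conv_a + 1, n_a - conv_a + 1, size=100_000)
samples_b = rng.beta(conv_b + 1, n_b - conv_b + 1, size=100_000)

# Posterior probability that B is better, conditional on the data observed.
prob_b_beats_a = (samples_b > samples_a).mean()

# Expected loss: the conversion rate you expect to give up by shipping a
# variant that turns out to be the worse one.
loss_if_ship_a = np.maximum(samples_b - samples_a, 0).mean()
loss_if_ship_b = np.maximum(samples_a - samples_b, 0).mean()

print(f"P(B > A) = {prob_b_beats_a:.3f}")
print(f"Expected loss if we ship A: {loss_if_ship_a:.5f}")
print(f"Expected loss if we ship B: {loss_if_ship_b:.5f}")
```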
bryanh over 13 years ago
I rarely see people take into account the opportunity cost of letting a really close A/B test reach 99.99% confidence when the benefit is by definition very marginal (that's why it's taking so long, right?). I mean, is it really that bad to go on "close enough" results and move on to bigger and better tests?
Comment #3019694 not loaded
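The "that's why it's taking so long" intuition above can be made concrete with the standard two-proportion sample-size formula: required traffic grows roughly with the inverse square of the lift you want to detect, so the marginal wins are exactly the ones that demand the most waiting. A rough sketch, with an invented 10% baseline conversion rate:

```python
from scipy.stats import norm

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Approximate visitors per variant for a two-sided two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # significance threshold
    z_beta = norm.ppf(power)            # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

baseline = 0.10
for lift in (0.02, 0.01, 0.005):        # absolute lift over the baseline
    n = sample_size_per_arm(baseline, baseline + lift)
    print(f"detect +{lift:.3f}: ~{n:,.0f} visitors per arm")
```

Halving the detectable lift roughly quadruples the visitors needed per arm, which is the opportunity cost the comment is pointing at.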
DanielRibeiro over 13 years ago
Another way to see this is to use this online calculator: http://visualwebsiteoptimizer.com/ab-split-significance-calculator/
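For reference, split-test significance calculators of this kind typically boil down to a two-proportion z-test along the lines below. This is a sketch of that generic calculation, not the linked tool's actual code, and the counts are invented.

```python
from math import sqrt
from scipy.stats import norm

def split_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - norm.cdf(abs(z)))

p = split_test_p_value(120, 1000, 140, 1000)
print(f"p-value: {p:.3f}  (~{(1 - p) * 100:.0f}% confidence)")
```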
Loic over 13 years ago
If you are lazy, you can get the functions coded in PHP here: http://abtester.com/calculator/