The "default" alpha and beta are not the correct ones for a website A/B test.<p>If you're designing a drug, you'd better be very careful not to accidentally approve something that is useless. It would cost a ton of money and lives in the long run if it were no better than placebo. False rejection of the null is very bad; false acceptance of the null is not so bad.<p>By contrast, if you're running an A/B test on a website, you're not in bad shape if you mistakenly conclude that a red button is a bit better than a blue button, assuming they're pretty close. False rejection of the null is okay.<p>However, you are screwed if you miss the chance that a red button gives you 50% more conversions. With websites, false acceptance of the null is very bad. It's okay to mistakenly think your button is effective, but it's very bad to mistakenly think the button is ineffective.<p>Websites have the opposite cost-benefit calculation to science generally and shouldn't use the same parameters.
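To make the trade-off concrete, here's a minimal sketch of how the alpha/power choice changes the required sample size, using the standard normal-approximation formula for a two-proportion test. The conversion rates (a hypothetical 5% baseline with a 50% lift to 7.5%) and the "science" vs. "website" parameter choices are illustrative assumptions, not prescriptions:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p_a, p_b, alpha, power):
    """Approximate per-arm sample size for a two-sided two-proportion z-test
    (normal approximation): n = (z_alpha + z_beta)^2 * (var_a + var_b) / delta^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p_a * (1 - p_a) + p_b * (1 - p_b)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_a - p_b) ** 2)

# Hypothetical rates: 5% baseline conversion, red button lifts it 50% to 7.5%.
baseline, lifted = 0.05, 0.075

# "Science" defaults: strict alpha, modest power (guards against false positives).
n_science = n_per_arm(baseline, lifted, alpha=0.05, power=0.80)

# "Website" choice: looser alpha, high power (guards against missed big wins).
n_website = n_per_arm(baseline, lifted, alpha=0.10, power=0.95)

print(n_science, n_website)
```

The point of the comparison: buying high power (low beta) costs more samples even after relaxing alpha, which is exactly the price you pay to avoid missing a 50% lift.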
> [F]or any study that requires sampling ... making sure we have enough data to ensure confidence in results is absolutely critical.<p>Is this necessarily true if you can sample from the population in a fair and unbiased way?
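It arguably is: even a perfectly fair, unbiased sampling procedure still produces estimates whose spread depends on sample size. A quick Monte Carlo sketch (the 10% "true" conversion rate and sample sizes are made-up numbers for illustration):

```python
import random
import statistics

random.seed(0)
TRUE_RATE = 0.10  # hypothetical true conversion rate

def estimate_spread(n, trials=1000):
    """Draw `trials` unbiased samples of size n from a Bernoulli(TRUE_RATE)
    population and return the standard deviation of the rate estimates."""
    estimates = [
        sum(random.random() < TRUE_RATE for _ in range(n)) / n
        for _ in range(trials)
    ]
    return statistics.stdev(estimates)

# Both samplers are unbiased, but the small one is far noisier:
spread_small = estimate_spread(50)    # theory: sqrt(0.1*0.9/50)   ~ 0.042
spread_large = estimate_spread(2000)  # theory: sqrt(0.1*0.9/2000) ~ 0.007

print(spread_small, spread_large)
```

Unbiasedness fixes the center of the estimate, not its variance, so "enough data" still matters for confidence in any single study's result.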