A/B testing our friends

8 points by azarias over 12 years ago

6 comments

birken over 12 years ago

I'd be slightly more careful about making the claim you are making (though in your defense, nearly every person who makes a boastful post about their A/B testing makes the same mistake).

For starters, you are picking an incredibly noisy metric: taking the maximum observed conversion and the minimum observed conversion and then dividing them. To better illustrate this, I wrote a little test script to simulate what you did many times. It assumes you have 4 email variations, send 125 emails each, and the average conversion rate is 5% across the buckets. Then it takes the minimum converting bucket, the maximum converting bucket, and divides them. It does that 10k times and averages out what the gain would be by this metric, which is ~2.7x (code here: https://gist.github.com/3846278). 2.7x is a lot, but of course in this example it is pure noise; every single bucket converts exactly the same over the long term.

This metric is also particularly unhelpful because your goal is to have one high-converting email. The metric you have chosen will be artificially boosted if you happen to have one horribly converting email. While that is potentially interesting fodder in just how differently various emails can convert, having one horrible email doesn't help you very much.

A better way for you to do this analysis is to dump your raw results into a mathematically sound A/B test calculator (I'm a bit of a homer, but I don't think you can beat ABBA: http://www.thumbtack.com/labs/abba/), then look at the confidence ranges of the conversion rates of the various emails and only make claims based on that. Like... I tried 4 versions of email copy and got my conversion rate up to X% (+/- some hopefully small confidence interval)! One of the emails was a real stinker and only converted at Y% (+/- confidence interval); thankfully I A/B tested first and didn't end up getting stuck with that one!
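The simulation birken describes is easy to reproduce. Here is a rough sketch of the same idea (not the linked gist itself; the zero-conversion guard is my own addition to avoid dividing by zero):

```python
import random

def simulate_ratio(buckets=4, emails=125, rate=0.05, trials=10_000):
    """Average max/min conversion ratio across identical buckets.

    Every bucket converts at the same true rate, so any spread
    between the best and worst bucket is pure sampling noise.
    """
    ratios = []
    for _ in range(trials):
        conversions = [
            sum(random.random() < rate for _ in range(emails))
            for _ in range(buckets)
        ]
        lo, hi = min(conversions), max(conversions)
        if lo > 0:  # skip the rare run where a bucket converts nobody
            ratios.append(hi / lo)
    return sum(ratios) / len(ratios)

random.seed(0)
print(round(simulate_ratio(), 2))  # typically in the 2.5x-3x range
```

Even with identical emails, the "best vs. worst bucket" metric reports a large apparent lift, which is birken's point: the ~2.7x figure can appear from noise alone.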
partymon over 12 years ago

I think one thing to be careful about is not to read too much into responses from your friends... they are already biased. That said, the difference within that set is interesting.
fluxon over 12 years ago

Cache, in case it 404s: http://webcache.googleusercontent.com/search?q=cache:blog.meritful.com/post/33023288815/a-b-testing-our-friends
jackds over 12 years ago

When do friends-and-family announcements turn into spam? I know many marketing email providers are wary of you writing to your entire address book of people who did not opt in.
b2rock over 12 years ago
Are there any best practices for launch day email announcements? Good ways to build a list, or take advantage of one you already have?
gojomo over 12 years ago

From the graph labels, it appears the subject line which moved the most relevant info to the front won, as opposed to leaving out the startup name or pushing it to the end.

But they don't mention the sample size, and they do mention that there were other changes in the email content and call-to-action. Nor do they make any mention of statistical significance.

So we can't tell what actually helped, only that perhaps (*if* we assume they did their significance-testing correctly) some set of changes can make a big difference.
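gojomo's point about sample size can be made concrete with a confidence interval. A sketch using the Wilson score interval (the 10-conversions-from-125-emails figures are hypothetical, not numbers from the post):

```python
from math import sqrt

def wilson_interval(conversions, n, z=1.96):
    """95% Wilson score interval for a conversion rate."""
    if n == 0:
        return (0.0, 0.0)
    p = conversions / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (center - margin, center + margin)

# Hypothetical: 10 conversions from 125 emails (8% observed rate)
lo, hi = wilson_interval(10, 125)
print(f"{lo:.1%} to {hi:.1%}")  # roughly 4.4% to 14.1%
```

At 125 emails per bucket, the interval spans more than a 3x range, so two buckets whose observed rates differ by 2x may well be statistically indistinguishable.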