Ask HN: What is your one tip about A/B testing you would share?

4 points by raycloyd over 10 years ago
Wondering what everyone would tell someone new to A/B testing for websites and conversion rate optimization. Could be anything from expectations to approach, etc. Thanks!

9 comments

Someone1234 over 10 years ago
That most A/B tests don't prove what they aim to prove. People just add a new version of the site, wait until 100 people have seen it, measure a small improvement, and migrate to the new version.

However, if you then run the test a second time, but make the "old" site the alternative, you might find that it, too, shows a small improvement, and thus you could bounce back and forth between A and B until the end of time.

I am not a statistician, so I won't try to give you advice on what a statistically significant result is. However, many, many articles have been written on that topic, and many products have been designed which you can slot your data into to see if the 'B' option is legitimately better.
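As a concrete illustration of the kind of check being described (an editorial sketch, not part of the comment), here is a minimal two-proportion z-test in Python; the visitor and conversion counts are made-up example numbers:

    # Two-proportion z-test: is variant B's conversion rate significantly
    # different from A's? Reject the null hypothesis only if p < alpha.
    from math import sqrt
    from scipy.stats import norm

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - norm.cdf(abs(z)))                # two-sided
        return z, p_value

    z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=145, n_b=2400)
    print(f"z = {z:.2f}, p = {p:.3f}")

If p comes out above your chosen alpha (commonly 0.05), the "improvement" is indistinguishable from noise, which is exactly how an A/B run and a B/A rerun can both appear to win.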
onion2k over 10 years ago
Learn what "statistical power" (http://en.wikipedia.org/wiki/Statistical_power) means, and then understand that the fact that n% of users prefer one option *doesn't* necessarily make it a good idea. Statistical significance is *far* more important than the actual result. This is a really good paper about it: http://www.qubitproducts.com/sites/default/files/pdf/most_winning_ab_test_results_are_illusory.pdf
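To make the power point concrete (an editorial sketch, not from the comment): power fixes how many visitors you need per variant before a lift of a given size is even detectable. The baseline rate and minimum detectable effect below are made-up numbers, and the calculation is the standard two-proportion sample-size approximation:

    # Visitors required per variant to detect a lift from p1 to p2
    # at significance level alpha with the given power (1 - beta).
    from math import ceil, sqrt
    from scipy.stats import norm

    def sample_size_per_variant(p1, p2, alpha=0.05, power=0.8):
        z_alpha = norm.ppf(1 - alpha / 2)      # two-sided test
        z_beta = norm.ppf(power)
        p_bar = (p1 + p2) / 2
        numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                     + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return ceil(numerator / (p2 - p1) ** 2)

    # e.g. detecting a lift from a 5% to a 6% conversion rate
    print(sample_size_per_variant(0.05, 0.06))   # roughly 8,000 visitors per variant

Stopping a test well short of that number is one of the ways the "illusory winners" in the linked paper arise.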
dairgram over 10 years ago
Before you get too far, try an A/A test. By this, I mean let the A and B choices be identical. You would certainly expect the outcomes to be equal. Right?

I have seen statistically significant differences in outcomes in A/A testing.

A/B testing has value, but being sure to A/A test may temper your expectations and/or point at problems in your setup before you get too far.
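A small simulation (an editorial addition, with made-up traffic numbers) shows why A/A tests can look significant: at alpha = 0.05, roughly 1 in 20 comparisons of identical variants will cross the threshold purely by chance:

    # Simulate many A/A tests: both "variants" share the same true conversion
    # rate, yet ~5% of runs still come out "significant" at alpha = 0.05.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    alpha, true_rate, visitors, runs = 0.05, 0.05, 5000, 10_000
    false_positives = 0

    for _ in range(runs):
        conv_a = rng.binomial(visitors, true_rate)
        conv_b = rng.binomial(visitors, true_rate)
        p_pool = (conv_a + conv_b) / (2 * visitors)
        se = np.sqrt(p_pool * (1 - p_pool) * (2 / visitors))
        z = (conv_b - conv_a) / (visitors * se)
        if 2 * (1 - norm.cdf(abs(z))) < alpha:
            false_positives += 1

    print(false_positives / runs)   # ~0.05, despite identical variants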
domrdy over 10 years ago
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#sticky - to make clients 'stick' to upstreams, if you're using nginx.
mtmail over 10 years ago
If you radically change a feature, then regular users will start playing with it at first, simply because it's new. In that case your test needs to be longer or needs to exclude regular users.
seekingcharlie over 10 years ago
Ensure that you're testing things that are actually going to impact your conversion rate or goals. You don't have to test everything - just the things that matter.
siddharthdeswal over 10 years ago
Don't A/B test your credibility. Not everything should be (or can be) subjected to hypothesis testing.
hkiely over 10 years ago
Choose and test one independent variable at a time. Then, check your results for statistical significance.
catman01 over 10 years ago
Kittens will always improve conversion rates.