Here is an alternate theory.

Stick the post-conversion numbers into http://elem.com/~btilly/effective-ab-testing/g-test-calculator.html (41 successes out of 638 trials versus 35 successes out of 416 trials) and the conclusion of unequal performance has only 72.42% confidence. In other words, more than 1 time in 4 you'd see a difference that big or bigger purely by chance. (A quick way to rerun that check yourself is sketched at the end of this comment.)

So the entire basis of this post could be a chance statistical fluctuation that should be ignored.

It is true that there can be effects where pushing less qualified leads through the top stage of the funnel doesn't get them to the end. However, in my experience with A/B testing it is more common for the extra people that a test pulls in at the top to convert the rest of the way at a similar rate.

But not always! Which is why, if you have sufficient volume, you should always measure all the way to actual sales. There is no other way to be absolutely sure that you are improving end sales.

However, in this example that would mean running the test for something like 20x as long (the rough power calculation at the end shows why low base rates are so expensive to test). In that case it makes sense to be pragmatic: test from one step of the funnel to the next, and pivot on the answers you get. Furthermore, to start you should focus on the top of the funnel, for the simple reason that higher volumes will get you answers faster there; you can easily try a dozen ideas before you could test one idea deeper in the funnel.

Once you've improved your site enough to convert a better percentage of visitors into actual sales, you'll be able to purchase more traffic. Doing both of those things will put you in a position to run more rigorous A/B tests that can pick out subtler differences. But that is down the road. Focus on testing what is easiest, in the quickest possible way, first.
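For anyone who wants to rerun the significance check without the calculator, here is a minimal sketch. It assumes SciPy and uses chi2_contingency with the log-likelihood statistic, which is a G-test; depending on continuity corrections and how a given calculator converts the statistic into a "confidence," the exact percentage may differ a little from the 72.42% above, but the conclusion is the same: nowhere near significant by conventional standards.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Post-conversion counts from the post: 41 of 638 versus 35 of 416.
    table = np.array([[41, 638 - 41],
                      [35, 416 - 35]])

    # lambda_="log-likelihood" gives a G-test instead of Pearson's chi-square;
    # correction=False turns off the Yates continuity correction.
    g, p, dof, expected = chi2_contingency(table, correction=False,
                                           lambda_="log-likelihood")
    print(f"G = {g:.3f}, p = {p:.3f}, confidence of a real difference = {1 - p:.1%}")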
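And on the "20x as long" point: the exact multiple depends on your real conversion rates, but a rough power calculation shows the shape of the problem. This sketch assumes statsmodels and uses made-up illustrative rates (roughly 7% at the top of the funnel versus 0.5% at the final sale, detecting a 25% relative lift in both cases); swap in your own numbers.

    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    def visitors_needed(base_rate, relative_lift=0.25, alpha=0.05, power=0.8):
        """Visitors per arm to detect the given relative lift on the given base rate."""
        effect = proportion_effectsize(base_rate * (1 + relative_lift), base_rate)
        return NormalIndPower().solve_power(effect_size=effect, alpha=alpha, power=power)

    top = visitors_needed(0.07)      # e.g. visit -> signup, ~7% base rate
    sale = visitors_needed(0.005)    # e.g. visit -> sale, ~0.5% base rate
    print(f"per arm: top of funnel ~{top:,.0f}, end sales ~{sale:,.0f} "
          f"({sale / top:.0f}x more traffic for the same test)")

With these made-up rates the multiple comes out around 15x; with realistic numbers for a given site it can easily be more, which is what makes "just measure sales" so slow when traffic is limited.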