> The turnaround time also imposes a welcome pressure on experimental design. People are more likely to think carefully about how their controls work and how they set up their measurements when there's no promise of immediate feedback.<p>This reads like a cranky rationalization for the absence of a fairly ordinary system.<p>Sure, you shouldn't draw conclusions about potentially small effects from < 24 hours of data. But if you've done any real-world A/B testing, let alone had any statistics training, you already know that.<p>What it actually means is that you can't tell whether an experiment launch has gone badly wrong. Detecting small effect sizes is one thing, but you can surely tell in short order whether you've badly broken something.<p>Far from encouraging people to be careful, this can make them risk-averse for fear of breaking something. And it slows down the process of running experiments a lot. Every time you want to launch something, you probably have to launch it at a very small % of traffic, then wait a full 24-36 hours to know whether you've broken anything, then increase the experiment size. Compare that to a semi-realtime system: launch, wait 30 minutes, did we break anything? No? OK, crank up the group sizes. Without semi-realtime, you effectively add two full days, multiplied by 1 + the probability of a botched launch requiring a relaunch (compounding, of course), to the development time of everything you want to try. Plus, if you have the confidence that you haven't broken anything, you can use much larger experiment sizes, so you get significant results much faster.
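The compounding delay can be sketched with a quick back-of-the-envelope calculation; the function name, check durations, and relaunch probability below are illustrative assumptions, not figures from any real system:

```python
def expected_delay(check_hours: float, p_relaunch: float) -> float:
    """Expected hours spent validating one launch, with retries compounding.

    Each failed launch forces a relaunch, so expected attempts form the
    geometric series 1 + p + p^2 + ... = 1 / (1 - p).
    """
    attempts = 1.0 / (1.0 - p_relaunch)
    return check_hours * attempts

# Daily-batch pipeline: ~36h to confirm a launch isn't broken, 20% relaunch rate
slow = expected_delay(36, 0.2)    # 45.0 hours per experiment
# Semi-realtime pipeline: ~30-minute sanity check, same relaunch rate
fast = expected_delay(0.5, 0.2)   # 0.625 hours per experiment
```

Under these assumed numbers the batch pipeline costs nearly two extra days of wall-clock time per experiment, before you even start collecting data at full traffic.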