Most -- nearly all -- benchmarking tools like this work from a normality assumption, i.e. they assume that results follow the normal distribution, or are close to it. Some do this on blind faith; others argue from the CLT that "with infinite samples, the mean is normally distributed, so surely it must also be with a finite number of samples, at least a little?"

In fact, performance numbers (latencies) often follow a heavy-tailed distribution. For these, you need a literal shitload of samples to get even an approximately normal mean, and the sample mean, the sample variance, the sample centiles -- they all severely underestimate the true values.

What's worse is when these tools start to remove "outliers". With a heavy-tailed distribution, the majority of samples contribute very little to the expectation. The strongest signal is in the extreme values -- in exactly the stuff that gets thrown out. The junk that's left is the noise, the stuff that doesn't tell you much about what you're dealing with.

I stand firm in my belief that unless you can prove how the CLT applies to your input distributions, you should not assume normality.

And if you don't know what you are doing, stop reporting means. Stop reporting centiles. Report the maximum value. That's a really boring thing to hear, but it is nearly always statistically and analytically meaningful, so it is a good default.
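
To make that concrete, here's a minimal simulation sketch (my own illustration, not taken from any particular tool; the distribution and parameters are arbitrary). It draws latency-like samples from a classical Pareto distribution with tail index 1.1, whose true mean is 11, and looks at what a typical "benchmark run" of n samples would report:

    # Illustrative only: Pareto with alpha = 1.1 (heavy-tailed; mean exists and
    # equals 11, variance is infinite) standing in for benchmark latencies.
    import numpy as np

    rng = np.random.default_rng(0)
    alpha = 1.1
    true_mean = alpha / (alpha - 1)   # = 11 for scale 1

    for n in (100, 1_000, 10_000, 100_000):
        means, top1_share = [], []
        for _ in range(200):                   # 200 independent "benchmark runs"
            x = rng.pareto(alpha, n) + 1.0     # classical Pareto(alpha), scale 1
            means.append(x.mean())
            k = max(1, n // 100)               # top 1% of samples
            top1_share.append(np.sort(x)[-k:].sum() / x.sum())
        print(f"n={n:>7}  true mean={true_mean:.1f}  "
              f"median sample mean={np.median(means):5.2f}  "
              f"top-1% share of total={np.median(top1_share):.0%}")

Run it and you should see the typical sample mean land well below 11 at small n, and at every n a large share of the total carried by the top 1% of samples -- precisely the ones an outlier filter would discard.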