
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

Show HN: Hyperfine – a command-line benchmarking tool

96 points | by sharkdp | over 5 years ago

5 comments

kqr | over 5 years ago
Most -- nearly all -- benchmarking tools like this work from a normality assumption, i.e. they assume that results follow the normal distribution, or something close to it. Some do this on blind faith; others argue from the CLT that "with infinite samples, the mean is normally distributed, so surely it must also be with a finite number of samples, at least a little?"

In fact, performance numbers (latencies) often follow a heavy-tailed distribution. For these, you need a literal shitload of samples to get even a slightly normal mean. For these, the sample mean, the sample variance, the sample centiles -- they all severely underestimate the true values.

What's worse is when these tools start to remove "outliers". With a heavy-tailed distribution, the majority of samples don't contribute very much at all to the expectation. The strongest signal is found in the extreme values -- in the very stuff that is thrown out. The junk that's left is the noise, the stuff that doesn't tell you very much about what you're dealing with.

I stand firm in my belief that unless you can prove how the CLT applies to your input distributions, you should not assume normality.

And if you don't know what you are doing, stop reporting means. Stop reporting centiles. Report the maximum value. That's a really boring thing to hear, but it is nearly always statistically and analytically meaningful, so it is a good default.
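A quick stdlib-only simulation illustrates the underestimation point; the Pareto shape and sample sizes here are arbitrary choices for demonstration, not taken from the comment:

```python
import random

random.seed(42)

# Pareto with shape alpha = 1.5 (scale x_m = 1): heavy-tailed, with a
# finite mean but infinite variance -- a plausible stand-in for latencies.
ALPHA = 1.5
TRUE_MEAN = ALPHA / (ALPHA - 1)  # = 3.0 for x_m = 1

def sample_mean(n):
    """Mean of n draws from the heavy-tailed distribution."""
    return sum(random.paretovariate(ALPHA) for _ in range(n)) / n

# In most trials the sample mean falls short of the true mean, because
# the expectation is dominated by rare extreme values.
trials = 1000
under = sum(sample_mean(100) < TRUE_MEAN for _ in range(trials))
print(f"true mean: {TRUE_MEAN}")
print(f"trials where the sample mean underestimates: {under}/{trials}")
```

With 100 samples per trial, well over half of the trials report a mean below the true value; cranking up the sample count only slowly closes the gap.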
sharkdp | over 5 years ago
I submitted "hyperfine" 1.5 years ago, when it had just come out. Since then, the program has gained functionality (statistical outlier detection, result export, parametrized benchmarks) and maturity.

Old discussion: https://news.ycombinator.com/item?id=16193225

Looking forward to your feedback!
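For reference, a parametrized benchmark with JSON export looks roughly like this (the `./my_prog` binary and the thread counts are hypothetical; flags as documented in hyperfine's help):

```shell
# Define the benchmark invocation: warm up 3 times, then sweep a
# {threads} parameter from 1 to 8 and write results to results.json.
run_benchmark() {
  hyperfine --warmup 3 \
    --parameter-scan threads 1 8 \
    --export-json results.json \
    './my_prog --threads {threads}'
}

# Only run when hyperfine is actually installed.
command -v hyperfine >/dev/null && run_benchmark || echo "hyperfine not installed; skipping"
```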
mplanchard | over 5 years ago
I started using hyperfine a few months ago on a colleague's recommendation, and I really like it.

In the past, I've cobbled together quick bash pipelines to run `time` in a loop, awk out the timings, and compute averages, but it was always a pain. Hyperfine has a great interface and really useful reports. It actually reminds me quite a bit of Criterion, the benchmarking suite for Rust.

I also use fd and bat extensively, so thanks for making such useful tools!
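The kind of ad-hoc loop being replaced looks something like this (a minimal sketch; assumes GNU `date` for nanosecond timestamps and a `sleep` that accepts fractional seconds):

```shell
# Stand-in for the command being benchmarked.
cmd() { sleep 0.05; }

# Time the command five times and average the wall-clock seconds with awk.
mean=$(for i in 1 2 3 4 5; do
  start=$(date +%s.%N)
  cmd
  end=$(date +%s.%N)
  echo "$start $end"
done | awk '{ total += $2 - $1 } END { printf "%.3f", total / NR }')

echo "mean wall time: ${mean}s"
```

No warmup, no outlier handling, no variance -- exactly the gaps hyperfine fills.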
breck | over 5 years ago
This is great! I was looking for something like this a year ago for benchmarking imputation scripts as part of a paper. It would have been awesome to use. Will keep it in mind for the future.
Myrmornis | over 5 years ago
hyperfine is really nice!

FWIW, I wrote a rough first version of a tool that runs a hyperfine benchmark over all commits in a repo and plots the results, in order to see which commits cause performance changes: https://github.com/dandavison/chronologer
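The core of such a tool can be sketched in a few lines of shell (assumes a clean work tree, a hypothetical `./bench` command, and hyperfine on the PATH; the commit range is arbitrary):

```shell
# Benchmark each of the last 10 commits, oldest first, writing one
# JSON result file per commit for later plotting.
bench_history() {
  for commit in $(git rev-list --reverse HEAD~10..HEAD); do
    git checkout --quiet "$commit"
    hyperfine --export-json "results-$commit.json" './bench'
  done
  # Return to the branch we started on.
  git checkout --quiet -
}
```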