Nifty! A step up over just using time.
Unfortunately, timing things in general isn't going to be a very effective benchmark.<p>Without understanding what a program is doing, you don't know what is influencing your results, and you have no real idea how things will differ when you go to use them in the "real world". Is one process faster when single-threaded or at a low core count, while another is massively parallel and loses out until scaled higher? Are your commands testing the thing you think they're testing? What is your limiting factor? If you don't know why the results are what they are, rather than higher or lower, you don't have a good benchmark.<p><a href="http://www.brendangregg.com/activebenchmarking.html" rel="nofollow">http://www.brendangregg.com/activebenchmarking.html</a> / <a href="http://www.brendangregg.com/ActiveBenchmarking/bonnie++.html" rel="nofollow">http://www.brendangregg.com/ActiveBenchmarking/bonnie++.html</a>
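As a minimal illustration of that active-benchmarking approach (just a sketch; assumes the sysstat tools are installed, and the tar command is only a placeholder workload): run the benchmark in one terminal and watch what the system is actually doing in another, instead of only reading the final number.<p><pre><code> # terminal 1: the benchmark itself (placeholder workload)
 time tar xzf linux.tar.gz

 # terminal 2: observe where the time actually goes while it runs
 iostat -xz 1     # per-device utilization, queue size, latency
 mpstat -P ALL 1  # per-CPU usage: CPU-bound, or waiting on I/O?
</code></pre>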
Nice, that looks very handy, especially the analysis of multiple runs.<p>Just today I was playing with /proc/sys/vm/drop_caches; I'd never used it before, and it makes a massive difference when reading from a spinning disk!<p>For example, reading tens of thousands of files (using 8 processes) would take me<p><pre><code> real 5m33.048s
</code></pre>
Then, if I ran the command again without flushing the cache, it'd take:<p><pre><code> real 0m6.502s</code></pre>
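In case it's useful to anyone else, a sketch of the flush step between runs (writing 3 drops the page cache plus dentries and inodes; sync first so dirty pages get written back; ./read-files.sh is a hypothetical stand-in for the actual workload):<p><pre><code> sync
 echo 3 | sudo tee /proc/sys/vm/drop_caches
 time ./read-files.sh
</code></pre>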
When given multiple commands, can it interleave executions instead of benchmarking them one after the other?<p>This would be useful when comparing two similar commands: interleaving makes it less likely that e.g. a load spike will unfavorably affect only one of them, or that thermal throttling will penalize only the last command.
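In the meantime, a crude way to interleave by hand (just a sketch; assumes GNU time, and cmd_a/cmd_b are placeholders for the commands under test):<p><pre><code> # alternate the two commands so ambient load hits both roughly equally
 for i in $(seq 1 10); do
     /usr/bin/time -a -o a.times ./cmd_a
     /usr/bin/time -a -o b.times ./cmd_b
 done
 # then aggregate a.times and b.times yourself
</code></pre>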
Tangentially related, look into rt-tests (from linux-rt) for scheduler latency tests.<p>See: <a href="https://wiki.archlinux.org/index.php/Realtime_kernel" rel="nofollow">https://wiki.archlinux.org/index.php/Realtime_kernel</a><p>The effect of the linux-rt patchset is dramatic.
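For example, cyclictest from rt-tests measures scheduler wakeup latency; a typical invocation (from memory, so double-check the man page) is something like:<p><pre><code> # one measurement thread per core, SCHED_FIFO priority 98, memory locked
 sudo cyclictest --smp -p98 -m
</code></pre>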