I have stumbled a few times, in many places, on the famous "latency numbers every programmer should know". From what I understand, they were first popularised by Peter Norvig. [0]

My question is: how would one go about measuring these kinds of numbers? Is there a way to actually use code to make these measurements? Even better would be a public repository that one could use to measure some of them.

I would also be happy with a repository that measures just two of those: read 1 MB sequentially from memory, and read 1 MB sequentially from disk.

[0]: http://norvig.com/21-days.html#answers
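For the two numbers you mention, a minimal sketch in C is below (not from any existing repository; the file name "testfile" and the compile command are my own assumptions). Caveat: the memory loop must actually use the data or the compiler will optimize it away, and the disk read will really measure the OS page cache unless you create the file beforehand and drop the cache (or use O_DIRECT).

```c
/* Sketch: time a sequential 1 MB read from memory and from disk.
 * Compile with e.g. `gcc -O2 read_1mb.c -o read_1mb`.
 * Create the test file first, e.g.
 *   dd if=/dev/urandom of=testfile bs=1M count=1
 * and drop the page cache if you want disk numbers rather than RAM. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define MB (1024 * 1024)

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    /* --- read 1 MB sequentially from memory --- */
    unsigned char *buf = malloc(MB);
    memset(buf, 1, MB);              /* touch pages so they are resident */

    volatile unsigned long sum = 0;  /* volatile: keep the loop from being elided */
    double t0 = now_sec();
    for (size_t i = 0; i < MB; i++)
        sum += buf[i];
    double t1 = now_sec();
    printf("memory: 1 MB sequential read: %.0f us (checksum %lu)\n",
           (t1 - t0) * 1e6, sum);

    /* --- read 1 MB sequentially from disk --- */
    FILE *f = fopen("testfile", "rb");   /* placeholder file name */
    if (!f) { perror("fopen testfile"); free(buf); return 1; }

    t0 = now_sec();
    size_t got = fread(buf, 1, MB, f);
    t1 = now_sec();
    printf("disk:   %zu bytes sequential read: %.0f us\n",
           got, (t1 - t0) * 1e6);

    fclose(f);
    free(buf);
    return 0;
}
```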
Some come from datasheets (HDD seek), some from pure mathematical calculation (network throughput), and some are of the kind "today it might be true for a specific configuration, but in 20 minutes it will be just a random outdated number" (branch misprediction or fetching from memory - these are extrapolated from the number of CPU cycles needed to execute them, which also comes from datasheets). And of course there are some that can easily be measured with a stopwatch (or code, if you are that kind of person), like network latency.
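For the "measure it with code" case like network latency, one rough approach is to time a TCP connect(), which takes roughly one round trip (SYN / SYN-ACK). A sketch, with host and port as placeholders:

```c
/* Sketch: estimate network round-trip time by timing a TCP connect().
 * Compile with e.g. `gcc -O2 tcp_rtt.c -o tcp_rtt`; run as
 *   ./tcp_rtt some-host 80 */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(int argc, char **argv) {
    const char *host = argc > 1 ? argv[1] : "example.com";  /* placeholder host */
    const char *port = argc > 2 ? argv[2] : "80";

    struct addrinfo hints = {0}, *res;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0) {
        fprintf(stderr, "could not resolve %s\n", host);
        return 1;
    }

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);

    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    if (connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        perror("connect");
        return 1;
    }
    clock_gettime(CLOCK_MONOTONIC, &b);

    double ms = (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
    printf("TCP connect to %s:%s took %.2f ms (~1 round trip)\n", host, port, ms);

    close(fd);
    freeaddrinfo(res);
    return 0;
}
```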
bpftrace: https://github.com/iovisor/bpftrace

For real-world examples, see the tools directory, and don't miss the reference guide: https://github.com/iovisor/bpftrace/blob/master/docs/reference_guide.md
On this topic, this is also great: http://computers-are-fast.github.io