Knowing these latency numbers is essential for writing efficient code. But with modern out-of-order processors, it's often difficult to gauge how much the latency will hurt throughput without closer analysis. I'd love it if the math for this analysis and the associated hardware limits were also better known. Little's Law is a fundamental result of queuing theory, and the original article is very readable: http://web.mit.edu/sgraves/www/papers/Little's%20Law-Published.pdf

It says that for a system in a stable state, "occupancy = latency x throughput". Let's apply this to one of the latencies in the table: main memory access. An obvious question might be "How many lookups from random locations in RAM can we do per second?" From the formula, it looks like we can calculate this (the 'throughput') if we know both the 'latency' and the 'occupancy'.

We see from the table that the latency is 100 ns. In reality, it's going to vary from ~50 ns to 200 ns depending on whether we are reading from an open row, whether the TLB needs to be updated, and the offset of the desired data from the start of the cache line. But 100 ns is a fine estimate for 1600 MHz DDR3.

But what about the occupancy? It's essentially a measure of concurrency, and is equal to the number of lookups that can be 'in flight' at a time. Knowing the limiting factor for this is essential to being able to calculate the throughput. But oddly, knowledge of what current CPUs are capable of in this department doesn't seem to be nearly as common as knowledge of the raw latency.

Happily, we don't need to know all the limits on concurrency for memory lookups, only the one that limits us first. This usually turns out to be the number of outstanding L1 misses, which in turn is limited by the number of Line Fill Buffers (LFBs) or Miss Status Handling Registers (MSHRs). (Could someone explain the difference between these two?)

Modern Intel chips have about 10 of these per core, which means that each core is limited to having about 10 requests to memory in flight at a time. Plugging that into Little's Law:

    "occupancy = latency x throughput"
    10 lookups = 100 ns x throughput
    throughput = 10 lookups / 100 ns
    throughput = 100,000,000 lookups/second

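For what it's worth, the same arithmetic is easy to redo for other latencies, concurrency limits, or clock speeds. Here's a throwaway C sketch with the estimates above plugged in (the 100 ns, 10 buffers, and 3.5 GHz figures are the guesses from this comment, not measured values for any particular chip):

    #include <stdio.h>

    int main(void) {
        /* Rough estimates, not measurements of any particular chip:
         * ~100 ns DRAM latency, ~10 line fill buffers per core, 3.5 GHz clock. */
        double latency_ns = 100.0;
        double occupancy  = 10.0;
        double clock_ghz  = 3.5;

        /* Little's Law: occupancy = latency * throughput */
        double lookups_per_sec   = occupancy / (latency_ns * 1e-9);
        double cycles_per_lookup = (clock_ghz * 1e9) / lookups_per_sec;

        printf("%.0f lookups/second per core\n", lookups_per_sec);
        printf("%.0f cycles of budget per lookup\n", cycles_per_lookup);
        return 0;
    }
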
At 3.5 GHz, this means you have a budget of about 35 CPU cycles to spend on each lookup. Along with the raw latency, this throughput is a good maximum to keep in mind.

It's often difficult to sustain this rate, though, since it depends on keeping the full number of memory lookups in flight at all times. If you have any failed branch predictions, the lookups in progress will be restarted and your throughput will drop a lot. To achieve the full potential of 100,000,000 lookups per second per core, you either need to be branchless or perfectly predicted.
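If you'd rather see the concurrency limit than take it on faith, one way is to compare a dependent pointer chase (only one miss can be in flight, so you pay the full latency on every lookup) against ten independent chases interleaved (the out-of-order core can keep ~10 misses in flight). The sketch below makes a pile of arbitrary choices (a 512 MB table, rand() plus Sattolo's algorithm to build one big random cycle, clock_gettime() for timing), so treat it as an illustration of the idea rather than a proper benchmark:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N     (1u << 26)   /* 64M slots * 8 bytes = 512 MB, far larger than cache */
    #define STEPS (1u << 20)

    /* Serial pointer chase: each load's address depends on the previous
     * load's result, so only one cache miss can be in flight at a time. */
    static uint64_t chase1(const uint64_t *next, uint64_t i, size_t steps) {
        while (steps--) i = next[i];
        return i;
    }

    /* Ten interleaved chases: the chains don't depend on each other, so the
     * core can keep ~10 misses in flight, which is where the Little's Law
     * bound comes from. */
    static uint64_t chase10(const uint64_t *next, const uint64_t *starts, size_t steps) {
        uint64_t idx[10];
        for (int k = 0; k < 10; k++) idx[k] = starts[k];
        while (steps--)
            for (int k = 0; k < 10; k++)
                idx[k] = next[idx[k]];
        uint64_t sum = 0;
        for (int k = 0; k < 10; k++) sum += idx[k];
        return sum;
    }

    static double seconds(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    int main(void) {
        uint64_t *next = malloc(N * sizeof *next);
        if (!next) return 1;

        /* Build one big random cycle (Sattolo's algorithm) so every step
         * lands on a random cache line. rand() is good enough for a sketch. */
        for (uint64_t i = 0; i < N; i++) next[i] = i;
        for (uint64_t i = N - 1; i > 0; i--) {
            uint64_t j = rand() % i;   /* j in [0, i) */
            uint64_t t = next[i]; next[i] = next[j]; next[j] = t;
        }

        double t0 = seconds();
        uint64_t a = chase1(next, 0, 10 * STEPS);           /* 10*STEPS lookups */
        double t1 = seconds();
        uint64_t starts[10];
        for (int k = 0; k < 10; k++) starts[k] = next[k];
        uint64_t b = chase10(next, starts, STEPS);          /* also 10*STEPS lookups */
        double t2 = seconds();

        printf("serial: %.1f ns/lookup (%llu)\n",
               (t1 - t0) / (10.0 * STEPS) * 1e9, (unsigned long long)a);
        printf("10-way: %.1f ns/lookup (%llu)\n",
               (t2 - t1) / (10.0 * STEPS) * 1e9, (unsigned long long)b);
        free(next);
        return 0;
    }

If the limit really is around 10 outstanding L1 misses per core, the ten-way version should land somewhere near the 100,000,000 lookups/second bound while the serial chase stays near the raw ~100 ns per lookup. Both inner loops are trivially predicted, which is what lets them keep the buffers full.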