Sharing this paper because effective workload simulation is a necessary step toward the Holy Grail: workload-aware, self-tuning caches. In plain English, that means a cache that can adjust its parameters so that it is most effective for a given workload. Ideally, you want all of that without human supervision or pre-trained models, and in its fully accomplished form the cache can provision resources when it is worth it. Note that effectiveness here is traditionally measured in terms of hit:miss ratio, but other metrics exist too.<p>That's part of the broader idea of "intelligent systems" proactively adjusting themselves (resources, internal structure, overall policies) to the workload they are fed in order to optimize performance or cost-effectiveness.<p>The most basic caches have static eviction policies (e.g. LRU/LFU), there has been some work on adaptive eviction policies (e.g. ARC), and more recently on a clever combination of adjusting eviction policies and <i>admission policies</i> (e.g. TinyLFU). The overarching goal of this line of research is to find a "silver bullet" cache which performs optimally (or close to it) regardless of R/W distributions by switching between different admission and eviction policies.
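<p>To make the eviction-vs-admission distinction concrete, here is a minimal Python sketch of an LRU cache with a frequency-gated admission filter in the spirit of TinyLFU. The class name and the use of an exact Counter are illustrative assumptions: real TinyLFU uses an approximate, aging frequency sketch (count-min style), not exact counts.

```python
from collections import OrderedDict, Counter

class AdmissionLRUCache:
    """Toy LRU cache with a frequency-based admission filter.

    Sketch of the TinyLFU idea: on a miss with a full cache, a new key
    is admitted only if it has been seen more often than the LRU victim,
    so a cold key cannot displace a proven-hot one.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # key -> value, in LRU order
        self.freq = Counter()        # exact frequency estimate (illustrative)

    def get(self, key):
        self.freq[key] += 1
        if key in self.store:
            self.store.move_to_end(key)  # mark as recently used
            return self.store[key]
        return None

    def put(self, key, value):
        self.freq[key] += 1
        if key in self.store:
            self.store[key] = value
            self.store.move_to_end(key)
            return
        if len(self.store) >= self.capacity:
            victim = next(iter(self.store))  # LRU eviction candidate
            # Admission policy: reject the newcomer if it is no hotter
            # than the key it would evict.
            if self.freq[key] <= self.freq[victim]:
                return
            del self.store[victim]
        self.store[key] = value
```

With a plain LRU, inserting a one-off key always evicts something; here, a key seen once is rejected if the would-be victim has been accessed at least as often.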