Cache Modeling and Optimization Using Miniature Simulations [pdf]

30 points by erwan over 6 years ago

1 comment

erwan over 6 years ago
Sharing this paper because effective workload simulation is a necessary step toward the Holy Grail: workload-aware, self-tuning caches. In plain English, that means a cache that can adjust its own parameters so that it is most effective for a given workload. Ideally, you want all of that without human supervision or pre-trained models, and in its fully realized form the cache can even provision additional resources when doing so is worthwhile. Note that effectiveness here is traditionally measured as the hit:miss ratio, but other metrics exist too.

That's part of the broader idea of "intelligent systems" proactively adjusting themselves (resources, internal structure, overall policies) to the workload they are fed, in order to optimize performance or cost-effectiveness.

The most basic caches have static eviction policies (e.g. LRU/LFU); there has been some work on adaptive eviction policies (e.g. ARC), and more recently on cleverly combining adaptive eviction policies with admission policies (e.g. TinyLFU). The overarching goal of this line of research is a "silver bullet" cache that performs optimally (or close to it) regardless of R/W distributions, by switching between different admission and eviction policies.
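To make the hit:miss ratio metric concrete, here is a minimal sketch (not from the paper or the comment above; all names, capacities, and workload parameters are illustrative): it simulates a plain LRU cache over a synthetic skewed key distribution and reports the hit ratio at a few cache sizes. A self-tuning cache would, in effect, try to maximize this number for whatever workload it is actually serving.

    # Minimal sketch: hit ratio of a static LRU cache on a synthetic skewed workload.
    import random
    from collections import OrderedDict

    def lru_hit_ratio(trace, capacity):
        cache = OrderedDict()            # keys ordered by recency of use
        hits = 0
        for key in trace:
            if key in cache:
                hits += 1
                cache.move_to_end(key)   # refresh recency on a hit
            else:
                cache[key] = None
                if len(cache) > capacity:
                    cache.popitem(last=False)  # evict the least recently used key
        return hits / len(trace)

    # Zipf-like workload: a few hot keys, a long tail of cold ones (illustrative).
    random.seed(0)
    keys = list(range(10_000))
    weights = [1.0 / (k + 1) for k in keys]
    trace = random.choices(keys, weights=weights, k=100_000)

    for capacity in (100, 1_000, 5_000):
        print(capacity, round(lru_hit_ratio(trace, capacity), 3))

Swapping in a different eviction or admission policy (or letting the cache pick one per workload, as the adaptive schemes mentioned above do) changes only the body of the loop; the hit-ratio measurement stays the same.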