FIFO can be Better than LRU [pdf]

93 points · by 1a1a11a · almost 2 years ago

12 comments

ramses0 · almost 2 years ago

Rough summary:

Start with a dumb cache using "FIFO" (first in / first out).

Keep track of anything that gets "hits" while in the cache (a simple boolean/bitmap instead of complicated locks/data structures).

Re-insert "hit" items when they would normally be evicted/age out (Lazy Promotion).

BTW, also have a mini-cache in front of the main cache (10% of cache size) which tries to "drop off quickly" (effectively: an item must get a "hit" within its first 10% of being cached).

BTW, also keep track of a "ghost cache" ("hits only", not contents??) for the duration of the overall FIFO cache, and use that to guide eviction/re-insertion. I'm a little unclear on this aspect, but it seems like an "obvious in retrospect" set of guidance.
vlovich123 · almost 2 years ago

I wonder why all these papers ignore comparison against W-TinyLFU.

https://github.com/ben-manes/caffeine/wiki/Efficiency shows that it really outperforms ARC as well, and they also have an optimal oracle version that they evaluate against to show how much room is left (admittedly, the oracle version implies you're picking some global criterion to optimize, but that's trickier when in reality there are multiple axes along which to optimize and you can't simultaneously do well across all of them).

The lack of evaluation of the cache strategy against shifting workloads is also problematic.
singron · almost 2 years ago

Beating plain LRU isn't very interesting, but they also evaluated a bunch of other algorithms (e.g. ARC) and concluded that it performed better than those as well.

I know the ARC paper discusses that other algorithms are often better if properly tuned, although ARC is usually consistently decent in a variety of situations without tuning. It would be awesome to have a new algorithm in that space.
1a1a11a · almost 2 years ago

A recent study on over 5000 (key-value, block, object) cache traces from 2007 to 2020 shows that FIFO-Reinsertion is not only faster and more scalable but also more efficient (it has a lower miss ratio). Maybe it is time to drop LRU from our classrooms?
anotherhue · almost 2 years ago

Might I also recommend: Bryan Cantrill on "ARC: A Self-Tuning, Low Overhead Replacement Cache" [PWL SF] 10/2017

https://www.youtube.com/watch?v=F8sZRBdmqc0
ojosilva · almost 2 years ago

I've spent the last month or so working on a write-behind cache in both Rust and Zig, with a RAM + disk component, similar to what an OS kernel's memory page swap does, but at the app level and key-value oriented.

My experience trying out cache algorithms is that they are all very generic; cache benchmarks are typically based on random distributions or on web-request datasets (Twitter, CDNs, ...) that may not match your use case. Mine is about caching data stream transformations with specific ORDER BYs. Hits may come very far apart in time, and LFU ended up working better for me. Your choice of eviction policy is also very important, e.g. by number of items or by RAM use (my case). So don't go running to the latest "benchmark-proven" cache algorithm paper without weighing your specific needs.
NovaX · almost 2 years ago

The LIRS papers do a good job of taking different workload patterns, such as loops and zig-zags, and mixing multiple patterns together to validate that their algorithm is robustly good as a general-purpose cache. This work seems to take traces with similar workload patterns, distinguished mainly by their excessive size, to assert the algorithm's quality. When papers do this it looks good in an academic sense, but it is frustrating from an industry perspective, because the workload may not be known upfront, or one must cherry-pick an algorithm for a bespoke, consistent pattern. I find it more interesting when algorithm designers try to stress their work and discuss how/why it fails or succeeds. That gives more confidence to those of us who want to implement and use the work in a practical setting.
vlasky · almost 2 years ago

I wonder how difficult it would be to enhance Redis to support FIFO with Lazy Promotion and Quick Demotion.
1a1a11a · almost 2 years ago

Slides here: https://jasony.me/slides/hotos23-qdlp.pdf
tedunangst · almost 2 years ago

Lazy promotion and quick demotion sound similar to what's done in 2Q. https://www.vldb.org/conf/1994/P439.PDF
almost_usual · almost 2 years ago

This just sounds like a variation of 2Q?
flopriore · almost 2 years ago

What about Belady's anomaly?