
FIFO queues are all you need for cache eviction

38 points | by azeemba | over 1 year ago

4 comments

avidiax · over 1 year ago
> Can we tune the adaptive algorithms to work better?

> We have also experimented with using an adaptive algorithm to change the size of FIFO queues in S3-FIFO. The results show that using an adaptive algorithm can improve the tail performance, but degrade overall performance. We find that tuning the adaptive algorithm is very challenging.

> In fact, adaptive algorithms all have many parameters. For example, queue resizing requires several parameters, e.g., the frequency of resizing, the amount of space moved each time, the lower bound of queue sizes, and the threshold for triggering resizing.

> Besides the many hard-to-tune parameters, adaptive algorithms adapt based on observation of the past. However, the past may not predict the future. We find that small perturbations in the workload can cause the adaptive algorithm to overreact. It is unclear how to balance between under-reaction and overreaction without introducing more parameters. Moreover, some adaptive algorithms implicitly assume that the miss ratio curve is convex, because following the gradient direction leads to the global optimum. However, the miss ratio curves of scan-heavy workloads are often not convex.

Maybe this is where further research should go.

* Can we reduce the dimensionality of the parameters for more complicated algorithms? That would mean taking *n* parameters and mapping them to 1 or 2 metaparameters that are imperfect, but maintain a continuous and near-linear performance tradeoff throughout their range. Then adaptive algorithms are easier to reason about.

* Knowledge of the past is both a shortcoming and a potential strength for adaptation. For example: what if we can learn a weight for every hour of the day, every day of the week, every day of the month, the name of every job, etc.? This gives you fast reaction based on history, and gradient ascent can handle the rest.
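The metaparameter idea in the first bullet could be sketched, very loosely, as collapsing the several resizing knobs the quoted passage lists into a single dial. Everything below (the function name, the ranges, the mapping formulas) is a hypothetical illustration of the comment's proposal, not anything from the paper:

```python
def adapt_params(aggressiveness: float) -> dict:
    """Map ONE metaparameter in [0, 1] to the several queue-resizing knobs
    the paper mentions: how often to resize, how much space to move, the
    lower bound on queue size, and the trigger threshold.

    All concrete numbers here are illustrative assumptions. The point is
    that each knob varies continuously and monotonically with the single
    dial, so the tradeoff stays easy to reason about.
    """
    a = min(max(aggressiveness, 0.0), 1.0)  # clamp to [0, 1]
    return {
        # More aggressive -> resize more often (fewer requests between resizes).
        "resize_every_n_requests": int(100_000 * (1 - 0.9 * a)),
        # More aggressive -> move a larger fraction of cache space per resize.
        "resize_step_fraction": 0.01 + 0.09 * a,
        # A fixed floor so no queue ever shrinks to nothing.
        "min_small_queue_fraction": 0.01,
        # More aggressive -> react to smaller changes in miss ratio.
        "trigger_miss_ratio_delta": 0.005 + 0.05 * (1 - a),
    }
```

A tuner then only searches a one-dimensional space instead of four coupled ones, at the cost of losing configurations the mapping cannot express.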
colonelxc · over 1 year ago
From the website (https://s3fifo.com/), it claims that it needs no locking (backing the scalability claims). This seems like an important part of their work too, unless I've missed some obvious trick that everyone uses. Naively, I would think that you can't update a hash table (to find the cache items efficiently?) and the queues at the same time without a lock. They surely aren't doing a linear search through the queue looking for a match.
withinboredom · over 1 year ago
This is quite brilliant, and it was interesting to see some similarities to FASTER, which uses a similar-ish method for a WAL.
kccqzy · over 1 year ago
This reminds me of generational garbage collectors. They typically have a nursery where new objects are allocated, and a small number of generations, whereby every object that survives a garbage collection is promoted to an older generation. Here we also essentially have two queues playing that role.
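The generational structure described above can be sketched roughly as follows: a small FIFO acts as the "nursery", objects that are re-referenced before eviction get promoted to a main FIFO, and a ghost FIFO remembers recently evicted keys so returning items skip the nursery. This is a loose reading of S3-FIFO's published design; the class name, method details, and defaults here are my own simplifications and may differ from the authors' implementation:

```python
from collections import OrderedDict, deque


class S3FIFOSketch:
    """Illustrative sketch of the S3-FIFO idea (not the reference code):
    small FIFO = nursery for new keys, main FIFO = promoted keys,
    ghost FIFO = keys recently evicted from the small queue."""

    def __init__(self, capacity: int, small_ratio: float = 0.1):
        self.small_cap = max(1, int(capacity * small_ratio))
        self.main_cap = capacity - self.small_cap
        self.small = deque()        # FIFO of keys, insertion order only
        self.main = deque()
        self.ghost = OrderedDict()  # fixed-size FIFO of evicted keys
        self.data = {}              # key -> value (resident items only)
        self.freq = {}              # key -> access count, capped at 3

    def get(self, key):
        if key in self.data:
            self.freq[key] = min(self.freq[key] + 1, 3)
            return self.data[key]
        return None

    def put(self, key, value):
        if key in self.data:
            self.data[key] = value
            self.freq[key] = min(self.freq[key] + 1, 3)
            return
        if key in self.ghost:
            # Seen recently: skip the nursery, insert into main directly.
            del self.ghost[key]
            self._evict_main()
            self.main.append(key)
        else:
            self._evict_small()
            self.small.append(key)
        self.data[key] = value
        self.freq[key] = 0

    def _evict_small(self):
        while len(self.small) >= self.small_cap:
            k = self.small.popleft()
            if self.freq[k] > 0:
                # Survived with a hit: promote to the main queue.
                self._evict_main()
                self.main.append(k)
                self.freq[k] = 0
            else:
                # One-hit wonder: drop it, but remember the key in ghost.
                del self.data[k], self.freq[k]
                self.ghost[k] = None
                if len(self.ghost) > self.main_cap:
                    self.ghost.popitem(last=False)

    def _evict_main(self):
        while len(self.main) >= self.main_cap:
            k = self.main.popleft()
            if self.freq[k] > 0:
                # "Lazy promotion": reinsert instead of evicting, decay freq.
                self.main.append(k)
                self.freq[k] -= 1
            else:
                del self.data[k], self.freq[k]
```

Note that both queues only ever append at the tail and pop at the head, which is what makes the FIFO-only structure attractive compared to LRU's list reordering on every hit.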