> Can we tune the adaptive algorithms to work better?<p>> We have also experimented with using an adaptive algorithm to change the size of the FIFO queues in S3-FIFO. The results show that an adaptive algorithm can improve tail performance but degrade overall performance. We find that tuning the adaptive algorithm is very challenging.<p>> Adaptive algorithms tend to have many parameters of their own. For example, queue resizing requires several, e.g., the frequency of resizing, the amount of space moved each time, the lower bound on queue sizes, and the threshold for triggering a resize.<p>> Besides the many hard-to-tune parameters, adaptive algorithms adapt based on observations of the past, and the past may not predict the future. We find that small perturbations in the workload can cause the adaptive algorithm to overreact, and it is unclear how to balance under-reaction against overreaction without introducing more parameters. Moreover, some adaptive algorithms implicitly assume that the miss ratio curve is convex, so that following the gradient leads to the global optimum. However, the miss ratio curves of scan-heavy workloads are often not convex.<p>Maybe this is where further research should go.<p>* Can we reduce the dimensionality of the parameters for more complicated algorithms? That would mean taking <i>n</i> parameters and mapping them to 1 or 2 metaparameters that are imperfect but maintain a continuous, near-linear performance tradeoff throughout their range. Adaptive algorithms would then be easier to reason about.<p>* Knowledge of the past is both a shortcoming and a potential strength for adaptation. For example: what if we learned a weight for every hour of the day, every day of the week, every day of the month, the name of every job, etc.? That gives fast reaction based on history, and gradient ascent can handle the rest.
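To make the parameter count concrete, here is a minimal sketch of the kind of queue-resizing loop the quoted answer describes. Everything below (the class, the ghost-hit counters, the default values) is hypothetical, invented to show where each of the four parameters enters the decision; it is not taken from any real S3-FIFO implementation.

```python
# Hypothetical sketch of adaptive queue resizing for an S3-FIFO-style cache.
# All names and defaults are illustrative, not from a real implementation.

class AdaptiveResizer:
    def __init__(self, total_size,
                 resize_interval=10_000,   # param 1: requests between resize decisions
                 step=0.01,                # param 2: fraction of total size moved per resize
                 min_fraction=0.05,        # param 3: lower bound on either queue's share
                 trigger_threshold=0.02):  # param 4: miss-ratio gap needed to act
        self.total = total_size
        self.small = int(0.1 * total_size)  # small FIFO starts at 10% of the cache
        self.interval = resize_interval
        self.step = step
        self.min_size = int(min_fraction * total_size)
        self.threshold = trigger_threshold
        self.requests = 0
        self.small_ghost_hits = 0  # misses a larger small queue would have caught
        self.main_ghost_hits = 0   # misses a larger main queue would have caught

    def record_miss(self, would_hit_small, would_hit_main):
        # Shadow counters estimate the benefit of growing each queue.
        if would_hit_small:
            self.small_ghost_hits += 1
        if would_hit_main:
            self.main_ghost_hits += 1

    def on_request(self):
        self.requests += 1
        if self.requests % self.interval:
            return  # only reconsider sizes every `resize_interval` requests
        gap = (self.small_ghost_hits - self.main_ghost_hits) / self.interval
        delta = int(self.step * self.total)
        if gap > self.threshold and self.small + delta <= self.total - self.min_size:
            self.small += delta  # grow the small queue at the main queue's expense
        elif gap < -self.threshold and self.small - delta >= self.min_size:
            self.small -= delta  # grow the main queue at the small queue's expense
        self.small_ghost_hits = self.main_ghost_hits = 0

    @property
    def main(self):
        return self.total - self.small
```

Even this toy version shows the tuning problem: the four knobs interact (a small interval with a large step oscillates; a high threshold with a small step never moves), and all of it reacts only to the window just observed.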
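The metaparameter idea in the first bullet could amount to a monotone mapping from one knob onto the underlying parameters, so that turning the single knob trades off responsiveness against stability smoothly. The parameter names and interpolation endpoints below are invented for illustration, not tuned values:

```python
# Hypothetical: collapse four resize knobs into one metaparameter.

def resize_params(aggressiveness):
    """Map a single knob in [0, 1] to four resize parameters.

    0.0 = conservative (rare, small, cautious moves); 1.0 = aggressive.
    Each parameter varies monotonically with the knob, which is what
    makes the single-metaparameter version easier to reason about.
    """
    a = min(max(aggressiveness, 0.0), 1.0)
    return {
        # resize rarely when conservative, often when aggressive
        "resize_interval": int(100_000 - 90_000 * a),
        # move more space per step when aggressive
        "step": 0.005 + 0.045 * a,
        # let queues shrink further when aggressive
        "min_fraction": 0.10 - 0.08 * a,
        # react to smaller miss-ratio gaps when aggressive
        "trigger_threshold": 0.05 - 0.045 * a,
    }
```

The mapping is deliberately lossy: most points in the original four-dimensional parameter space are unreachable, but the one dimension that remains has a predictable effect from end to end.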