We're actually facing an issue with our Ceph infrastructure in the 'upgrade' from FileStore to BlueStore: the loss of use of our SSDs.

We built our infrastructure with a bunch of hardware that has HDDs for bulk storage and an SSD off to the side for the async-I/O / intent-log stuff (i.e. the FileStore journal).

The problem is that BlueStore does not seem to have any use for off-to-the-side SSDs AFA(we)CT. So we're left with a bunch of hardware that may not be as performant under the new BlueStore world order.

The Ceph mailing list consensus seems to be "don't buy SSDs, but rather buy more spindles for more independent OSDs". That's fine for future purchases, but we have a whole bunch of gear designed for the Old Way of doing things. We could leave things be and continue using FileStore, but it seems the Path Forward is BlueStore.

Some of us do not need the speed of an all-SSD setup, but we want something a little faster than HDDs alone. We're running benchmarks now to see how much worse the latency is with BlueStore and no SSD, and whether that latency is good enough for us as-is.

Any new storage design that cannot handle a "hybrid" configuration combining HDDs and SSDs is silly IMHO.

I joked that we could tie the HDDs together in a ZFS zvol, with the SSD as the ZIL, and point the OSD(s) there.
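For what it's worth, the quick-and-dirty latency check we're doing is roughly the following (pool name, PG count, and durations are placeholders; rados bench prints average and max latency for each run):

    # throwaway pool for benchmarking (name and PG count are just examples)
    ceph osd pool create bench 64 64

    # 4K writes, 16 concurrent ops, 60 seconds; keep the objects for the read test
    rados bench -p bench 60 write -b 4096 -t 16 --no-cleanup

    # random reads against the objects written above
    rados bench -p bench 60 rand -t 16

    # clean up
    rados -p bench cleanup
    ceph osd pool delete bench bench --yes-i-really-really-mean-it

And the ZFS joke, spelled out as a sketch (device names and sizes are made up, and I'm not claiming this is a good idea: you'd be stacking ZFS write amplification under BlueStore, and the SLOG only helps with sync writes anyway):

    # pool of spinning disks with the SSD as the separate intent log (SLOG)
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd log /dev/nvme0n1p1

    # carve out a zvol to hand to Ceph
    zfs create -V 3T -o volblocksize=16k tank/osd0

    # point a BlueStore OSD at the zvol; ceph-volume should see it as an ordinary block device
    ceph-volume lvm create --data /dev/zvol/tank/osd0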