Layering things for legacy reasons is nothing new. Connecting flat digital displays through a DAC on the display adapter, a VGA cable, and an ADC in the monitor never made any sense whatsoever, yet many did it, and some people are even doing it today.
The MLC and SLC NAND trends in figure 1 are confusing me. Historically, wasn't SLC first? Yet the graph shows pricing for MLC back to 2001, and SLC back to only 2007-ish. It correctly shows that MLC is less expensive than SLC.<p>Maybe he didn't have old price data for SLC?
The problem is that most users (home & enterprise) just want things to work; they don't really care how they get there or whether it's maximally efficient.<p>It wouldn't be too hard to build a good filesystem that works over raw NAND flash, but it wouldn't work on older OSes or in the enterprise storage market, so there would be fewer buyers, it would cost more, no one would buy it, and it would never get made.<p>Even the enterprise storage folks just want the damn flash devices to work without having to do anything with them. It's taken to extremes sometimes: the flash vendors just do whatever they are told, since there is a large market in whatever the software-defined engineers want. Except the engineers mostly want to deal with high-level algorithms and to brag about how fast their algorithm is, without really thinking about the hardware. Hardware is hard. Besides, they can do something with hardware that is already on the market rather than envision something better.<p>TL;DR: unless someone holds the stick at both ends (software and hardware), no one will build a reduced-layer solution.
> Again, this approach today requires a vendor that can assert broad control over the whole system—from the file system to the interface, controller, and flash media.<p>Apple would be well-positioned here if they still cared about their Macs. HFS is due for a replacement anyway after 30 years. (It could be done on iOS devices too, but flash I/O performance doesn't seem to be the major bottleneck for those uses.)
Not quite sure what this article is talking about:<p><a href="https://en.wikipedia.org/wiki/List_of_file_systems#File_systems_optimized_for_flash_memory.2C_solid_state_media" rel="nofollow">https://en.wikipedia.org/wiki/List_of_file_systems#File_syst...</a><p>I personally believe log-structured file systems are a perfect match: they never rewrite a file to the same location (which provides built-in wear-leveling), and writes can be optimized by always keeping the head of the log clear for the next write.
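The append-only idea behind that wear-leveling claim can be shown in a few lines. This is a hedged toy sketch, not any real filesystem: the `LogStore` class and its method names are hypothetical, and the "log" is just an in-memory list standing in for sequential flash pages.

```python
# Toy log-structured key/value store: every write, even a rewrite of the
# same key, is appended at the head of the log. Updates therefore land on
# fresh locations instead of repeatedly erasing the same block, which is
# the built-in wear-leveling property the comment describes.

class LogStore:
    def __init__(self):
        self.log = []    # append-only log of (key, value) records
        self.index = {}  # key -> position of its latest record

    def write(self, key, value):
        pos = len(self.log)        # always write at the log head
        self.log.append((key, value))
        self.index[key] = pos      # old record becomes garbage for a later GC pass
        return pos

    def read(self, key):
        return self.log[self.index[key]][1]

store = LogStore()
a = store.write("file.txt", b"v1")
b = store.write("file.txt", b"v2")  # rewrite of the same key
assert a != b                       # ...lands at a different physical location
assert store.read("file.txt") == b"v2"
```

A real implementation would also need the cleaner the comment alludes to: a garbage collector that reclaims blocks full of superseded records so the log head stays writable.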
The memory hierarchy needs to be revised to take into account the different performance characteristics of flash memory vs. hard drives. There is no disputing that NAND flash SSDs are very different from dynamic RAM, static RAM, and hard disks.
Wouldn't it make sense to use an object-storage-style interface to SSDs? Instead of managing sectors and cylinders, the SSD would provide an interface for managing objects, much like cloud storage services such as S3.
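To make the suggestion concrete, here is a minimal sketch of what such a device-side interface might look like. Everything here is hypothetical (the `ObjectSSD` class and its methods are illustrative, not any real protocol); the point is that the host names whole objects and the device keeps flash placement and garbage collection entirely to itself.

```python
# Hypothetical object-style SSD interface, modeled loosely on S3-style
# put/get/delete/list. The host never sees LBAs; the device is free to
# place, move, and garbage-collect the underlying flash pages on its own.

class ObjectSSD:
    def __init__(self):
        self._objects = {}  # object key -> bytes; physical placement is hidden

    def put(self, key, data):
        """Whole-object write, like an S3 PUT."""
        self._objects[key] = bytes(data)

    def get(self, key):
        """Whole-object read, like an S3 GET."""
        return self._objects[key]

    def delete(self, key):
        """Explicit delete: the device knows the data is dead, no TRIM guesswork."""
        del self._objects[key]

    def list(self, prefix=""):
        """Enumerate stored objects, optionally filtered by key prefix."""
        return sorted(k for k in self._objects if k.startswith(prefix))

dev = ObjectSSD()
dev.put("logs/app.1", b"hello")
dev.put("logs/app.2", b"world")
assert dev.get("logs/app.1") == b"hello"
assert dev.list("logs/") == ["logs/app.1", "logs/app.2"]
```

One design consequence worth noting: because `delete` is explicit, the device learns immediately which data is dead, sidestepping the TRIM-style coordination that the block interface requires.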
<i>"Layering the file system translation on top of the flash translation is inefficient and impedes performance."<p>"For many years SSDs were almost exclusively built to seamlessly replace hard drives; they not only supported the same block-device interface"</i><p>The point of storage is to be able to put anything you want on it. That contract <i>is</i> the block interface, and includes the ability to change the filesystem. A file with internal structures is also a filesystem. The interfaces are fine. Change for change's sake should be avoided. (Providing a bypass, SSD-optimized interface is fine, but, ahem: "put down the crack pipes"... <a href="https://news.ycombinator.com/item?id=5541063" rel="nofollow">https://news.ycombinator.com/item?id=5541063</a> )