<i>The NTFS features we have chosen to not support in ReFS are: named streams, object IDs, short names, compression, file level encryption (EFS), user data transactions, sparse, hard-links, extended attributes, and quotas.</i><p>Of these, I'm sorry to see the demise of sparse files. This was, IMHO, the single most under-utilized feature of NTFS, and I was able to integrate support for sparse files into a number of clients' applications (I'm a low-level consultant and developer) to great effect. While the increasing size of volumes, along with the sub-par utilization of this feature, makes it an obvious victim when creating a new filesystem and looking for features to drop, sparse files can be amazing for other reasons.<p>One of the advantages of sparse files is that they natively support certain seek-related behaviors. If you create the file right, you can save yourself a lot of code and complexity in any application consuming that data.<p>The biggest advantage of sparse files, though, is speed. For instance, you can create a container file of X size filled with zero bytes, and only use as much disk space as the end application actually writes (for example, creating a virtual disk of 2TB that only takes up 100MB on disk).<p>I, for one, am sad to see this feature go. For anyone interested in this amazing feature, have a read here: <a href="http://www.flexhex.com/docs/articles/sparse-files.phtml" rel="nofollow">http://www.flexhex.com/docs/articles/sparse-files.phtml</a>
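<p>A minimal sketch of the virtual-disk trick described above, on a POSIX filesystem that supports sparse files (ext4, XFS, etc. — on NTFS you would additionally mark the file sparse via <i>FSCTL_SET_SPARSE</i>); the file name and sizes here are just illustrative:

```python
import os

# Illustrative sketch: create a large "sparse" container file. On a
# filesystem that supports sparseness, truncate() sets the logical
# length without allocating data blocks; only regions actually written
# consume real disk space.
path = "container.img"
size = 2 * 1024**4  # 2 TiB logical size

with open(path, "wb") as f:
    f.truncate(size)           # logical size is now 2 TiB, nothing allocated
    f.seek(100 * 1024**2)      # jump 100 MiB into the file
    f.write(b"\xAA" * 4096)    # only this 4 KiB region occupies disk blocks

logical = os.path.getsize(path)            # reported file size: 2 TiB
physical = os.stat(path).st_blocks * 512   # allocated bytes (POSIX only): a few KiB
```

Reads from the unwritten "holes" simply return zeros, which is what makes the seek-related tricks above work without any bookkeeping in the application.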
Some loose remarks:<p>- named streams are out => it becomes unlikely that we will see these become popular on any OS (because being incompatible with the market leader is problematic; see Mac OS X, .DS_Store). I find that a pity.<p>- I guess quotas are out because there will be something else replacing them?<p>- Can anyone explain why a modern filesystem should have a limitation on path length? For APIs I can understand it, because the standard C library assumes a bounded path length (MAX_PATH), but for the filesystem itself? I would think such a limit complicates the implementation, as every directory would need to know the length of the deepest path below it (in case one attempts to rename it). Aggregating that info upwards whenever a file is created or renamed (let alone deleted) cannot come for free, can it?
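<p>A toy sketch of the bookkeeping that last remark describes (entirely hypothetical — not how any real filesystem is implemented): each directory node caches the length of the longest path beneath it, so a rename can be checked cheaply at the renamed node, at the cost of a walk up to the root on every create or rename:

```python
# Hypothetical sketch: each node caches the longest path length in its
# subtree ("deepest"), updated by propagating upward on every change.
class DirNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = {}
        self.deepest = len(name)  # longest path length within this subtree

    def add_child(self, name):
        child = DirNode(name, parent=self)
        self.children[name] = child
        child._propagate()
        return child

    def _propagate(self):
        # Walk up to the root recomputing cached depths -- this is the
        # "cannot come for free" cost on every create/rename/delete.
        node = self
        while node is not None:
            best = len(node.name)
            for c in node.children.values():
                best = max(best, len(node.name) + 1 + c.deepest)
            node.deepest = best
            node = node.parent

    def rename(self, new_name, limit=32767):
        prefix = 0  # length of the path from the root down to our parent
        node = self.parent
        while node is not None:
            prefix += len(node.name) + 1
            node = node.parent
        # New subtree depth = old cached depth with our name swapped out.
        if prefix + (self.deepest - len(self.name)) + len(new_name) > limit:
            raise ValueError("rename would exceed path-length limit")
        self.name = new_name
        self._propagate()
```

The rename check itself is cheap, but only because every create/rename/delete already paid for the upward propagation — which is exactly the trade-off being questioned.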
Sounds very much like the "current generation" to me. ZFS has done just about everything that article covers for a while, and it also supports most of the features in this dropped list:<p>"The NTFS features we have chosen to not support in ReFS are: named streams, object IDs, short names, compression, file level encryption (EFS), user data transactions, sparse, hard-links, extended attributes, and quotas."
He closes with:
"We believe this significantly advances our state of the art for storage."<p>I don't think that's true at all. As others have mentioned, it appears they are merely matching the state of the art already achieved by ZFS.
I'm not sure I see the difference between a log-structured filesystem and what they have proposed as their robust disk update strategy, especially once you add integrity streams into the picture. Anyone with more filesystem knowledge than I have want to clarify this?
Seems very cool; the only problem is that it isn't bootable. I hope this gets the Linux folks a bit more serious about modern resilient filesystems.
Wasn't it supposed to arrive in Vista?<p>Now, seriously, if I got a dollar for every new Windows filesystem announced for the next version of Windows and canned before launch, I'd be at least five dollars richer. By the time they deliver it, IF they deliver it, BtrFS will be widely used on Linux servers. ZFS is already way more advanced than what they propose.<p>The only major change I saw was when Microsoft ditched HPFS to go with NTFS.