I like the idea. Making it backwards compatible with FAT means that, in principle, regular FAT filesystem implementations could be transparently upgraded to support big fat files (hehe).<p>However, reading the spec, it doesn't look fully backwards compatible. There are file structures which are possible to represent in FAT but impossible to represent in BigFAT. In FAT, I could have a 4GB-128kB size file called "hello.txt", and next to it an unrelated file called "hello.txt.000.BigFAT". A FAT filesystem will show both as intended, but a BigFAT implementation will show them as one file, "hello.txt". That makes this a breaking change.<p>I would kind of have hoped that they had found an unused but always-zero bit in some header which could be repurposed to mark whether a file has a continuation or not, or some other clever way of ensuring that every legal FAT32 file structure stays representable.
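To make the ambiguity concrete, here is a minimal sketch of how a naive BigFAT-style reader might collapse a directory listing by naming convention alone. The `name.NNN.BigFAT` fragment pattern is my reading of the spec; the point is that pattern matching on names cannot distinguish a real continuation fragment from an unrelated FAT file that happens to have that suffix.

```python
# Sketch: a BigFAT-style reader that merges fragments purely by naming
# convention. It cannot tell a genuine fragment apart from an unrelated
# FAT file that happens to carry the ".NNN.BigFAT" suffix.
# (Fragment naming "name.NNN.BigFAT" is my inference from the spec.)
import re

FRAGMENT_RE = re.compile(r"^(?P<base>.+)\.(?P<idx>\d{3})\.BigFAT$")

def logical_view(entries):
    """Collapse fragment entries onto their base file name."""
    files = {}
    for name in entries:
        m = FRAGMENT_RE.match(name)
        key = m.group("base") if m else name
        files.setdefault(key, []).append(name)
    return files

# Plain FAT shows two independent files; the merged view shows only one.
listing = ["hello.txt", "hello.txt.000.BigFAT"]
print(logical_view(listing))
# {'hello.txt': ['hello.txt', 'hello.txt.000.BigFAT']}
```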
>Unfortunately, exFAT has been adopted by the SD Association as the default file system for SDXC cards larger than 32 GB. In our view, this should never have happened, as it forces anyone who wants to access SDXC cards to get a license from Microsoft, basically making this a field owned by Microsoft.<p>So, this is a bit of a cultural/perception gap between FOSS developers and standards bodies. Most standards bodies have a patent policy of "as long as all the standards-essential patents are licensable for a uniform fee, we're good". Convincing patent holders to not extract royalties from their patents for the sake of easing the lives of FOSS implementers is much, much harder[0].<p>Microsoft isn't even the only SEP holder for SD, and the standard makes no attempt at being a royalty-free standard. In fact, early SD standards were NDA'd[1] and prohibited FOSS implementation <i>at all</i>.<p>[0] In fact, so hard that the EU has a conspiracy theory that Google/AOM bullied a patent holder into doing this<p>[1] Remember, SD cards were basically MMC with primitive DRM
Are the exFAT patents still a problem nowadays?<p>> exFAT was a proprietary file system until 2019, when Microsoft released the specification and allowed OIN members to use their patents.<p><a href="https://en.wikipedia.org/wiki/ExFAT#Legal_status" rel="nofollow">https://en.wikipedia.org/wiki/ExFAT#Legal_status</a>
> Why not exFAT... Microsoft owns several patents, and anyone who implements or uses exFAT technology needs Microsoft's permission, which typically also includes paying fees to Microsoft.<p>While BigFAT not being encumbered by any patents is a good thing, the camera industry has pretty much standardized on exFAT for their removable file storage format. Something I'm curious about is how a 5GB video file (quite common, and actually on the smaller side for 4K and 8K recording sessions) is written and accessed under the two file systems. BigFAT says that the file would be written in 4GB chunks; is there something similar happening with exFAT, or is the file "one chunk"? (Apologies if I have the terms wrong -- I'm not a filesystem expert.) The author laments that the exFAT format has been adopted for SDXC cards, but given who is in that group and what their use cases are, I can discount "because Microsoft strong-armed them" as a reason for them selecting it.
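For concreteness, here is the fragment arithmetic as I understand it: a FAT32 directory entry has a 32-bit size field, capping a single file at 2^32 - 1 bytes, so a 5 GB recording needs two on-disk entries under a BigFAT-style scheme, while exFAT's 64-bit size field stores it as one. (The exact per-chunk limit BigFAT uses is my assumption.)

```python
# Rough fragment count for a file stored as FAT32-sized chunks, as a
# BigFAT-style scheme appears to do. exFAT has a 64-bit size field, so
# the same file is a single directory entry there.
import math

FAT32_MAX = 2**32 - 1  # largest size a FAT32 directory entry can record

def fragments_needed(size_bytes):
    """Number of FAT32-representable chunks needed for a file."""
    return max(1, math.ceil(size_bytes / FAT32_MAX))

five_gb = 5 * 10**9
print(fragments_needed(five_gb))  # 2: one full ~4 GiB chunk plus the rest
```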
The question I have is, why Segger? When I saw this I was like "the debugger company?!?!" Clearly this wouldn't fall under their business, so it makes sense for them to open it up, but why did they do it in the first place?
I'm a bit puzzled as to how split files with a standardized naming scheme count as an "extension." It seems to me that SEGGER is simply proposing a de facto file naming convention, and offering a few free tools (including a few abstraction drivers) to encourage adoption.<p>Can somebody fill me in here: where's the value in what SEGGER is proposing, as opposed to what the entire IT community has already been doing for decades?
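The decades-old workaround the parent alludes to is essentially this sketch: split a big file into FAT-sized parts and concatenate them back. (The `.NNN.part` naming and the tiny `CHUNK` size are stand-ins for illustration; in practice the chunk limit would be FAT32's ~4 GiB cap.) The arguable value-add in standardizing it is that a driver can make the split invisible to applications, rather than leaving the join as a manual step.

```python
# Sketch of the long-standing manual workaround: split a large file
# into FAT-representable parts, then concatenate them back together.
# CHUNK is scaled down for illustration; real FAT32 allows ~4 GiB/file.
CHUNK = 64  # stand-in for FAT32's per-file size limit

def split(path):
    """Write path's contents as numbered .NNN.part files; return their names."""
    parts = []
    with open(path, "rb") as f:
        for i, data in enumerate(iter(lambda: f.read(CHUNK), b"")):
            part = f"{path}.{i:03d}.part"
            with open(part, "wb") as out:
                out.write(data)
            parts.append(part)
    return parts

def join(parts, dest):
    """Concatenate the parts back into a single file at dest."""
    with open(dest, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                out.write(f.read())
```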
Is this only compatible with FAT32, or is it also compatible with FAT12/16? It would be very cool if this supported floppy disks.<p>Regarding the format: once you convert it, does the target device need a driver to support the format? It mentions that this would allow >4GB files on TVs, but those typically run very out-of-date OSes that never get updated.<p>I think MS missed a trick by not making the boot sector also contain a simplistic driver, although it would have been a push to fit it all within 512 bytes.
Awesome concept, especially for academia ... but is there a value proposition?<p>I love seeing this, don't get me wrong. I'm just curious if there are any real-world applications for this.
Would it not be possible to create a filesystem with modern capabilities but with backwards compatibility with FAT? Why can't we just have "legacy" commands built into the ReFS filesystem that process any FAT filesystem access?<p>I'm very ignorant of this, but I'd love some insight from someone vastly more knowledgeable than me.
Is the big file handling seamless? If not, why not just split files and use regular FAT32?<p>And what about converting FAT32 to a Linux partition? Or buying a new disk and moving the data over to that.<p>Edit: this is a genuine question. The downvote implies otherwise, but it honestly is.