IIRC Linux itself has only been reporting asynchronous writeback errors via fsync for a few short years (since 4.13, I believe), meaning that before then, basically any database that wasn't using O_DIRECT would miss I/O errors under memory pressure (or from out-of-process writeback in general, e.g. root invoking sync). I looked into this stuff before postgres's fsyncgate, before "how are I/O errors actually handled in Linux, anyhow?" got attention, and walked away with the impression that anything other than O_DIRECT is best-effort, probably-works-most-of-the-time on a good day, and that O_DIRECT's semantics are an unknowable, opaque mixture of what drivers and hardware do and expect. There were papers examining error handling within Linux filesystems at the time, and they found a large number of issues in pretty much all of them.

As far as I know, all efforts in the area of durable I/O are still focused on synchronizing I/O (fsync/fdatasync and equivalents), while many databases don't actually care about that very much and would rather have barriers instead. The kicker, of course, is that hardware (when honest) actually works with barriers rather than block synchronization, and journaling filesystems themselves likewise use barriers, not synchronization, to implement journaling. It struck me as a distinctly classic API-to-real-world mismatch.