
Can Applications Recover from Fsync Failures?

59 points by simonz05 almost 3 years ago

8 comments

formerly_proven almost 3 years ago
IIRC Linux itself has only been reporting asynchronous writeback errors via fsync for a few short years, meaning that before that, basically any database that wasn't using O_DIRECT would miss I/O errors under memory pressure (or from out-of-process writebacks in general, e.g. root invoking sync). I looked into this stuff before postgres's fsyncgate, before "how are I/O errors actually handled in Linux, anyhow?" got attention, and walked away with the notion that anything other than O_DIRECT is best-effort-probably-works-most-of-the-time on a good day, and that O_DIRECT's semantics are basically an unknowable, opaque mixture of what drivers and hardware do and expect. There were some papers looking at error handling within Linux file systems at the time, and they found a large number of issues in pretty much all of them.

As far as I know, all efforts in the area of durable I/O are still focused on the notion of synchronizing I/O (fsync/fdatasync and equivalents), while many databases don't actually care about that too much and would rather have barriers instead. The kicker, of course, is that hardware (when honest) actually uses barriers and not block synchronization, and journaling filesystems of course also use barriers, not synchronization, to implement their journals. It struck me as a distinctly classic API-to-real-world mismatch.
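The comment's distinction can be made concrete: with buffered I/O a write() succeeds against the page cache and any device error surfaces (at best) later at fsync, while O_DIRECT surfaces it on the write itself. A minimal sketch, not from the comment — the 4096-byte block size, the fallback to buffered I/O, and the function name are all assumptions:

```python
import mmap
import os

def write_direct(path: str, payload: bytes) -> int:
    """Write one block bypassing the page cache, so that I/O errors
    surface on write() itself rather than at some later writeback.
    O_DIRECT needs a block-aligned buffer and length (4096 assumed
    here), and not every filesystem supports it, so we fall back to
    an ordinary buffered write."""
    BLOCK = 4096
    buf = mmap.mmap(-1, BLOCK)          # anonymous mmap is page-aligned
    buf[: len(payload)] = payload       # remainder stays zero-padded
    flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
    try:
        fd = os.open(path, flags | os.O_DIRECT)
    except (OSError, AttributeError):   # fs without O_DIRECT, or non-Linux
        fd = os.open(path, flags)
    try:
        n = os.write(fd, buf)
        os.fsync(fd)                    # needed on the buffered fallback path
        return n
    finally:
        os.close(fd)
```

The zero-padding is the usual price of O_DIRECT: the application, not the kernel, becomes responsible for alignment and block-sized I/O.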
eis almost 3 years ago
After decades of issues with the storage layer, and even some of the most popular programs written by top-notch developers having bugs due to the problematic nature of the APIs and filesystems involved, I wish a completely new storage API would emerge: something that exposes an asynchronous API (and a synchronous one built upon it) with ACID semantics. Filesystems are nothing more than specialized databases, but they don't expose the necessary interface to use them as such.

We need an API that is dead simple and hard to misuse, with clearly defined semantics and guarantees, but that still lets seasoned developers exploit the hardware to its fullest with additional work. Hope dies last, I guess :)
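The "filesystems are specialized databases" observation cuts both ways: an embedded database such as SQLite already provides the ACID, hard-to-misuse interface the comment wishes for, today. A small illustrative sketch (the file layout, key names, and helper function are made up, not from the comment):

```python
import sqlite3

def put_get(path: str, key: str, value: bytes) -> bytes:
    """Store and read back one blob with ACID guarantees, courtesy of
    SQLite's write-ahead log, instead of hand-rolled fsync choreography
    on flat files."""
    db = sqlite3.connect(path)
    try:
        db.execute("PRAGMA journal_mode=WAL")   # crash-safe write-ahead log
        db.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v BLOB)")
        with db:                                # one atomic transaction
            db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))
        return db.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()[0]
    finally:
        db.close()
```

SQLite of course still sits on top of fsync internally — it is precisely one of the applications the paper under discussion analyzes — but it concentrates the failure handling in one heavily tested place instead of every application.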
CGamesPlay almost 3 years ago
> all three file systems mark pages clean after fsync fails, rendering techniques such as application-level retry ineffective. However, the content in said clean pages varies depending on the file system; ext4 and XFS contain the latest copy in memory while Btrfs reverts to the previous consistent state. Failure reporting is varied across file systems; for example, ext4 data mode does not report an fsync failure immediately in some cases, instead (oddly) failing the subsequent call. Failed updates to some structures (e.g., journal blocks) during fsync reliably lead to file-system unavailability. And finally, other potentially useful behaviors are missing; for example, none of the file systems alert the user to run a file-system checker after the failure.

Surely there are motivations behind these behaviors, and it's not a bug that was implemented in all three filesystems, right?
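To illustrate the quoted finding: because ext4 and XFS mark the dirty pages clean after a failed fsync, a retry loop like the first function below can report success without the data ever reaching disk. A hedged sketch (both function names are invented for illustration; the failure behavior described in the comments is what makes the retry unsafe):

```python
import os
import sys

def fsync_with_retry(fd: int, attempts: int = 3) -> None:
    """ANTI-PATTERN per the paper: after the first failed fsync, the
    kernel has already marked the dirty pages clean, so a later call
    can return success while the data never hit the disk."""
    for i in range(attempts):
        try:
            os.fsync(fd)
            return
        except OSError:
            if i == attempts - 1:
                raise

def fsync_or_die(fd: int) -> None:
    """Safer stance: treat the first fsync failure as fatal and let
    crash recovery rebuild state from a WAL or the last checkpoint,
    rather than trusting the page cache after an error."""
    try:
        os.fsync(fd)
    except OSError as e:
        sys.stderr.write(f"fsync failed, aborting: {e}\n")
        raise SystemExit(1)
```

This is essentially the conclusion PostgreSQL reached after fsyncgate: PANIC on the first fsync failure instead of retrying.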
chrsig almost 3 years ago
On macOS, most likely not [0].

From the macOS fsync manpage:

> fsync() causes all modified data and attributes of fildes to be moved to a permanent storage device. This normally results in all in-core modified copies of buffers for the associated file to be written to a disk.

> Note that while fsync() will flush all data from the host to the drive (i.e. the "permanent storage device"), the drive itself may not physically write the data to the platters for quite some time and it may be written in an out-of-order sequence.

> Specifically, if the drive loses power or the OS crashes, the application may find that only some or none of their data was written. The disk drive may also re-order the data so that later writes may be present, while earlier writes are not.

> This is not a theoretical edge case. This scenario is easily reproduced with real world workloads and drive power failures.

> For applications that require tighter guarantees about the integrity of their data, Mac OS X provides the F_FULLFSYNC fcntl. The F_FULLFSYNC fcntl asks the drive to flush all buffered data to permanent storage. Applications, such as databases, that require a strict ordering of writes should use F_FULLFSYNC to ensure that their data is written in the order they expect. Please see fcntl(2) for more detail.

[0] https://twitter.com/marcan42/status/1494213855387734019
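In Python this distinction shows up directly: fcntl.F_FULLFSYNC is only exposed on macOS, so a portable helper has to feature-test for it. A minimal sketch (the fallback choice and helper name are assumptions, not from the manpage):

```python
import fcntl
import os

def full_fsync(fd: int) -> None:
    """Flush fd as far toward stable storage as the platform allows.
    On macOS, fsync() only pushes data to the drive, which may still
    hold it in a volatile write cache; F_FULLFSYNC asks the drive
    itself to flush. Elsewhere, fall back to plain fsync()."""
    if hasattr(fcntl, "F_FULLFSYNC"):       # exposed only on macOS
        fcntl.fcntl(fd, fcntl.F_FULLFSYNC)  # drive-level cache flush
    else:
        os.fsync(fd)                        # Linux fsync already issues a device flush
```

SQLite takes roughly this approach on Darwin, and the linked marcan thread is about how much slower the honest F_FULLFSYNC path is than fsync on Apple hardware.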
xyzzy_plugh almost 3 years ago
I said this elsewhere, but in isolation there will always be failure scenarios where recovery is impossible. There are plenty of verification strategies to detect failures, and combined with redundancy you can reduce the probability of application failure in the face of fsync failures or other similar failures. But you can never eliminate failures: if your storage gives up the ghost, it's game over.

Distributed systems are the closest we've gotten to resilient, durable storage: redundancy, external verification, quorum. Sometimes the distributed system lives in a single box on your desk.
simonz05 almost 3 years ago
The paper analyzes how file systems, and the applications PostgreSQL, LMDB, LevelDB, SQLite, and Redis, react to fsync failures. It shows that although applications use many failure-handling strategies, none are sufficient: fsync failures can cause catastrophic outcomes such as data loss and corruption.
iforgotpassword almost 3 years ago
> Our findings show that although applications use many failure-handling strategies, none are sufficient: fsync failures can cause catastrophic outcomes such as data loss and corruption.

That makes it seem like an immediate abort might be the best action in most cases? Handling it wrong and then chugging along might amplify any corruption that has happened.

It obviously depends on the application and use case, but I'd like to think projects like pgsql put a lot of effort into getting this right after fsyncgate. I've read quite a bit about it since that incident, but ultimately decided I'm too stupid to get it right, and I've gone the "log error and bail out" route ever since.
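The "log error and bail out" route pairs naturally with the classic write-temp-then-rename pattern, so that a failed update leaves the old file intact rather than a half-written new one. A rough sketch of that combination (paths, helper name, and the exit behavior are illustrative assumptions):

```python
import os
import sys

def atomic_replace(path: str, data: bytes) -> None:
    """Durably replace path with data, or bail out: write a temp file,
    fsync it, rename it over the target, then fsync the directory so
    the rename itself is persisted. Any failure aborts rather than
    limping along with possibly corrupt state."""
    tmp = path + ".tmp"
    try:
        fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        try:
            os.write(fd, data)
            os.fsync(fd)                    # data down before the rename
        finally:
            os.close(fd)
        os.replace(tmp, path)               # atomic on POSIX filesystems
        dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
        try:
            os.fsync(dfd)                   # persist the directory entry
        finally:
            os.close(dfd)
    except OSError as e:
        sys.stderr.write(f"durable write failed, bailing out: {e}\n")
        raise SystemExit(1)
```

Until the rename lands, readers still see the previous complete version of the file, which is exactly the property a "bail out and restart" strategy relies on.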
hyc_symas almost 3 years ago
The description of LMDB's behavior and the subsequent analysis are flat wrong: https://twitter.com/hyc_symas/status/1558909442737012736

To assume that any newbie has hit upon a potential failure condition that we didn't already anticipate and account for in LMDB is, frankly, laughable.