
35% Faster Than the Filesystem (2017)

46 points by lopkeny12ko, 7 months ago

9 comments

dale_glass, 7 months ago
Out of curiosity, would cutting the filesystem out of the equation entirely improve things further still? Just put the database on /dev/sda2 or an LV? Also, how much overhead does LVM have?
dehrmann, 7 months ago
Sounds like we should all just be doing sqlite3 /dev/sda2

Filesystems and databases solve similar problems, so putting one on top of the other is a bit redundant.
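As a rough illustration of that overlap (this is a minimal sketch, not the article's kvtest benchmark; the blob size, blob count, and use of Python's built-in sqlite3 module are assumptions), the same small blobs can be stored either as one file each or as rows in a single SQLite database, and read back both ways:

    # Store the same small blobs two ways: one file per blob on disk, and one row
    # per blob in a single SQLite database, then time reading everything back.
    import os
    import sqlite3
    import tempfile
    import time

    N = 1000                    # number of blobs (arbitrary)
    BLOB = os.urandom(10_000)   # ~10 KB per blob (arbitrary choice for this sketch)

    with tempfile.TemporaryDirectory() as tmp:
        # Variant 1: one file per blob on the filesystem.
        for i in range(N):
            with open(os.path.join(tmp, f"{i}.bin"), "wb") as f:
                f.write(BLOB)

        # Variant 2: one row per blob in a single SQLite file.
        db = sqlite3.connect(os.path.join(tmp, "blobs.db"))
        db.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v BLOB)")
        db.executemany("INSERT INTO kv VALUES (?, ?)", ((i, BLOB) for i in range(N)))
        db.commit()

        # Read back via the filesystem: one open()/read()/close() per blob.
        t0 = time.perf_counter()
        for i in range(N):
            with open(os.path.join(tmp, f"{i}.bin"), "rb") as f:
                f.read()
        t_fs = time.perf_counter() - t0

        # Read back via SQLite: one open database, many row lookups.
        t0 = time.perf_counter()
        for i in range(N):
            db.execute("SELECT v FROM kv WHERE k = ?", (i,)).fetchone()
        t_db = time.perf_counter() - t0

        db.close()
        print(f"filesystem: {t_fs:.3f}s  sqlite: {t_db:.3f}s")

Actual numbers will vary with the OS page cache, the filesystem, and blob size; the article attributes much of the gap to the per-file open()/close() overhead that the database path avoids.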
dang, 7 months ago
Related. Others?

SQLite: 35% Faster Than the Filesystem - https://news.ycombinator.com/item?id=41085376 - July 2024 (193 comments)
SQLite is 35% Faster Than The Filesystem (2017) - https://news.ycombinator.com/item?id=27897427 - July 2021 (127 comments)
35% Faster Than The Filesystem (2017) - https://news.ycombinator.com/item?id=20729930 - Aug 2019 (164 comments)
SQLite small blob storage: 35% Faster Than the Filesystem - https://news.ycombinator.com/item?id=14550060 - June 2017 (202 comments)

Since the July 2024 thread had a lot of attention, the current repost counts as a dupe. Reposts are ok after a year or so; this is in the FAQ: https://news.ycombinator.com/newsfaq.html
sgc, 7 months ago
Is anybody depending on this for mission-critical data? Can I throw 1 TB of images, PDFs, and miscellanea in there, delete the originals, and keep humming along in practical perpetuity? I would presumably use Litestream for backups on top of other, more general backup solutions, and use a few attached databases instead of one monolithic database for a bit more flexibility / safety?
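For the "few attached databases" part, SQLite's ATTACH DATABASE does let one connection span several database files. The sketch below is only an illustration of that idea; the file names and the route-by-extension rule are made up, not something from the thread or the article:

    import sqlite3

    conn = sqlite3.connect("images.db")                 # main database
    conn.execute("ATTACH DATABASE 'pdfs.db' AS pdfs")   # additional database files
    conn.execute("ATTACH DATABASE 'misc.db' AS misc")

    for schema in ("main", "pdfs", "misc"):
        conn.execute(
            f"CREATE TABLE IF NOT EXISTS {schema}.blobs (name TEXT PRIMARY KEY, data BLOB)"
        )

    def store(name: str, data: bytes) -> None:
        # Route by extension so no single database file grows without bound and
        # each one can be backed up or integrity-checked on its own.
        if name.lower().endswith((".jpg", ".png")):
            schema = "main"
        elif name.lower().endswith(".pdf"):
            schema = "pdfs"
        else:
            schema = "misc"
        conn.execute(f"INSERT OR REPLACE INTO {schema}.blobs VALUES (?, ?)", (name, data))
        conn.commit()

    store("photo.jpg", b"...image bytes...")   # placeholder payloads
    store("paper.pdf", b"...pdf bytes...")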
James_K, 7 months ago
I feel like some kind of compression is the way to go if you are interested in fast reads/writes. Depending on the algorithm used, you could probably decompress the data faster than it can be read off the disk. In an archive file, you would also have the same benefit of fewer read/write calls.
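A small sketch of that trade-off: store the blobs compressed and pay CPU on read instead of extra disk I/O. The zlib level and the choice to keep the blobs in SQLite are assumptions for the example, not part of the comment or the article:

    import sqlite3
    import zlib

    db = sqlite3.connect("compressed_blobs.db")
    db.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v BLOB)")

    def put(key: str, data: bytes) -> None:
        # Pay CPU once at write time; fewer bytes hit the disk.
        db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)",
                   (key, zlib.compress(data, level=6)))
        db.commit()

    def get(key: str) -> bytes:
        # Read fewer bytes, then decompress -- often cheaper than the extra I/O
        # would have been, as the comment above suggests.
        row = db.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        return zlib.decompress(row[0]) if row else b""

    put("notes", b"highly repetitive text " * 1000)
    print(len(get("notes")))   # original size restored on read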
LtWorf, 7 months ago
The article is comparing against reading the data from individual files. If you keep them in a single file and seek, the 35% no longer holds.

Read the post :)
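The "single file plus seek" layout might look like the sketch below. The fixed record size is an assumption to keep the offset math trivial; a real pack file would need an index of offsets and lengths:

    import os

    RECORD_SIZE = 10_000   # bytes per blob; fixed size is an assumption of this sketch

    def write_pack(path: str, blobs: list[bytes]) -> None:
        # Concatenate fixed-size blobs into one pack file.
        with open(path, "wb") as f:
            for blob in blobs:
                assert len(blob) == RECORD_SIZE
                f.write(blob)

    def read_many(path: str, indexes: list[int]) -> list[bytes]:
        # One open() for the whole pack; each lookup is a seek + read, so the
        # per-blob open/close syscalls disappear, much as they do inside SQLite.
        out = []
        with open(path, "rb") as f:
            for i in indexes:
                f.seek(i * RECORD_SIZE)
                out.append(f.read(RECORD_SIZE))
        return out

    write_pack("pack.bin", [os.urandom(RECORD_SIZE) for _ in range(100)])
    print(len(read_many("pack.bin", [3, 42, 99])))   # -> 3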
bubblesnort, 7 months ago
ITT: people who never installed an RDBMS that used a storage device instead of a filesystem, such as Db2.
unwind, 7 months ago
Meta: (2017), it seems?
kardos, 7 months ago
If the speedup is due to fewer open/close syscalls and space savings from avoiding block padding, tar would achieve the same, yes?
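A sketch of that tar variant, using Python's tarfile to keep many small payloads in one uncompressed archive and read them back through a single archive handle (the member names and payloads are placeholders):

    import io
    import tarfile

    # Build a small uncompressed archive in memory; names and payloads are placeholders.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, payload in [("a.txt", b"alpha"), ("b.txt", b"beta")]:
            info = tarfile.TarInfo(name)
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))

    # Read members back through a single archive handle instead of one
    # open()/close() pair per small file.
    buf.seek(0)
    with tarfile.open(fileobj=buf, mode="r") as tar:
        print(tar.extractfile("a.txt").read())   # b'alpha'

One caveat: the tar format still pads each member to 512-byte records, so it reduces padding waste relative to typical 4 KB filesystem blocks rather than eliminating it.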