TechEcho

Ask HN: Personal data storage for digital nomads?

4 points by hlandau over 1 year ago
I'm accustomed to operating in a fixed location with a large amount of data on local drives, but I increasingly find this "data estate" an obstacle to a more nomadic way of operating.

Carrying everything around with you isn't practical, even without the risk of storage devices being lost, so cloud storage seems essential. At the same time, I'm not sure there's really any cloud solution that can replicate the important properties and characteristics of local storage. Then there's the need for offline access to subsets of one's data, for example while on a flight - but you also need a way to synchronise any changes made to that data once you have connectivity again. I also consider client-side encryption essential.

There's a million different notions I've considered, including network filesystems combined with encrypting FUSE layers like gocryptfs, network block devices perhaps with some kind of local caching layer and a normal filesystem mounted on it, etc., or filesystems (or block devices) built on log-structured databases on object storage, etc.

The latter has options like S3QL which I'm currently investigating. Interestingly, the author of S3QL has also investigated using a network block device with ZFS: http://www.rath.org/zfs-on-nbd-my-verdict.html

Ultimately I've come to the conclusion there's no real ideal one-size-fits-all solution out there, but using conventional filesystems on a network block device is probably a bad idea, as these filesystems are designed under the assumption that block devices are reliable and don't randomly go away due to intermittent network connectivity. NFS is probably a similar story.

My current compromise is to recognise that only a very small amount of my non-Git-repository data (<1GB) is truly essential ("must never be lost") or changes frequently, whereas most large datasets don't change after creation (e.g. photos, videos, music), so S3QL seems like a plausible solution for those. In other words, using different solutions for different classes of data and manually managing those different 'classes' of data.

The biggest feature gap is probably in offline access and locking/concurrency control. It would be nice to have some filesystem where I can run a command to guarantee a subdirectory will be available offline. But it would need to be read-write access, which raises questions of what happens if the write cache is lost (e.g. due to a laptop being lost) while offline or halfway through being flushed. But for now I've come to the conclusion that general solutions to this problem are too hard, and it's probably just going to have to be managed manually using rsync on a manual, human-intelligence-guided basis. There are Dropbox-like solutions like Nextcloud, but ultimately these have a naive file-based synchronisation mechanism and it's reportedly not that unusual for them to get things wrong; it seems obvious these would fall apart totally if you tried to synchronise a Firefox profile directory or something.

What are others using to solve these kinds of problems on the move?
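The "different classes of data" compromise above can be sketched in a few lines of Python. This is only an illustration of the classification step, not anything from S3QL or the poster's actual setup; the helper name and the per-file size threshold are assumptions made here for the example:

```python
import os

# Illustrative threshold: treat files at or under 10 MiB as candidates for
# the frequently-synced "essential" class; everything larger goes to the
# write-once "bulk" class (photos, videos, music). The cutoff is arbitrary.
ESSENTIAL_MAX_BYTES = 10 * 1024 * 1024

def classify_tree(root):
    """Walk `root` and partition file paths into two sync classes by size."""
    essential, bulk = [], []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            (essential if size <= ESSENTIAL_MAX_BYTES else bulk).append(path)
    return essential, bulk
```

The `essential` list would then feed whatever must-never-be-lost, client-side-encrypted sync you trust, while `bulk` could go to an S3QL-style object-storage filesystem. In practice the post's point stands: the classification is really a human judgment call, and size is only a crude proxy for it.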

1 comment

Hackbraten over 1 year ago
I've been trying to do all that without cloud synchronization.

Currently, I have only three devices, so I get away with `rsync`ing directories when at home. Firefox syncs over my Mozilla account, so that's fire-and-forget.

It's far from perfect or hassle-free, but I don't want to pay for yet another subscription if I can do without it.
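The rsync-between-devices workflow described in this comment can be approximated in Python for readers without rsync to hand. This is a hedged sketch only: a one-way mirror with no deletions and nothing like rsync's delta transfer, roughly in the spirit of `rsync -a src/ dst/`:

```python
import os
import shutil

def mirror(src, dst):
    """One-way sync: copy files from src into dst when missing or newer.

    No deletions are propagated, so this resembles `rsync -a` without
    `--delete`. Comparison is by modification time only.
    """
    for dirpath, _dirnames, filenames in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        target_dir = dst if rel == "." else os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            s = os.path.join(dirpath, name)
            d = os.path.join(target_dir, name)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 preserves mtime, like rsync -a
```

Because `copy2` preserves the source mtime, a second run over unchanged data is a no-op. As the original post notes, though, mtime-based file sync is exactly the "naive file-based synchronisation" that breaks down on things like live Firefox profile directories.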