I'm accustomed to operating in a fixed location with a large amount of data on local drives, but I increasingly find this "data estate" an obstacle to a more nomadic way of operating.<p>Carrying everything around with you isn't practical, even setting aside the risk of storage devices being lost, so cloud storage seems essential. At the same time, I'm not sure any cloud solution really replicates the important properties of local storage.
Then there's the need for offline access to subsets of one's data, for example while on a flight - but you also need a way to synchronise any changes made to that data once you have connectivity again. I also consider client-side encryption essential.<p>There are a million different approaches I've considered: network filesystems combined with encrypting FUSE layers like gocryptfs; network block devices, perhaps with some kind of local caching layer and a normal filesystem mounted on top; or filesystems (or block devices) built on log-structured databases over object storage.<p>The last category has options like S3QL, which I'm currently investigating. Interestingly, the author of S3QL has also investigated using a network block device with ZFS: http://www.rath.org/zfs-on-nbd-my-verdict.html<p>Ultimately I've come to the conclusion that there's no ideal one-size-fits-all solution out there, but using conventional filesystems on a network block device is probably a bad idea: these filesystems are designed under the assumption that block devices are reliable and don't randomly go away due to intermittent network connectivity. NFS is probably a similar story.<p>My current compromise is to recognise that only a very small amount of my non-Git-repository data (<1GB) is truly essential ("must never be lost") or changes frequently, whereas most large datasets don't change after creation (e.g. photos, videos, music), so S3QL seems like a plausible fit there. In other words, I use different solutions for different classes of data and manually manage those classes.<p>The biggest feature gap is probably in offline access and locking/concurrency control. It would be nice to have a filesystem where I can run a command to guarantee a subdirectory will be available offline. But it would need to be read-write access, which raises the question of what happens if the write cache is lost (e.g. 
due to a laptop being lost) while offline, or halfway through being flushed. For now I've concluded that general solutions to this problem are too hard, and it's probably just going to have to be managed manually with rsync, guided by human judgement. There are Dropbox-like solutions such as Nextcloud, but these ultimately use a naive file-based synchronisation mechanism, and it's reportedly not unusual for them to get things wrong; it seems obvious they would fall apart completely if you tried to synchronise something like a Firefox profile directory.<p>What are others using to solve these kinds of problems on the move?
I've been trying to do all that without cloud synchronization.<p>Currently I have only three devices, so I get away with 'rsync'ing directories when at home. Firefox syncs over my Mozilla account, so that's fire-and-forget.<p>It's far from perfect or hassle-free, but I don't want to pay for yet another subscription if I can do without it.
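For what it's worth, with only a few machines a crude two-way merge can be approximated by running rsync in both directions with -u, which skips files that are newer on the receiving side. This is a sketch under the assumptions that deletions are handled manually and the machines' clocks roughly agree; local directories stand in for the two machines, and all names are hypothetical.

```shell
# Hypothetical sketch of a crude two-way "sync" between two machines
# using plain rsync. -a preserves attributes (including mtimes, which
# -u relies on); -u skips files that are newer on the receiver.
# Deletions are NOT propagated - that part stays manual.
A=$(mktemp -d)   # stands in for machine A
B=$(mktemp -d)   # stands in for machine B
echo "only on A" > "$A/a.txt"
echo "only on B" > "$B/b.txt"

rsync -au "$A/" "$B/"   # push A's newer/unique files to B
rsync -au "$B/" "$A/"   # pull B's newer/unique files back to A
```

After both passes each side has both files; a file edited on both machines since the last sync silently resolves to whichever copy is newer, which is exactly the kind of naive behaviour that makes this unsuitable for things like profile directories.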