I've been given coursework at my university to evaluate and start using a distributed file system for storing large amounts of crystal diffraction images. It needs to keep multiple copies of the files on different servers in case one goes down, and it needs to scale, since the dataset will only keep growing. I've looked at things like LOCKSS [1] and IPFS [2], but LOCKSS seems to limit itself to storing articles, and IPFS doesn't provide data reliability if one of the nodes goes down. Has anyone encountered a similar task, and what did you use?

[1] https://www.lockss.org/
[2] https://ipfs.tech/
IPFS does provide data reliability if you use a pinning service, a private cluster, or a cooperative cluster. It seems to be difficult to communicate how IPFS works in this regard, and there are a lot of misunderstandings about it. Some people want IPFS to be an infinite free hard drive in the sky with automatic replication and persistence until the end of time (it is not). Then there are the people who worry that "OMG, someone can just put evil content onto my machine and I have to serve it!" (it does not work that way).

IPFS makes it very easy to replicate content, but you don't have to replicate anything you don't want to. Resources cost money, so you either ask someone to do it for free, and you get what you get as far as reliability, or you pay someone and you get better reliability for as long as you keep paying.
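For the OP's use case, explicit pinning on a few nodes you control is the usual way to get replication. As a minimal sketch (assuming Kubo nodes with their HTTP RPC port reachable; the hostnames and CID below are placeholders), pinning the same CID on every node gives each one its own full copy:

    // Sketch: replicate content on IPFS by pinning the same CID on several
    // nodes you control, via Kubo's HTTP RPC API (POST /api/v0/pin/add).
    package main

    import (
        "fmt"
        "net/http"
        "net/url"
    )

    func main() {
        nodes := []string{
            "http://node1.example.edu:5001", // placeholder node addresses
            "http://node2.example.edu:5001",
        }
        cid := "QmExampleCid" // placeholder CID of a diffraction image

        for _, node := range nodes {
            // Kubo's RPC API expects POST; the CID goes in the "arg" parameter.
            endpoint := fmt.Sprintf("%s/api/v0/pin/add?arg=%s", node, url.QueryEscape(cid))
            resp, err := http.Post(endpoint, "", nil)
            if err != nil {
                fmt.Println(node, "error:", err)
                continue
            }
            resp.Body.Close()
            fmt.Println(node, "->", resp.Status)
        }
    }

If you'd rather not script this yourself, ipfs-cluster exists to automate exactly this kind of pin orchestration across a set of nodes, including a configurable replication factor.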
This is private data, right? Maybe a private BitTorrent tracker with a few nodes that "grab everything" to ensure persistence. I've never done it myself, but it might be a direction worth researching...
How much data do you have now?

How fast is it increasing?

What is your budget for hardware?

What is your budget for software?

What is your budget for development labor?

What is your budget for maintenance?

I mean, the simplest thing that might work is talking to your university IT department...

...or calling AWS sales or another commercial organization specializing in these things.

The second most complicated thing you can do is to roll your own.

The most complicated thing you can do is to have someone else do it.

Good luck.
This is a simple task with NATS JetStream object storage: https://docs.nats.io/nats-concepts/jetstream/obj_store/obj_walkthrough. Just provision a JetStream cluster and an object store bucket. If you want to span the cluster over multiple clouds with a supercluster, that's an option as well.
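As a rough sketch of what that looks like with the nats.go client (the server URL, bucket name, and file name are placeholders; Replicas: 3 asks the cluster to keep three copies of each object):

    // Sketch: a replicated object store bucket on a JetStream cluster.
    package main

    import (
        "log"

        "github.com/nats-io/nats.go"
    )

    func main() {
        nc, err := nats.Connect("nats://cluster.example.edu:4222") // placeholder URL
        if err != nil {
            log.Fatal(err)
        }
        defer nc.Close()

        js, err := nc.JetStream()
        if err != nil {
            log.Fatal(err)
        }

        // Create (or look up) a bucket replicated 3 ways across the cluster.
        store, err := js.CreateObjectStore(&nats.ObjectStoreConfig{
            Bucket:   "diffraction-images", // placeholder bucket name
            Replicas: 3,
        })
        if err != nil {
            log.Fatal(err)
        }

        // Stream a large image file into the bucket.
        if _, err := store.PutFile("scan-0001.cbf"); err != nil {
            log.Fatal(err)
        }
    }

The object store chunks large files over a replicated stream, so losing a single node doesn't lose data as long as a quorum of replicas survives.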
Sounds like you'd want to set up a private, multi-org cloud storage system.

Something like MinIO (https://min.io/) or similar. There are a dozen or so open-source and commercial S3-like object storage systems out there.

I have a friend who does this kind of mission-critical infrastructure for research universities.

DM me if you'd like.
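Whichever S3-compatible system you pick, the client side looks roughly the same. A minimal sketch with the minio-go client (the endpoint, credentials, bucket, and file path are placeholders; with MinIO specifically, redundancy comes from server-side erasure coding and replication, so the client just uploads):

    // Sketch: upload one image to a self-hosted S3-compatible object store.
    package main

    import (
        "context"
        "log"

        "github.com/minio/minio-go/v7"
        "github.com/minio/minio-go/v7/pkg/credentials"
    )

    func main() {
        client, err := minio.New("minio.example.edu:9000", &minio.Options{ // placeholder endpoint
            Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""), // placeholder creds
            Secure: true,
        })
        if err != nil {
            log.Fatal(err)
        }

        // FPutObject streams large files to the bucket in parts.
        info, err := client.FPutObject(context.Background(), "diffraction-images",
            "scan-0001.cbf", "/data/scan-0001.cbf", minio.PutObjectOptions{})
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("uploaded %s (%d bytes)", info.Key, info.Size)
    }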
If you're replicating one primary file system to many secondary systems, MARS might be helpful [1]. It was developed by 1&1, who hosts my personal website, along with petabytes of other people's stuff.

[1] https://github.com/schoebel/mars
I was thinking about Syncthing (https://github.com/syncthing/syncthing), but it's a file synchronization tool, meaning every node would have a full copy, and it would propagate deletes from one node to another.