What I found building multiplayer editors at scale is that it's very easy to overcomplicate this very quickly. For example, once you get into pub/sub territory, you have a very complex infrastructure to manage, and if you're a smaller team this can slow your product development down a lot.

Here's what I found to work (rough TypeScript sketches of each piece at the end):

Keep the data you want multiplayer to operate on atomic. Don't split it into multiple parallel data blobs that you sometimes want to keep in sync (e.g. if you're building a multiplayer drawing app with commenting support, keep comments inline with the drawings; don't add a separate data store). This does increase the size of the blob you have to send to users, but it dramatically decreases complexity, especially once you inevitably want versioning support.

Start with a simple protocol for updates. This won't be possible for every type of product, but surprisingly often you can do just fine with a JSON patching protocol where each operation patches properties on one giant object, which is the atomic data you operate on. There are exceptions such as text, where something like CRDTs will help you, but I'd resist the temptation to make your entire data structure a CRDT: it's theoretically great, but it comes with additional complexity and performance cost in practice.

You will inevitably need to get all clients to agree on the order in which operations are applied. CRDTs solve this perfectly, but again at a high cost. You might actually have an easier time letting a central server increment a number and making sure all clients re-apply any of their updates that didn't get assigned the number they expected from the server. Your mileage may vary here.

On that note, just going for a central server instead of trying to go fully distributed is probably the most maintainable way for you to work. It makes it easier to add on things like permissions, and honestly most products end up with a central authority anyway. If you're doing something that is actually local-first, then ignore me.

I found it very useful to keep the large JSON blob next to a "transaction log", i.e. a list of all operations in the order the server received them (again, I'm assuming a central authority here). Save lines to this log immediately, so that if the server crashes you can recover most of the data. This also lets you avoid rebuilding the large JSON blob on the server too often (clients will need to handle JSON blob + pending updates list on connect, but that follows naturally, since other clients may be sending updates while they connect).

The trickiest part is choosing a simple server-side infrastructure. Honestly, if you're not a big company, a single fat server is going to get you very far for a long time. I've asked a lot of people about this and heard many cloud-scale alternatives, but they have downsides I personally don't like from a product experience perspective (harder to implement features, latency/throughput issues, possibility of data loss, etc.). Durable Objects from Cloudflare do give you the best of both worlds: you get perfect sharding on a per-object (project / whatever unit your users work on) basis.

Anyway, that's my braindump on the subject. The TLDR is: keep it as simple as you can. There are a lot of ways to overcomplicate this. And of course some may claim I'm the one overcomplicating things, but I'd love to hear more alternatives that work well at startup scale.
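To make the "atomic data" point concrete, here's a hypothetical document shape for the drawing-app example. Every name here is made up; the point is just that comments live inside the same blob as the shapes they annotate:

```ts
// Hypothetical atomic document for a drawing app with comments.
// Everything multiplayer touches lives in this one blob.
type Comment = { id: string; author: string; text: string };

type Shape = {
  id: string;
  kind: "rect" | "ellipse" | "path";
  x: number;
  y: number;
  comments: Comment[]; // inline, not a separate store you have to keep in sync
};

type Doc = {
  shapes: Record<string, Shape>;
};
```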
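A minimal version of the JSON patching idea might look like this: a homegrown set/delete-at-path scheme (not the RFC 6902 JSON Patch format, though that works too):

```ts
// Each update patches one property at a path inside the atomic doc.
type Op =
  | { type: "set"; path: string[]; value: unknown }
  | { type: "del"; path: string[] };

function applyOp(doc: Doc, op: Op): void {
  // walk to the parent object, creating intermediate objects as needed
  const parent = op.path
    .slice(0, -1)
    .reduce((obj: any, key) => (obj[key] ??= {}), doc as any);
  const last = op.path[op.path.length - 1];
  if (op.type === "set") parent[last] = op.value;
  else delete parent[last];
}

// e.g. dragging a shape:
// applyOp(doc, { type: "set", path: ["shapes", "s1", "x"], value: 120 });
```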
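Here's the "central counter" ordering trick from the client's point of view, sketched under the assumption that the server stamps each op with the next sequence number and broadcasts everything back in stamped order (the class and method names are invented):

```ts
// Client keeps a server-confirmed doc plus its own unconfirmed ops,
// and rebuilds its rendered view whenever stamped ops arrive.
class ClientDoc {
  confirmed: Doc;      // state as of the last server-stamped op
  view: Doc;           // confirmed + our pending ops (what we render)
  pending: Op[] = [];  // our ops the server hasn't stamped yet
  seq: number;         // sequence number of `confirmed`

  constructor(snapshot: Doc, seq: number) {
    this.confirmed = snapshot;
    this.view = structuredClone(snapshot);
    this.seq = seq;
  }

  localEdit(op: Op) {
    this.pending.push(op);
    applyOp(this.view, op); // optimistic; also send the op to the server here
  }

  // called for every stamped op the server broadcasts, in seq order
  onStamped(seq: number, op: Op, mine: boolean) {
    this.seq = seq;
    applyOp(this.confirmed, op);
    if (mine) this.pending.shift(); // our op landed where we hoped
    // if someone else's op got in first, our pending ops effectively
    // rebase: replay them on top of the new confirmed state
    this.view = structuredClone(this.confirmed);
    for (const p of this.pending) applyOp(this.view, p);
  }
}
```

With set/delete ops this is effectively last-write-wins, which is usually fine for property edits and is exactly where text is the exception that wants CRDT-style treatment.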
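And the blob + transaction log split on the server, assuming Node and local files purely for illustration. The durability rule is: log first, materialize the blob lazily:

```ts
import { appendFileSync, writeFileSync } from "node:fs";

type Stamped = { seq: number; op: Op };

// One project's server-side state: a snapshot blob plus the ops
// received since that blob was last materialized.
class ProjectState {
  blob: Doc;             // last materialized snapshot
  tail: Stamped[] = [];  // ops since `blob` was built
  seq: number;           // last assigned sequence number

  constructor(blob: Doc, seq: number) {
    this.blob = blob;
    this.seq = seq;
  }

  accept(op: Op): Stamped {
    const stamped = { seq: ++this.seq, op };
    // log first: if the server crashes, the log is enough to recover
    appendFileSync("project.log", JSON.stringify(stamped) + "\n");
    this.tail.push(stamped);
    if (this.tail.length >= 1000) this.compact();
    return stamped; // broadcast this to every connected client
  }

  // rebuild the big blob only occasionally, not on every op
  compact() {
    for (const { op } of this.tail) applyOp(this.blob, op);
    writeFileSync("project.json", JSON.stringify({ seq: this.seq, blob: this.blob }));
    this.tail = [];
  }

  // a newly connected client applies `tail` on top of `blob` itself
  connectPayload() {
    return { blob: this.blob, tail: this.tail };
  }
}
```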
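Finally, the Durable Objects point: one object per project gives you a single place where all of that project's ops funnel through, which is exactly what the sequence counter needs. Very rough sketch (you'd swap the filesystem calls above for state.storage, and the ambient types come from @cloudflare/workers-types):

```ts
// One Durable Object instance per project = a per-project central authority.
export class Project {
  project = new ProjectState({ shapes: {} }, 0); // from the sketch above
  sockets = new Set<WebSocket>();

  constructor(readonly state: DurableObjectState) {}

  async fetch(_request: Request): Promise<Response> {
    // standard Workers WebSocket handshake
    const { 0: client, 1: server } = new WebSocketPair();
    server.accept();
    this.sockets.add(server);
    // new client gets blob + pending tail to catch up
    server.send(JSON.stringify(this.project.connectPayload()));
    server.addEventListener("message", (e) => {
      const stamped = this.project.accept(JSON.parse(e.data as string));
      for (const ws of this.sockets) ws.send(JSON.stringify(stamped));
    });
    return new Response(null, { status: 101, webSocket: client });
  }
}
```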