Some other interesting points:

- The write API is sync, but it has a hidden async await: when you do your next output with a response, if the write fails the runtime will replace the response with an HTTP failure. This allows the runtime to auto-batch writes and optimistically assume they will succeed, without the user explicitly handling the errors or awaits. (See the sketch after this list.)

- There are no read transactions, which would be useful for getting a pointer to a snapshot at a point in time.

- Each runtime instance is limited to 128 MB of RAM.

- WebSockets can hibernate, and you do not have to pay for the time they are sleeping. This allows your clients to remain connected even while the DO is sleeping.

- They have a kind of auto-RPC ability where you can talk to other DOs or Workers as if they were normal JS calls, but they can actually be calling another data center. The runtime handles the serialisation and parsing.
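To make the first point concrete, here is a minimal sketch of that sync-write pattern using the documented `ctx.storage.sql` API of SQLite-backed DOs; the counter schema is made up for illustration:

```typescript
import { DurableObject } from "cloudflare:workers";

// Hypothetical counter DO illustrating the synchronous write API.
export class Counter extends DurableObject {
  async fetch(request: Request): Promise<Response> {
    // No `await` and no explicit error handling: the writes are queued,
    // and the runtime holds the Response below until they are confirmed
    // durable. If a write fails, the runtime replaces this Response with
    // an HTTP error instead.
    this.ctx.storage.sql.exec(
      "CREATE TABLE IF NOT EXISTS counter (id INTEGER PRIMARY KEY, n INTEGER)"
    );
    this.ctx.storage.sql.exec(
      "INSERT INTO counter (id, n) VALUES (1, 1) ON CONFLICT (id) DO UPDATE SET n = n + 1"
    );
    const row = this.ctx.storage.sql
      .exec("SELECT n FROM counter WHERE id = 1")
      .one();
    return new Response(`count: ${row.n}`);
  }
}
```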
> ..each DO constantly streams a sequence of WAL entries to object storage - batched every 16MB or every ten seconds.

Which also means it may take 10 seconds before you can (reliably) read the write globally.

I keep failing to see how this can replace regionally placed database clusters, which can serve a continent in milliseconds.

Edit: I know it uses streams, but those go only to 5 followers, and CF have hundreds of datacenters. There is no physical way to guarantee reads in seconds unless all instances of the SQLite database are always connected, and even then, packet latency will cause issues.
One thing I don't understand about Durable Objects yet is where they are physically located.

Are they located in the region that hosted the API call that caused them to be created in the first place?

If so, is there a mechanism by which a DO can be automatically migrated to another location if it turns out that e.g. they were created in North America but actually all of the subsequent read/write traffic to them comes from Australia?
Does anyone else struggle to wrap their head around a lot of this new cloud stuff?

I have 15+ years of experience building for the web using a Laravel / Postgres / Redis stack, and I read posts like this and just think, "not for me".
I really love the Durable Object design, particularly because it's easy to understand how it works on the inside. Unlike lots of other solutions designed for realtime data stuff, Durable Objects have a simplicity to them, much like Redis and Italian food. You can see all the ingredients. Given enough time and resources (and datacenters :) ), a competent programmer could read the DO docs and reimplement something similar. This makes it easy to judge the tradeoffs involved.

I do worry that DOs are great for building fast, low-overhead, realtime experiences (e.g. five people editing a document in realtime) but make it very hard to do analyses and overviews (which groups of people have been editing which documents in the last week?). Putting the data inside SQLite might make that even harder: you'd have to somehow query lots and lots of little SQLite instances and then merge the results together (a sketch of what that fan-out might look like follows). I wonder if there's anything for this with DOs, because this is what keeps bringing me back to Postgres time and time again: it works for core app features *and* for overviews, BI, etc.
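As far as I know there's no built-in cross-DO query, so the fan-out would look roughly like the sketch below, run from a Worker. The `DOCS` binding, the `getEditorsSince` RPC method, and `listDocumentIds` (some external index of which DOs exist, since DO namespaces can't be enumerated) are all assumptions:

```typescript
interface Env {
  DOCS: DurableObjectNamespace; // hypothetical binding: one DO per document
}

// Hypothetical external index; you'd track document IDs somewhere
// (another DO, D1, KV, ...) because namespaces aren't enumerable.
declare function listDocumentIds(env: Env): Promise<string[]>;

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const since = Date.now() - 7 * 24 * 60 * 60 * 1000; // last week
    const docIds = await listDocumentIds(env);

    // Fan out: each document DO queries its own little SQLite database.
    const perDoc = await Promise.all(
      docIds.map(async (docId) => {
        const stub: any = env.DOCS.get(env.DOCS.idFromName(docId));
        return { docId, editors: (await stub.getEditorsSince(since)) as string[] };
      })
    );

    // Merge: the cross-DO "join" happens here, in application code.
    const editorToDocs = new Map<string, string[]>();
    for (const { docId, editors } of perDoc) {
      for (const editor of editors) {
        const docs = editorToDocs.get(editor) ?? [];
        docs.push(docId);
        editorToDocs.set(editor, docs);
      }
    }
    return Response.json(Object.fromEntries(editorToDocs));
  },
};
```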
This is a really interesting design, but these kinds of smart systems always inhabit an uncanny valley for me. You need them in exactly two cases:

1. You have a really high-load system that you need to figure out some clever ways to scale.

2. You're working on a toy project for fun.

If #2, fine, use whatever you want, it's great.

If this is production, or for Work(TM), you need something proven. If you don't know you *need* this, you don't need it; go with a boring Postgres database and a VM or something.

If you do know you *need* this, then you're kind of in a bind: it's not really very mature yet, as it's pretty new, and you're probably going to hit a bunch of weird edge cases, which you probably don't really want to have to debug or live with.

So, who are these systems for, in the end? They're so niche that they can't easily mature and be used by lots of serious players, and they're too complex, with too many tradeoffs, to be used by 99.9% of companies.

The only people I'm sure are the target market for this sort of thing are the developers who see something shiny, build a company (or, worse, build someone else's company) on it, and then regret it pretty soon and move to something else (hopefully much more boring).

Does anyone have more insight on this? I'd love to know.
I'm constantly impressed by the design of DOs. I think it's easy to have a knee-jerk reaction that something is wrong with doing it this way, but in reality I think this is exactly how a lot of real products are implicitly structured: a lot of complex work done at very low scale per atomic thing (by which I mean anything that needs to be transactionally consistent).

In retrospect, what we ended up building at Framer for projects with multiplayer support, where edits are replicated at 60 FPS while being correctly ordered for all clients, is a more applied version of what DOs are doing now. We also ended up with something like a WAL of JSON object edits, so that if a project instance crashed, its backup could pick up as if nothing had happened, even if committing the JSON patches into the (huge) project data object hadn't had time to occur (on an every-N-updates/M-seconds basis, just like described here).
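This is not our actual code, but the pattern reduces to something like the following sketch (names and thresholds invented): append every edit to a log immediately, fold the log into the big document periodically, and replay the tail on recovery.

```typescript
// Minimal sketch of a JSON-patch WAL: durable append first, periodic
// commit into the large document, replay of the tail on crash recovery.
type Patch = { path: string[]; value: unknown };

class PatchLog {
  private log: Patch[] = [];
  private doc: Record<string, unknown> = {};
  private lastCommit = Date.now();

  apply(patch: Patch): void {
    this.log.push(patch); // a durable append would go here (the "WAL")
    // Commit every N updates or M seconds, as described above.
    if (this.log.length >= 100 || Date.now() - this.lastCommit > 5000) {
      this.commit();
    }
  }

  private commit(): void {
    for (const p of this.log) setPath(this.doc, p.path, p.value);
    this.log = []; // in practice: persist the doc snapshot, then truncate
    this.lastCommit = Date.now();
  }

  // A backup instance reloads the last snapshot and replays any log
  // entries written after it, picking up as if nothing had happened.
  recover(snapshot: Record<string, unknown>, tail: Patch[]): void {
    this.doc = snapshot;
    for (const p of tail) setPath(this.doc, p.path, p.value);
  }
}

function setPath(obj: any, path: string[], value: unknown): void {
  let cur = obj;
  for (const key of path.slice(0, -1)) cur = cur[key] ??= {};
  cur[path[path.length - 1]] = value;
}
```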
This is probably a really stupid question, but how would one handle schema migrations with this kind of setup? My understanding is that it's aimed at having a database per tenant (or even more broken down than that). Is there a sane way of handling schema migrations, or is the expectation that these databases are more short-lived, so you support multiple versions of the db (DO) until it's deleted?

In my head, this would be a fun way to build a bookmark service with a DO per user. But as soon as you want to add a new field to an existing table, you meet a pretty tricky problem: getting that change to each individual DO. Perhaps that example is too long-lived, though, and this is designed for more ephemeral usage.

If anyone has any experience with this, I'd be really interested to know what you're doing.
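One pattern I've seen suggested (an assumption on my part, not official guidance) is lazy, per-instance migrations: each DO checks a schema version stored in its own SQLite when it wakes up and applies any missing migrations before serving requests, so the change reaches every DO as it's next used. A sketch for the bookmark example; the migration list and `_meta` table are made up:

```typescript
import { DurableObject } from "cloudflare:workers";

interface Env {}

// Append-only list: index in this array == schema version.
const MIGRATIONS: string[] = [
  "CREATE TABLE bookmarks (id INTEGER PRIMARY KEY, url TEXT NOT NULL)",
  "ALTER TABLE bookmarks ADD COLUMN title TEXT", // the "new field" case
];

export class UserBookmarks extends DurableObject {
  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
    // Block incoming requests until this instance's schema is current.
    ctx.blockConcurrencyWhile(async () => {
      ctx.storage.sql.exec(
        "CREATE TABLE IF NOT EXISTS _meta (key TEXT PRIMARY KEY, value INTEGER)"
      );
      const row = ctx.storage.sql
        .exec("SELECT value FROM _meta WHERE key = 'schema_version'")
        .toArray()[0];
      let version = row ? Number(row.value) : 0;
      // Apply only the migrations this instance hasn't seen yet.
      while (version < MIGRATIONS.length) {
        ctx.storage.sql.exec(MIGRATIONS[version]);
        version++;
      }
      ctx.storage.sql.exec(
        "INSERT INTO _meta (key, value) VALUES ('schema_version', ?) " +
          "ON CONFLICT (key) DO UPDATE SET value = ?",
        version,
        version
      );
    });
  }
}
```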
Noticing CF pushing devs to use DOs for everything over Workers these days. Even WebSocket connections on Workers get timed out after ~30s, and the recommended way is to use a DO for them.
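For reference, the hibernation API they push you toward looks roughly like this; the `Room` class and the broadcast logic are made up, but `acceptWebSocket`, `getWebSockets`, and the `webSocketMessage` handler are the documented hibernation interface:

```typescript
import { DurableObject } from "cloudflare:workers";

export class Room extends DurableObject {
  async fetch(request: Request): Promise<Response> {
    const pair = new WebSocketPair();
    const [client, server] = Object.values(pair);
    // Unlike server.accept(), this hands the socket to the runtime, so
    // the DO can be evicted from memory between messages (and stop
    // billing) without dropping the connection.
    this.ctx.acceptWebSocket(server);
    return new Response(null, { status: 101, webSocket: client });
  }

  // The runtime wakes the DO and calls this when a message arrives.
  async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {
    for (const peer of this.ctx.getWebSockets()) {
      if (peer !== ws) peer.send(message); // naive broadcast to the room
    }
  }
}
```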
What I don’t understand is why, in the flight seat map example provided, you create a DO per flight. So does a DO correspond to a “model” in MVC architecture? What if I used DOs in a per-tenant way, so one DO per user? And then how do I query or “join” across all DOs to find all full flights? I guess you would have to design your DOs such that joins are not required?
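To make the question concrete: as I understand it, "one DO per flight" just means deriving the DO's ID from the flight number, so all bookings for a flight are serialized through one object. A minimal sketch (the `FLIGHTS` binding name is my invention):

```typescript
interface Env {
  FLIGHTS: DurableObjectNamespace; // hypothetical binding: one DO per flight
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const flightNo = url.searchParams.get("flight") ?? "BA123";
    // The same name always maps to the same DO instance, wherever it
    // lives, so seat reads/writes for one flight never race each other.
    const stub = env.FLIGHTS.get(env.FLIGHTS.idFromName(flightNo));
    return stub.fetch(request);
  },
};
```

Which would mean the last guess is right: a query like "all full flights" needs either a fan-out over every flight DO or a separate index maintained outside the DOs.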
Durable Objects seem so cool, but the pricing always scares me (specifically, having to worry about getting hibernation right). They’d be a great fit for our Yjs document-based strategy, but while everything in prod still works on plain ol’ Redis and Postgres, it’s hard to justify an exploration.
Re https://where.durableobjects.live/ — why the hell are they still operating in Russia?
I would love to work with Durable Objects and all the other cool stuff from Cloudflare, but I’m really hesitant to make a single cloud provider’s technology the backbone of my application. If CF decides to pull the plug, or to charge a lot more, the only way to migrate elsewhere would be rebuilding the entire app.

As long as there aren’t any comparable technologies, or abstraction layers on top of DOs, I’m not going to make the leap of faith.
I'd love to know how they have hooked the VFS with the WAL to monitor changes. SQLite's WAL layer deals with page numbers, whereas the VFS deals with files and byte offsets. I am curious to understand how they mapped the two, how they get new writes to the WAL, and how they read from the WAL.
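I don't know Cloudflare's actual implementation, but the documented SQLite WAL file format makes part of the mapping mechanical: the `-wal` file is a 32-byte header followed by fixed-size frames (a 24-byte frame header plus one page), and the first 4 bytes of each frame header are the page number, big-endian. So a VFS intercepting writes to the `-wal` file can recover frames and page numbers from byte offsets alone. A sketch of that arithmetic:

```typescript
// Derived from the documented SQLite WAL format; this is my guess at
// the offset-to-page mapping, not Cloudflare's actual implementation.
const WAL_HEADER_SIZE = 32; // bytes before the first frame
const FRAME_HEADER_SIZE = 24; // bytes before each frame's page data

// Which frame does a raw byte offset into the -wal file fall in?
function frameIndexForOffset(offset: number, pageSize: number): number {
  const frameSize = FRAME_HEADER_SIZE + pageSize;
  return Math.floor((offset - WAL_HEADER_SIZE) / frameSize);
}

// The first 4 bytes of a frame header are the page number (big-endian),
// so a VFS seeing a write to a frame header can parse it directly.
function pageNumberFromFrameHeader(frameHeader: Uint8Array): number {
  return new DataView(frameHeader.buffer, frameHeader.byteOffset).getUint32(0);
}
```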