If one HTTP request fans out into many serial database queries (e.g. a 1:10 ratio of requests to DB queries), it doesn't make sense to have the request hit the low-latency edge worker and then send 10 serial queries to a faraway database server, since you incur the round-trip latency on every query.<p>In that case you may as well run an HTTP server next to your database: you pay the high HTTP request latency once and then get < 1ms latency for each of the 10 queries.<p>Is this correct?
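<p>A back-of-the-envelope sketch of that tradeoff (all latency numbers are assumptions for illustration, not measurements):

    // Hypothetical latencies in ms: user->edge RTT, edge->faraway-DB RTT,
    // user->colocated-origin RTT, and a query against a local database.
    const USER_TO_EDGE = 10, EDGE_TO_DB = 50, USER_TO_ORIGIN = 100, LOCAL_QUERY = 1;
    const QUERIES = 10; // the 1:10 request:db-queries ratio above

    // Edge worker issuing 10 serial queries to a faraway database:
    const edgeTotal = USER_TO_EDGE + QUERIES * EDGE_TO_DB;      // 510ms

    // HTTP server running next to the database:
    const originTotal = USER_TO_ORIGIN + QUERIES * LOCAL_QUERY; // 110ms

    console.log({ edgeTotal, originTotal }); // { edgeTotal: 510, originTotal: 110 }

Even paying 10x the request latency once beats paying the database round trip ten times over.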
I'd personally prefer if Cloudflare spent their time developing <i>something</i> like a global relational database that was wire-compatible with Postgres.<p>My reasoning is that all their offerings are geared toward the edge, and while it's nice to support relational databases as they currently exist, it seems to run counter to why you'd build on Cloudflare's edge to begin with.<p>This announcement seems more to suggest that I should not pursue an edge-based architecture and should stick with containers instead.<p>If I'm going to be using Workers, I'm already giving up a ton of capabilities available on a native OS. In return, I'd hope for something wild, like a fire-and-forget global relational database.<p>If my original ask was too tall an order, I think a global GraphQL service would fit more nicely with their current stack and offer most of the same benefits as direct database access (i.e. schema as a service, accessed over HTTP).
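<p>To make that last idea concrete, here's roughly what "schema as a service accessed over HTTP" could look like from inside a Worker; the endpoint and schema are made up for illustration:

    // Hypothetical: a Worker querying an imagined global GraphQL endpoint.
    interface User { id: string; name: string; }

    async function getUser(id: string): Promise<User> {
      const res = await fetch("https://graphql.example.com/", { // placeholder URL
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify({
          query: "query ($id: ID!) { user(id: $id) { id name } }",
          variables: { id },
        }),
      });
      const { data } = (await res.json()) as { data: { user: User } };
      return data.user;
    }

One HTTP round trip per logical read, with replication and caching owned by the service behind the schema.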
I'm a little confused as to why someone would use this.<p>There are a lot of benefits to keeping an entire instance of a service in one DC: bandwidth is plentiful, latency is low, and a whole host of tools are at your disposal. Of course, there are downsides too, mainly latency to users and sensitivity to DC outages.<p>On the other hand, running things at the edge has basically the opposite tradeoffs: good latency to users and tolerance to DC outages, but you can't rely on everything "behind" the service running in the same cluster, and fewer tools support this deployment environment.<p>For their past compute/storage offerings, Cloudflare seemed to be pursuing the latter while trying to minimize the downsides. I think this is a good strategy, since it offers value without directly competing with the major clouds.<p>But this seems to combine the downsides of both approaches (localizing to a DC and running on the edge) while offering none of the advantages. It seems better (in this case) to use the edge as a proxy and just run the code in the DC.<p>I'm curious, what's the intent here? To transition people from AWS Lambda to CF Workers?
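<p>By "use the edge as a proxy" I mean something like this minimal Worker sketch (module syntax; the origin hostname is a placeholder), which keeps the code and the data together in the DC:

    // Forward every request to an origin server running next to the database.
    export default {
      async fetch(request: Request): Promise<Response> {
        const url = new URL(request.url);
        // Hypothetical origin colocated with the DC; swap in your own host.
        url.hostname = "origin.example.com";
        // Pass method, headers, and body through unchanged.
        return fetch(new Request(url.toString(), request));
      },
    };

The edge still terminates TLS and absorbs the first hop, but any request-to-db-queries fan-out happens entirely inside the DC.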