TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

Relational Database Connectors for Cloudflare Workers

49 points by elithrar over 3 years ago

3 comments

justsomeuser over 3 years ago
If one HTTP request is split into many serial database queries (e.g. 1:10, request:db-queries), it doesn't make sense to have the request hit the low-latency edge worker and then send 10x the queries to a faraway database server, as you incur the latency cost for each db query.

In this case you may as well run an HTTP server next to your database, incur the high HTTP request latency once, and have < 1 ms latency for each of the 10 queries.

Is this correct?
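The arithmetic behind this comment can be sketched as follows. The round-trip times below are illustrative assumptions for the sake of the comparison, not measurements of any real deployment:

```python
# Illustrative round-trip times in milliseconds (assumptions, not measurements).
RTT_USER_TO_EDGE_MS = 10     # user to a nearby edge worker
RTT_USER_TO_ORIGIN_MS = 100  # user directly to a faraway origin DC
RTT_EDGE_TO_DB_MS = 100      # edge worker to the faraway database
RTT_LOCAL_DB_MS = 1          # HTTP server co-located with the database
N_QUERIES = 10               # serial queries per request (the 1:10 ratio)

# Edge worker: the first hop is cheap, but every serial query
# pays the long-haul latency to the database.
edge_total = RTT_USER_TO_EDGE_MS + N_QUERIES * RTT_EDGE_TO_DB_MS

# Server next to the DB: one expensive hop from the user,
# then sub-millisecond latency for each query.
origin_total = RTT_USER_TO_ORIGIN_MS + N_QUERIES * RTT_LOCAL_DB_MS

print(edge_total)    # 1010
print(origin_total)  # 110
```

Under these assumptions the serial-query pattern makes the edge worker roughly an order of magnitude slower end to end, which is the tradeoff the comment is describing; batching the queries into a single round trip would change the picture.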
Comment #29229481 not loaded
lucasyvas over 3 years ago
I'd personally prefer if Cloudflare spent their time developing *something* like a global relational database that was wire compatible with Postgres.

My reasoning is that all their offerings are geared toward the edge, and while it's nice to support relational databases as they currently are, it seems to counter why you'd build on Cloudflare's edge to begin with.

This seems more to be suggesting that I should not pursue an edge-based architecture and stick with containers instead.

If I'm going to be using workers, I'm already giving up a ton of capabilities possible on a native OS. In return, I'd hope for something wild, like a fire-and-forget global relational database.

If my original ask was too tall an order, I think a global GraphQL service would fit nicer with their current stack and offer most of the same benefits as direct database access (i.e. schema as a service accessed over HTTP).
Comment #29228601 not loaded
uluyol over 3 years ago
I'm a little confused as to why someone would use this.

There are a lot of benefits to keeping an entire instance of a service in one DC. BW is plentiful, latency is low, and a whole host of tools are at your disposal. Of course, there are downsides too, mainly latency to users and sensitivity to DC outages.

On the other hand, running things at the edge basically has the opposite tradeoffs. Good latency to users and tolerance to DC outages, but hard to rely on everything "behind" the service to be running in the same cluster. And fewer tools support this deployment environment.

For their past compute/storage offerings, Cloudflare seemed to be pursuing the latter while trying to minimize the downsides. I think this is a good strategy since it offers value without directly competing with the major clouds.

But this seems to combine the downsides of both approaches (localize to a DC and run on the edge) while offering none of the advantages. It seems better (in this case) to use the edge as a proxy and just run the code in the DC.

I'm curious, what's the intent here? To transition people from AWS Lambda to CF Workers?
Comment #29229598 not loaded
Comment #29228561 not loaded