Relational Database Connectors for Cloudflare Workers

49 points by elithrar over 3 years ago

3 comments

justsomeuser over 3 years ago
If one HTTP request is split into many serial database queries (e.g. 1:10, request:db-queries), it doesn't make sense to have the request hit the low-latency edge worker and then send 10x the queries to a faraway database server, as you incur the latency cost for each DB query.

In this case you may as well run an HTTP server next to your database, incur the high HTTP request latency once, and have < 1 ms latency for each of the 10 queries.

Is this correct?
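A back-of-the-envelope sketch of the latency math in this comment, in TypeScript. The round-trip and per-query numbers below are assumptions chosen purely for illustration, not figures from the post or from Cloudflare:

```typescript
// Rough latency comparison for one request that fans out into 10 serial DB queries.
// All numbers are assumed for illustration only.
const RTT_EDGE_TO_DB_MS = 50; // assumed round trip: edge worker -> faraway database
const LOCAL_QUERY_MS = 1;     // assumed per-query latency when running next to the DB
const QUERIES_PER_REQUEST = 10;

// Pattern A: the edge worker issues the 10 queries serially against the remote DB,
// paying the wide-area round trip for each one.
const edgeWorkerTotalMs = QUERIES_PER_REQUEST * RTT_EDGE_TO_DB_MS; // ~500 ms

// Pattern B: the edge worker makes one HTTP call to a server co-located with the DB,
// which then runs the 10 queries over a sub-millisecond local link.
const colocatedServerTotalMs =
  RTT_EDGE_TO_DB_MS + QUERIES_PER_REQUEST * LOCAL_QUERY_MS; // ~60 ms

console.log({ edgeWorkerTotalMs, colocatedServerTotalMs });
```

Under these assumed numbers the serial edge pattern pays the wide-area round trip ten times, which is the trade-off the comment is asking about.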
lucasyvas over 3 years ago
I'd personally prefer if Cloudflare spent their time developing *something* like a global relational database that was wire compatible with Postgres.

My reasoning is that all their offerings are geared toward the edge, and while it's nice to support relational databases as they currently are, it seems to counter why you'd build on Cloudflare's edge to begin with.

This seems more to be suggesting that I should not pursue an edge-based architecture and stick with containers instead.

If I'm going to be using workers, I'm already giving up a ton of capabilities possible on a native OS. In return, I'd hope for something wild, like a fire-and-forget global relational database.

If my original ask was too tall an order, I think a global GraphQL service would fit nicer with their current stack and offer most of the same benefits as direct database access (i.e. schema as a service accessed over HTTP).
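For the "schema as a service accessed over HTTP" idea, a minimal sketch of what that could look like from a Worker, assuming a hypothetical hosted GraphQL endpoint (the URL and schema below are invented for illustration):

```typescript
// Minimal sketch: a Worker queries a hypothetical GraphQL endpoint over HTTP
// instead of opening a database connection. Endpoint URL and schema are made up.
export default {
  async fetch(_request: Request): Promise<Response> {
    const resp = await fetch("https://graphql.example.com/v1", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({
        query: "query GetUser($id: ID!) { user(id: $id) { id name } }",
        variables: { id: "1" },
      }),
    });
    // Pass the JSON result straight through to the client.
    return new Response(await resp.text(), {
      headers: { "content-type": "application/json" },
    });
  },
};
```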
uluyol over 3 years ago
I'm a little confused as to why someone would use this.

There are a lot of benefits to keeping an entire instance of a service in one DC. Bandwidth is plentiful, latency is low, and a whole host of tools are at your disposal. Of course, there are downsides too, mainly latency to users and sensitivity to DC outages.

On the other hand, running things at the edge basically has the opposite tradeoffs. Good latency to users and tolerance to DC outages, but it's hard to rely on everything "behind" the service running in the same cluster, and fewer tools support this deployment environment.

For their past compute/storage offerings, Cloudflare seemed to be pursuing the latter while trying to minimize the downsides. I think this is a good strategy since it offers value without directly competing with the major clouds.

But this seems to combine the downsides of both approaches (localize to a DC and run on the edge) while offering none of the advantages. It seems better (in this case) to use the edge as a proxy and just run the code in the DC.

I'm curious, what's the intent here? To transition people from AWS Lambda to CF Workers?
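A minimal sketch of the "edge as a proxy" alternative mentioned here: the Worker terminates the client connection at the edge and forwards the request to an application running next to the database in a single DC. The origin hostname is a placeholder:

```typescript
// Minimal edge-proxy sketch: forward every request to a hypothetical origin
// that runs in the same data center as the database.
const ORIGIN = "https://app.origin-dc.example"; // placeholder origin hostname

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    // Keep the path and query string, swap in the origin host, and copy the
    // original method, headers, and body onto the forwarded request.
    const originRequest = new Request(ORIGIN + url.pathname + url.search, request);
    return fetch(originRequest);
  },
};
```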