
Monitoring latency: Cloudflare Workers vs Fly vs Koyeb vs Railway vs Render

120 points, by elieskilled, over 1 year ago

12 comments

syrusakbary, over 1 year ago
I would love to see Wasmer Edge in the next comparison!

A summary for the lazy readers:

* Cloudflare Workers outperforms the rest by a big margin (~80ms avg)
* Fly.io cold starts are not great for the Hono use case (~1.5s avg)
* Koyeb wraps the requests behind Cloudflare to optimize latency (150ms best case)
* Railway localized the app in one region (80ms best case, ~400ms rest)
* Render has some challenges scaling from cold (~600ms avg)

In my opinion, this shows that all the platform providers that use Docker containers under the hood (Fly, Koyeb, Railway, Render) only achieve good cold starts by never shutting down the app. The ones that do shut it down can only achieve ~600ms startup times at best.
mtlynch, over 1 year ago
I was surprised to see such miserable measured latency to Fly, but then I saw this note: "The primary region of our server is Amsterdam, and the fly instances is getting paused after a period of inactivity."

After they configured Fly to run nonstop, it outperformed everyone by 3x. But it seems like they're running the measurement from Fly's infrastructure, which biases the results in Fly's favor.

Also weird that they report p75, p90, p95, p99, but not median.
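For context, the "run nonstop" change being referenced here normally lives in fly.toml. A minimal sketch, assuming the Machines-based [http_service] section (exact keys and defaults may vary with the fly.toml version):

```toml
# fly.toml (sketch): keep at least one machine running instead of
# letting Fly pause the app after a period of inactivity.
[http_service]
  internal_port = 8080          # port the app listens on
  auto_stop_machines = false    # don't stop machines when traffic goes quiet
  min_machines_running = 1      # always keep one warm instance in the primary region
```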
joshstrange, over 1 year ago
Very odd that AWS Lambda/Google Cloud Functions weren't tested. Those CF numbers are impressive, though; they beat Lambda cold start by a mile.
anurag, over 1 year ago
(Render CEO) Our free services are meant for personal hobby projects that don't need to stay up all the time; I'd love to see tests (and uptime monitoring) for the $7/mo server on Render. Happy to give you credits if it helps.
richardkeller, over 1 year ago
OP's note about Johannesburg's latency is something I've noticed over the past few weeks in particular. Our servers are hosted in South Africa, yet accessing most of our sites and services from within South Africa causes traffic to be re-routed via other nodes, mostly London (LHR). This is easy to verify by appending cdn-cgi/trace onto a Cloudflare-proxied domain.

Something is definitely up with Cloudflare's Johannesburg data centre. On particularly bad days, TTFB routinely reaches 1-3 seconds. Bypassing Cloudflare immediately drops this to sub-100ms.

In the past, I would have emailed support@cloudflare.com, but it seems that this channel is no longer available for free tier users. What is the recommended approach these days for reporting issues such as this?
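For anyone who wants to reproduce that check programmatically, here is a minimal TypeScript sketch that fetches a Cloudflare-proxied domain's /cdn-cgi/trace endpoint and reads the `colo` field, i.e. the IATA code of the data centre that served the request (e.g. JNB vs LHR). The hostname is a placeholder.

```typescript
// Query Cloudflare's trace endpoint on a proxied domain and report which
// data centre (colo) handled the request. Works in Node 18+, Deno, or browsers.
async function whichColo(hostname: string): Promise<string | undefined> {
  const res = await fetch(`https://${hostname}/cdn-cgi/trace`);
  const body = await res.text();
  // The endpoint returns plain "key=value" lines, e.g. "colo=LHR".
  return body
    .split("\n")
    .find((line) => line.startsWith("colo="))
    ?.split("=")[1];
}

// Replace with your own Cloudflare-proxied domain.
whichColo("example.com").then((colo) => console.log(`served from: ${colo ?? "unknown"}`));
```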
mxstbr, over 1 year ago
I feel like this title is misleading compared to the original article. (cc @dang) Fly.io without cold starts (which is a one-line configuration change) is 2x faster than Cloudflare Workers.
tmikaeld, over 1 year ago
On which Cloudflare plan, though? On the free plan, EU visits in our case are constantly routed through Estonia and Russia, causing TTFB of about 1-5 seconds.
willsmith72, over 1 year ago
I'm curious what the results would be with a more production-like app.

E.g. if you add Prisma connecting to Postgres, presumably there's extra latency to create the client. For the Fly app, you have a server reusing the client while it's warm. Presumably for the Cloudflare Worker, you're recreating the client per request, but I'm not 100% on that. How would the latency change then for cold vs warm, and on the other platforms?
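A minimal sketch of the reuse pattern in question, using a hypothetical createDbClient() stand-in rather than Prisma itself: on Cloudflare Workers, module-scope state survives across requests handled by the same warm isolate, so the expensive client setup is paid once per cold start rather than once per request.

```typescript
// Hypothetical stand-in for an expensive client setup (e.g. Prisma + Postgres);
// swap in a real client in practice.
interface DbClient {
  query(sql: string): Promise<unknown>;
}

function createDbClient(connectionString: string): DbClient {
  // Pretend this is the costly part (engine init, TLS handshake, connection pool, ...).
  console.log(`initializing client for ${connectionString}`);
  return { query: async (sql) => [{ ok: true, sql }] };
}

// Module scope survives across requests on a warm isolate, so the client is
// built once per isolate (i.e. per cold start), not once per request.
let db: DbClient | undefined;

export default {
  async fetch(_request: Request, env: { DATABASE_URL: string }): Promise<Response> {
    db ??= createDbClient(env.DATABASE_URL);
    const rows = await db.query("select 1");
    return new Response(JSON.stringify(rows), {
      headers: { "content-type": "application/json" },
    });
  },
};
```

Whether a real Prisma client can actually be reused this way on Workers depends on the driver/adapter setup, which is exactly the uncertainty this comment raises.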
skybrian, over 1 year ago
I’d be curious how Deno Deploy does.
_visgean, over 1 year ago
I wonder how much the openstatus server allocation plays a role in this case - they tested from 6 different locations, but it's not clear if, for example, the openstatus servers are in closer datacenters.
elieskilled, over 1 year ago
Curious what people think of this. Seems like a huge difference. Much larger than I expected.
catlifeonmars, over 1 year ago
Those Fly.io p99 latencies are atrocious. 2.6s P99 compared to CloudFlare 1.0s. Neither one seems particularly great at first glance, but the CloudFlare worker latency does seem on par with Lambda from previous experience (I have not tested Lambda@Edge or CloudFront Functions).