OK...<p>So, I'm the tech lead for Cloudflare Workers. In complete honesty, I did not even know we ran some sort of comparison benchmark with Compute@Edge until Fastly complained about it, nor did I know about our ToS clause until Fastly complained about it. I honestly don't know anything about either beyond what's publicly visible.<p>But as long as we're already mudslinging, I'd like to take the opportunity to get a little something off my chest. Fastly has been trumpeting for years that Compute@Edge has 35 microsecond cold starts, or whatever, and repeatedly posting blog posts comparing that against 5 milliseconds for Workers, and implying that they are 150x faster. If you look at the details, it turns out that the 35 microsecond figure is actually how long they take to start a new request, given that the application is already loaded in memory. A hot start, not a cold start. Whereas Workers' 5ms includes time to load the application from disk (which is the biggest contributor to total time). Our hot start time is also a few microseconds, but that doesn't seem like an interesting number?<p>We never called this out; it didn't seem worth arguing over. But excuse me if I'm not impressed by claims of false comparisons...<p>On a serious note, I've been saying for decades that benchmarks are almost always meaningless, because different technologies will have different strengths and weaknesses, so you usually can't tell anything about how <i>your</i> use case will perform unless you actually test that use case. So, I would encourage everyone to run their own tests rather than just going on other people's numbers. It's great that Fastly has opened up C@E for self-service testing so that people can actually try it out.
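To make the cold-vs-hot distinction concrete, here is a minimal sketch of the kind of self-run test being suggested. It is an assumption-laden example, not anyone's official methodology: the URL is a placeholder for whatever you deploy, it requires Node 18+ for the built-in fetch, and it measures end-to-end latency from your client (network included), which is usually what matters for your use case anyway.<p><pre><code>// bench.mjs -- rough latency probe; a sketch, not a rigorous benchmark.
// Assumes Node 18+ (global fetch and performance) and a placeholder URL.
const url = "https://example.com/my-endpoint"; // hypothetical endpoint
const samples = [];

for (let i = 0; i < 50; i++) {
  const t0 = performance.now();
  await fetch(url);
  samples.push(performance.now() - t0);
}

// The first request after a deploy or idle period approximates a cold
// start; the remaining requests approximate hot starts.
const warm = samples.slice(1).sort((a, b) => a - b);
console.log("first request (likely cold):", samples[0].toFixed(1), "ms");
console.log("warm median:", warm[Math.floor(warm.length / 2)].toFixed(1), "ms");
</code></pre>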
I was expecting a point-by-point comparison from Fastly. Instead, this comes off incredibly defensive and evasive, basically boiling down to two irrelevant points:<p>* Rust compiled to Wasm is faster than JavaScript (ok, but I'm not going to switch my entire stack just to use your CDN)<p>* Fastly performance-throttles their free accounts (ok, but that's not Cloudflare's fault, and I definitely don't want to get into your pricing game if you're going to start offering different tiers of serverless workers... part of Cloudflare's allure is its simplicity in pricing).<p>They do have two good points:<p>* Benchmarks between CDNs should be done from the same client locations (fine, but why did you then use your own set of locations?)<p>* Cloudflare should not prohibit benchmarking of their products (fair)<p>But this whole post read like a feeble attempt at misdirection, and made me distrust Fastly as a company. The complaints about their methodology aren't solved by using your own, even more flawed, methodology that doesn't even use the same language. You could have just emailed them and asked to work together on an apples-to-apples test instead of creating an even more flawed benchmark.
As mentioned in the article, Cloudflare expressly prohibits benchmarking their services in their ToS.<p>I think it's rather disingenuous for Cloudflare to publish their own benchmarks calling out competitors when they won't allow anyone to run their own tests and comparisons.
Cloudflare not allowing benchmarks in their ToS is very sketchy; that puts them in the same tier as Oracle.<p>Cloudflare have pulled enough shady stuff by now that they've fallen out of my favor. Their generous free product bought them a lot of community goodwill, but their real face has been showing the past few years.
> Cloudflare used a free Fastly trial account to conduct their tests. Free trial accounts are designed for limited use compared to paid accounts, and performance under load is not comparable between the two.<p><a href="https://www.fastly.com/pricing/" rel="nofollow">https://www.fastly.com/pricing/</a> does not make that distinction; it says the free tier is for "Any size company looking to give Fastly a try." They make distinctions on paying for more bandwidth, not on giving you a product with inferior performance.
It's been hilarious watching competing companies publish warring blog posts declaring each other full of shit. I mean, I knew, but thanks for airing each other's dirty underpants.
Popcorn time ;) Let's wait for @jgrahamc to wake up<p>Simple solution is for Cloudflare to change TOS to allow benchmarking and for 3rd party to publish a reproducible benchmark suite.<p>However TBH benchmarking a distributed system like Cloudflare/Fastly is going to be REALLY HARD. There are 1000s of servers involved and reproducing the results might be impossible.
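For what it's worth, a reproducible suite wouldn't need to be fancy. Even publishing the raw samples alongside a record like the following would let third parties compare runs instead of arguing over a single median. This is only a sketch; the field names and the location label are made up for illustration.<p><pre><code>// percentiles.mjs -- sketch of how one benchmark run could be recorded.
// Field names are illustrative, not any standard schema.
function percentile(sorted, p) {
  return sorted[Math.min(sorted.length - 1, Math.floor(p * sorted.length))];
}

export function report(samplesMs, clientLocation) {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  return {
    timestamp: new Date().toISOString(),
    clientLocation,            // e.g. "syd-home-fibre" -- self-reported label
    sampleCount: sorted.length,
    p50: percentile(sorted, 0.50),
    p95: percentile(sorted, 0.95),
    p99: percentile(sorted, 0.99),
  };
}
</code></pre>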
In the context of "should I use Cloudflare or Fastly as my edge", I lean Cloudflare. That said, I enjoyed listening to the GraphCDN crew choosing Fastly[1] – one crucial feather in the hat being cache invalidation[2] (miss ya, Phil Karlton) – and it sounds like a solid choice.<p>[1]: <a href="https://www.youtube.com/watch?v=lpmpTJc_SP0" rel="nofollow">https://www.youtube.com/watch?v=lpmpTJc_SP0</a>
[2]: <a href="https://graphcdn.io/docs/how-to/purge-the-cache" rel="nofollow">https://graphcdn.io/docs/how-to/purge-the-cache</a>
I think statistics like these tests are easy to sway in your favor, which is why Fastly is responding. It's not clear that Fastly didn't do the same thing, but a good ole' CDN rivalry does make for a good read. Someone get more mud to sling.
BTW, for what it's worth, their JavaScript-to-Wasm runtime is open source:<p>- <a href="https://github.com/fastly/js-compute-runtime" rel="nofollow">https://github.com/fastly/js-compute-runtime</a><p>- <a href="https://github.com/tschneidereit/spidermonkey-wasi-embedding" rel="nofollow">https://github.com/tschneidereit/spidermonkey-wasi-embedding</a><p>And while it is slower than Node.js, it is still plenty fast (even if not as fast as they want), and its startup is faster than Node's (maybe better PGO might help).<p>It's insane what they built. If they succeed it will move the whole Wasm community forward (and I hope they do).
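For context on what such a runtime actually executes, here is a minimal fetch-event handler in the service-worker style used by both Cloudflare Workers (service-worker format) and Fastly's js-compute-runtime; treat the handler body itself as a purely illustrative sketch.<p><pre><code>// Minimal service-worker-style edge handler (illustrative sketch).
addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const url = new URL(request.url);
  return new Response(`hello from the edge, path=${url.pathname}`, {
    status: 200,
    headers: { "content-type": "text/plain" },
  });
}
</code></pre>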
As a consumer it’s great that both companies are benchmarking. Even if Cloudflare’s was flawed. Maybe a case of, the best way to get the right answer is to post the wrong answer.<p>We’re at least moving towards objective comparisons, talking about numbers.<p>Cloudflare prohibiting benchmarking is not good, obviously.
Awesome, now where do I sign up for the free Fastly account? Oh right, they only do enterprise, so who the hell cares? Cloudflare, OTOH, caters to the whole spectrum.
I also read that Fastly is slower at specific times during the week, slower for free test accounts, slower at executing JavaScript (hence the switch to Rust), and slower in locations carefully selected by Cloudflare.<p>I'm not saying that the benchmark from Cloudflare is fair, but it's a bit funny that their response contains a "let's switch to Rust" or "let's benchmark next to our datacenters".
I'm regularly surprised that Cloudflare gets so much love, especially here on HN. Cloudflare is slow, it man-in-the-middles websites, it shows a redirect page randomly for no apparent reason on websites that it powers, it does questionable things at the DNS level, it randomly shows captchas for no apparent reason, the list goes on. Almost every website nowadays uses this monstrosity. We're handing over the internet on a silver platter to them. I think there are better alternatives, and I'm so glad at least Fastly is debunking their bs experiment.
This comes across as incredibly defensive in tone from Fastly. They could have just reported the results of their test. But calling their competitors outright liars reflects worse on them than anyone else.
Only semi-related, but why does Cloudflare perform so poorly in Oceania? I've noticed less than stellar performance in Oceania with Cloudflare's CDN, so to see similar stats with workers makes it seem it's not a product-specific issue.
<p><pre><code>  > We used a Wasm binary compiled from Rust rather than JavaScript.
> We know support for JavaScript is important to many customers, but we're not yet satisfied with the performance of Compute@Edge packages compiled from JavaScript. That's why it's in beta. When a product is ready for production, we remove the beta designation.
</code></pre>
I can't really square the idea that the 50-150ms time delays in question come down to the actual programming language, but it is absolutely believable that a longer test reduces the median latency compared with a high-load test over a shorter duration.<p>Having said that, I would notice anything over 150ms in my clicks, but wouldn't care whether something took under 100 or under 50 - except that the lower the latency, the less scale is needed to serve the same number of active users (it becomes a question of cost rather than response time).
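The latency-vs-scale point can be made concrete with Little's Law (average requests in flight ≈ arrival rate × latency); the numbers below are made up purely to illustrate the cost argument.<p><pre><code>// Little's Law: average in-flight requests = arrival rate x latency.
// Made-up numbers, only to illustrate why lower latency cuts the
// concurrency (and therefore capacity) needed for the same traffic.
function inFlight(requestsPerSecond, latencyMs) {
  return requestsPerSecond * (latencyMs / 1000);
}

console.log(inFlight(1000, 150)); // 150 concurrent requests at 150 ms
console.log(inFlight(1000, 50));  //  50 concurrent requests at 50 ms
console.log(inFlight(1000, 15));  //  15 concurrent requests at 15 ms
</code></pre>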
The “DeWitt Clause”, sometimes also known as the “Oracle Clause”, has been a fixture of ToS documents for years. The fact that it features in Cloudflare’s docs only serves to remind us that entrenched practices die hard.
<a href="https://www.brentozar.com/archive/2018/05/the-dewitt-clause-why-you-rarely-see-database-benchmarks/" rel="nofollow">https://www.brentozar.com/archive/2018/05/the-dewitt-clause-...</a>
"A fairer test on this point would have compared Rust on Compute@Edge with JavaScript on Cloudflare Workers, which are at more comparable stages of the product lifecycle."<p>Are there any widely agreed upon benchmarks on how emscriptened rust compares to untyped js? It feels like a large portion of the reported Delta boils down to that.
I am glad I said no to working for both. When I saw that both are politically active and showed childlike behavior during the interviews, I said no, and decided to dedicate some of my time to dog and cat rescue.
Seriously, who cares about 5ms difference? Or even 25ms. I don't. And Asia is just used to longer latencies.
Edge compute became a commodity even before it was born.
Cloudflare seems to encourage benchmarks of Workers Sites:<p><a href="https://blog.cloudflare.com/workers-sites/" rel="nofollow">https://blog.cloudflare.com/workers-sites/</a>