The benchmarks appear to show AWS winning by a considerable margin in every test, and it is cheaper at all quoted price points, yet it barely edges out Azure in the cost-performance table. This appears to be because the comparison uses Azure's reserved pricing while sticking with on-demand pricing for AWS and GCP. That is misleading, as both AWS and GCP also offer discounts for yearly commitments.
The article ends up feeling rather underwhelming. The main difference, and the advantage AWS holds, is that it uses the newer, more advanced (and more expensive) Neoverse V1 design, while GCP and Azure are based on the older, cheaper, and less performant Neoverse N1. Those gaps are largely down to how the chips were designed by ARM. One could argue that AWS also adds its own secret sauce, but so far that seems unlikely. A cursory search turns up a Phoronix article [0] with a much more in-depth comparison between V1 and N1 (via AWS's c7g vs. c6g instance types). There are also upcoming N2 and V2 designs; NVIDIA's Grace CPU is reportedly based on V2, which will be interesting to watch.<p>The 41% discount thrown in at the end for Azure, without any explanation, was also jarring. Maybe there truly is a promotional rate for Azure's ARM instances, but as another poster pointed out, it's likely reserved pricing, which is available from all three providers.<p>[0] <a href="https://www.phoronix.com/review/aws-graviton3-c7g" rel="nofollow">https://www.phoronix.com/review/aws-graviton3-c7g</a>
I was hoping to see the results, but the images were all blurry dummy placeholders (some sort of lazy loading?). The images had text saying "click here for preview", but when I clicked, nothing happened (I have ad-blockers, so maybe some JS was disabled).<p>But what the heck, this is not how webpages should work - click here to "unblur" the image? Why add such an extra step? To save bandwidth costs?
Curious how they would compare to a non-cloud option <a href="https://www.hetzner.com/dedicated-rootserver/matrix-rx" rel="nofollow">https://www.hetzner.com/dedicated-rootserver/matrix-rx</a>.
I think the title is misleading - it is not a server performance comparison, it is just a comparison of a single application running a single kind of workload on a fixed dual-core setup.<p>It also lacks basic insight into the results, e.g., if both AWS and GCP are using ARM's Neoverse V1 design, why is there such a significant performance gap between those two servers? Maybe it was just caused by some bad software configuration in the stack, which would invalidate all the related results.<p>When there are up to 64 cores available, why were only 2 cores used? Surely the other 62 cores are relevant, right?
So... the news here is that GCP seems to be significantly slower in practice, with all three products priced at the same point.<p>But... the weird question is why the Azure financials are funny. The listed prices are about the same, but the "annual cost" for Azure seems to include a "41% off" discount that I can't find an explanation for. And then they use the latter in the calculation of price efficiency, which seems... what? This is the kind of thing Comcast does in its marketing. What is that discount, and what's the justification for assuming that all users get it in perpetuity?
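To make the complaint above concrete, here is a rough sketch of how applying a reserved-pricing discount to only one provider can flip a cost-performance ranking. Every number here is made up for illustration (the article's actual prices and scores are not reproduced); the point is only the arithmetic.

```python
# Hypothetical on-demand hourly prices and benchmark scores (higher = faster).
# These numbers are invented for illustration, not taken from the article.
providers = {
    "AWS":   {"price": 0.040, "score": 100},
    "GCP":   {"price": 0.040, "score": 80},
    "Azure": {"price": 0.040, "score": 85},
}

HOURS_PER_YEAR = 24 * 365

def cost_performance(price_per_hour, score):
    """Benchmark score per dollar of annual cost (higher = better value)."""
    return score / (price_per_hour * HOURS_PER_YEAR)

# Apples-to-apples: every provider at on-demand pricing.
fair = {name: cost_performance(p["price"], p["score"])
        for name, p in providers.items()}

# The article's apparent approach: apply a 41% discount to Azure only.
skewed = dict(fair)
skewed["Azure"] = cost_performance(providers["Azure"]["price"] * (1 - 0.41),
                                   providers["Azure"]["score"])

print(sorted(fair, key=fair.get, reverse=True))    # AWS leads on equal terms
print(sorted(skewed, key=skewed.get, reverse=True))  # Azure jumps ahead
```

With equal pricing AWS comes out on top; discount one provider's annual cost by 41% and it leapfrogs the rest, even though nothing about the hardware changed.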
I just ran into issues last night with t4g instances.<p>For the most part they have been great, and the performance has been better for ASP.NET apps than on Intel/AMD.<p>But I ended up with a StackOverflowException from System.Text.Json in my personal project that only occurs on ARM and not on other architectures.<p>Still love the Graviton instances.
I have a process in AWS that can run on ARM or AMD64 machines.<p>It runs on AMD64 machines because, for some reason, the Graviton instances are killed without warning by AWS.<p>So yeah, great performance, as long as you don't actually need it.