Workloads on Arm-based AWS instances

91 points by BiraIgnacio over 1 year ago

14 comments

TheDong over 1 year ago
In the "test setup", it says: "a t3a.micro" and "a t4g.micro".

To me, this implies they used a single ec2 instance of each size. However, ec2 instance p99s or so can be impacted by "noisy neighbors", especially on the burstable types, which are intentionally oversubscribed.

It's still useful to know if, for example, t4gs are more prone to noisy neighbors, but with only one instance as a data point, you simply can't tell if it was bad luck or not.

I think this test would be much better with either only dedicated instance types, or by running it with a large n such that an individual unlucky/noisy-neighbor instance doesn't influence the results overtly.
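For what it's worth, getting a larger n is mostly a matter of scripting the launches. A minimal boto3 sketch of that idea; the AMI IDs, key name, and sample size are placeholders, not values from the article:

```python
# Minimal sketch: launch several instances of each type so that a single
# noisy-neighbor host can't dominate the results.
# AMI IDs, key name, and sample size are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

SAMPLE_SIZE = 10  # instances per type; pick whatever n the budget allows

# Each instance type needs an AMI matching its architecture.
AMIS = {
    "t3a.micro": "ami-xxxxxxxxxxxxxxxxx",  # placeholder x86_64 AMI
    "t4g.micro": "ami-yyyyyyyyyyyyyyyyy",  # placeholder arm64 AMI
}

for instance_type, ami_id in AMIS.items():
    ec2.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=SAMPLE_SIZE,
        MaxCount=SAMPLE_SIZE,
        KeyName="benchmark-key",  # placeholder
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "arm-benchmark"}],
        }],
    )
```

With, say, 10 of each, an outlier host shows up as one bad distribution among ten rather than silently skewing the whole comparison.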
iknownothow over 1 year ago
Aren't 't' instances burst instances? They need to be under constant load for a long time before their burst credits for CPU, memory, network and EBS run out, after which they fall back on their baseline performance.

> It does appear that the Arm-based instances can't consistently maintain the same performance at high request rates.

I'm unwilling to trust that statement at face value for now, given it's been tested against a 't' instance.

EDIT: Removed note about network burst credits in compute and memory optimized instances. I'm not sure if these instances have that.
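One way to check whether credits ran out during the test is to look at the CPUCreditBalance metric for the benchmark window. A rough boto3/CloudWatch sketch; the instance ID and time window are placeholders:

```python
# Rough sketch: check whether a burstable instance ran low on CPU credits
# during a benchmark window. Instance ID and time window are placeholders.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)  # assume the benchmark ran in the last hour

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=start,
    EndTime=end,
    Period=300,                # 5-minute granularity
    Statistics=["Minimum"],
)

low_point = min((p["Minimum"] for p in resp["Datapoints"]), default=None)
print("lowest CPU credit balance during the run:", low_point)
```

If the balance never approaches zero, baseline throttling wasn't the cause of the tail-latency gap.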
customizable over 1 year ago
Personal experience: we moved multiple PostgreSQL servers, including a large one using 32 vCPUs, to the equivalent ARM-based instances. The performance was about the same, but of course the ARM instances are less expensive.
LatticeAnimal over 1 year ago
Given the title, I would have expected a price/perf comparison across multiple tiers of servers. Focusing on two random (but similar) low-performance instances makes it hard to generalize.
Espressosaurus over 1 year ago
A couple recommendations for your visualization:

1) More fine-grained bins to help show the shape of the distribution (are there performance cliffs?). Try using vertical lines to denote percentile cutoffs.

2) Given the wide range between your bins, a log scale might be a good idea instead of raw frequency.

3) Try some other method of visualization. I'm not sure a histogram is useful for what you're trying to convey, at least the way it's being used here.

As it stands, the visual information is so dominated by the 99.5% case that the plots don't help illustrate your tabular data.
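As a rough illustration of 1) and 2), something along these lines; the latency array here is synthetic stand-in data, not the article's measurements:

```python
# Rough sketch of the suggestions above: fine-grained bins, a log-scaled
# count axis, and vertical lines at the percentile cutoffs.
# `latencies_ms` is synthetic stand-in data for the per-request samples.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
latencies_ms = rng.lognormal(mean=1.5, sigma=0.6, size=100_000)  # stand-in data

fig, ax = plt.subplots()
ax.hist(latencies_ms, bins=500)   # many narrow bins expose the shape of the tail
ax.set_yscale("log")              # keeps the tail visible next to the bulk
for q in (50, 99, 99.9):
    ax.axvline(np.percentile(latencies_ms, q), linestyle="--", label=f"p{q}")
ax.set_xlabel("latency (ms)")
ax.set_ylabel("requests (log scale)")
ax.legend()
plt.show()
```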
nodesocket over 1 year ago
Highly recommend ARM-based instances for RDS and ElastiCache in particular. That's an easy instance type switch and nearly idiot-proof. Switching Kubernetes cluster worker nodes is another story (though adoption of ARM-built containers is getting better).
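For RDS it really is a one-call instance class change, applied at the next maintenance window unless you choose to apply it immediately. A minimal boto3 sketch, with placeholder identifiers:

```python
# Minimal sketch of moving an RDS instance to a Graviton-based class.
# The identifier is a placeholder; the change waits for the next
# maintenance window unless ApplyImmediately is set to True.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-db",  # placeholder
    DBInstanceClass="db.r6g.large",         # Graviton-based class
    ApplyImmediately=False,                 # apply during the maintenance window
)
```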
opentokix over 1 year ago
I think the discrepancies can be attributed to the choice of the t-style instances. They are generally overcommitted.
upon_drumhead over 1 year ago
I don’t understand how 99.99999 is larger than max.
kylegalbraith over 1 year ago
We leverage Arm instances in Depot [0] to power native Docker image builds for Arm, and I would say we see a lot of performance improvement with machine start, requests per instance, and overall response rate. Granted, we aren't throwing the number of requests at our instances that this test is looking at. But we are throwing multiple concurrent Docker image builds onto instances, and generally speaking they do great.

All of that to say, I think the t3/t4 instance used in this test is a bit problematic for getting a true idea of performance.

[0] - https://depot.dev/
bloopernova over 1 year ago
The r6g Arm vCPUs we tried in our AWS Neptune performance testing always seemed to perform worse than the equivalent-in-price r5d.4xlarge we normally use. Unfortunately I didn't have time to really dig into what it was about our design/workload that caused the different results. I wish I could have dug deeper, especially since there are now more instance types available than when we ran our tests: x2g, r6i, and x2iedn.
neonsunset over 1 year ago
Right now the web framework of choice in Rust tends to be Axum. Also, there's no data on CPU utilization, which can be different when targeting ARM. You may also want to include .NET, which has really good support for ARM64.

Also, t4g instances use Graviton 2, which has, relatively speaking, weaker cores. To get the best experience you would need to compare against Graviton 3 (these are more expensive, but you can deploy to them in a denser manner).
znpy over 1 year ago
The instances chosen for the setup are absolutely the worst: t3a.micro and t4g.micro.

Such instances share vCPUs and only get bursts of dedicated CPU, after which they get throttled.

The author should have picked any of the other instances. The bare minimum to make an informed decision would be the c6g.medium or the c7g.medium.

In my experience, btw, the c7g family really seems to be closing the gap in single-threaded performance with x86-based instances.
monlockandkey over 1 year ago
I wonder how much Arm saves data centers in electricity and CPU cost. Not only cheaper to fabricate, but also cheaper to run?
59nadir over 1 year ago
Was this tested with dedicated instances? Would there be a potential difference if it was?