For the love of god, stop using CPU limits on Kubernetes (updated)

47 points by ciceryadam over 2 years ago

7 comments

ekimekim over 2 years ago
The argument against this is consistency. Without a limit set, you are only guaranteed your request's worth of CPU, but you will often be allowed more. This can create a false sense of security: your application works fine (even though it occasionally exceeds its request), until one day a neighbor happens to get thirsty and your application suddenly breaks. Limits front-load the brokenness so that it happens immediately instead of randomly.
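To make the contrast concrete, here is a minimal sketch of the two shapes being compared, written with the Kubernetes Go API types. The 500m figure and the use of the k8s.io/api module are illustrative, not taken from the thread.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Request only: the container is guaranteed 500m of CPU and may
	// burst above that while neighbors are idle -- the behaviour that
	// can mask under-provisioning until a busy neighbor shows up.
	burstable := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU: resource.MustParse("500m"),
		},
	}

	// Request plus an equal limit: throttling starts at 500m, so any
	// shortfall is visible immediately rather than only on a busy node.
	capped := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU: resource.MustParse("500m"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU: resource.MustParse("500m"),
		},
	}

	fmt.Println(burstable, capped)
}
```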
iknownothow over 2 years ago
This advice comes from tunnel vision and makes perfect sense if you know that you have exactly two pods running at any given time. But if you have exactly two pods, then why bother using k8s? IIRC one of the major selling points of K8s was on-demand or horizontal auto scaling, which means the number of pods in the cluster is dynamic.

In the context of pods dynamically spinning up and down, it's bad when a pod replica can't be allocated in the cluster "predictably", but there is nothing worse than when a new pod (new deployment) fails because "Marcus the pod" drank all the water and now I have to call DevOps and wait god knows how long before they spin up a new node to guarantee a spot for the new pod.

Bin-packing is already an NP-hard problem. If you remove CPU limits then you're adding probabilities into the mix. So, for the love of god, always use limits unless you have a very specific use case.
988747 over 2 years ago
The reason to never use CPU limits is different from the ones stated in the article. In short: the Linux kernel SUCKS. More specifically, the "Completely Fair Scheduler" (CFS) sucks at enforcing those limits. Setting any limit at all causes CFS to waste something like half of your CPU cycles on enforcing it, and only the other half is available for useful work.
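One way to see whether a CFS quota is actually biting is to read the throttling counters the kernel exposes through the container's cgroup. A hedged sketch follows; the paths differ between cgroup v1 and v2, and it assumes it runs inside the container in question.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	for _, path := range []string{
		"/sys/fs/cgroup/cpu/cpu.stat", // cgroup v1: nr_periods, nr_throttled, throttled_time (ns)
		"/sys/fs/cgroup/cpu.stat",     // cgroup v2: nr_periods, nr_throttled, throttled_usec
	} {
		data, err := os.ReadFile(path)
		if err != nil {
			// Path not present on this cgroup version; try the next one.
			continue
		}
		// nr_throttled and throttled_time/_usec show how often and for
		// how long the CFS quota actually throttled this container.
		fmt.Printf("%s:\n%s\n", path, data)
	}
}
```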
lazyant over 2 years ago
Looks to me like the author hasn't run different workloads in production clusters of any complexity. The advice is fine for a small, predictable cluster but too simplistic for any real, complex cluster.
rahen over 2 years ago
We use CPU limits at work for the simple reason that we can't autoscale deployments without having them set. An HPA will deploy a new pod each time the CPU limit has been reached for more than 30 seconds.

The whole point is to scale out, not up.
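For reference, a sketch of what such an HPA might look like using the autoscaling/v2 Go API types. The Deployment name `app`, the 80% target, and the replica bounds are illustrative; note that resource-utilization targets are computed as a percentage of the pod's CPU request.

```go
package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	minReplicas := int32(2)
	targetUtilization := int32(80) // percent of the CPU request

	hpa := autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "app"},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			// Scale the Deployment named "app" between 2 and 10 replicas.
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       "app",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 10,
			// Add replicas when average CPU usage exceeds 80% of requests.
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceCPU,
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: &targetUtilization,
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", hpa)
}
```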
skyde over 2 years ago
I don't agree with the recommendation for memory, "Always set your memory requests equal to your limits".

You can layer high-priority and low-priority services better if you leave some buffer.
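A small sketch of the two memory shapes being discussed, again with the Go API types; the values are illustrative, and the eviction behaviour described in the comments follows the Kubernetes QoS classes.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Memory request == limit (the article's advice). If CPU requests
	// and limits match as well, the pod lands in the Guaranteed QoS
	// class and is the last candidate for eviction under memory pressure.
	guaranteed := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("512Mi")},
		Limits:   corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("512Mi")},
	}

	// Memory request < limit (the "buffer" the comment describes). The
	// pod is Burstable: it can use spare memory on the node, but is
	// evicted before Guaranteed pods when the node runs short.
	burstable := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("256Mi")},
		Limits:   corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("512Mi")},
	}

	fmt.Println(guaranteed, burstable)
}
```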
birdyrooster over 2 years ago
I don't think this is a remotely compelling argument to never use limits.