Kubernetes a black hole of unpredictable spend, according to new report

126 points by eminemence, almost 4 years ago

15 comments

moksly · almost 4 years ago
I think the article's headline is a little rude to Kubernetes. I'm by no means a fan of Kubernetes, especially not in non-tech enterprise, but the article is really about the unpredictable and rising cost of moving into the cloud that is owned by the big tech companies, isn't it? Sure, Kubernetes can be part of that, but you can easily run into the same predicament without it.

The unpredictability of cost is actually the prime reason we stuck to our own cloud, where we rent the iron at a known rate (technically we buy the hardware that the company hosts, but it's not really ours, we just use it till it breaks). That is just better for a public-sector budget than paying by mileage, at least if anyone outside of the IT department bothers to look into what they are signing off on.

The really interesting part will be where we go from here. Moving from self-hosted to rented iron that we run our virtual servers on was a fairly simple move that would be easy to reverse. The move into the cloud is even easier, but unless you're careful, it could be very costly to get out.
tyingq · almost 4 years ago
<i>&quot;Less than 25 per cent of those surveyed said they could accurately predict how much they’d spend on Kubernetes to within 5 per cent of actual cost.&quot;</i><p>The premise seems off to me. Of course people have a hard time predicting the cost of an autoscaling infrastructure that they haven&#x27;t had for a long time.<p>Presumably they moved off of a fixed size infrastructure to get to Kubernetes. Where they were either paying for excess capacity on some days, or paying in the form of poor performance when demand exceeded supply.<p>Five percent accuracy seems like a high bar, and you would want a year or two to understand your seasonality and growth rate, etc.
WYepQ4dNnG · almost 4 years ago
I have experienced first-hand several cases of k8s gone wrong. In the end I have come to the conclusion that most companies don't really need the complexity of k8s.

Seriously, most k8s projects I have been involved with required so much effort to bootstrap and keep going, it just blew me away! The experience for the average developer was just frustrating and infuriating: AWS ECS to the rescue!

Some will argue: vendor lock-in! Really? I bet most services out there are already vendor locked in; just go with the flow and make your life easier.

I have seen companies fail because they invested so much in building infrastructure that was supposedly free of vendor lock-in (or so they thought) that they lost sight and did not invest enough in building the actual product: no revenue -> party is over.

Don't make the same mistake.
frompdx · almost 4 years ago
The headline seems a bit hyperbolic after reading the article. The bar chart has the caption "Accurate prediction of Kubernetes costs is a challenge". However, the chart shows that one in five respondents represented in the data don't bother to predict their costs at all. Over half can predict within 10%, which seems fairly reasonable even if there is room for improvement. That leaves the remaining 20% who really are struggling, unable to predict their costs to better than 25% accuracy.

The trouble with all of this is that it doesn't really account for how the respondents use Kubernetes. What types of workloads are they running, and how variable are those workloads? Would the organizations struggling to predict costs still struggle with another solution if their workloads are highly variable? Are they trading fixed costs for scalability in the face of those variable workloads? It's certainly possible to set upper bounds on autoscaling and to run fixed-size workloads in Kubernetes.

Perhaps the best takeaway from the article is that there is an opportunity to develop better cost-management tools or offer consulting services in this domain. I know there are a few companies out there hoping to offer services in this space already.
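On the point above about bounding autoscaling, here is a minimal sketch using the official Kubernetes Python client. It assumes a Deployment named `web` already exists in the `default` namespace; the names, replica counts, and CPU target are illustrative only. Capping `max_replicas` puts a hard ceiling on how far the workload can scale out, and therefore on its spend.

```python
# Minimal sketch: cap autoscaling with an explicit max_replicas ceiling.
# Assumes a Deployment named "web" already exists in the "default" namespace.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web", namespace="default"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=5,                       # hard upper bound on scale-out
        target_cpu_utilization_percentage=70,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

A fixed-size workload is even simpler: set the Deployment's replica count and attach no HPA, and the pod count, and with it that workload's share of node cost, never moves on its own.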
gnivol · almost 4 years ago
This is not counting the people costs involved; kube experts ain't cheap. Complexity is only going up.

Suggestion for the next article -> "Software a black hole of unpredictable spend"
punkrex · almost 4 years ago
A lot of the “finops” practitioners I’ve seen are myopically focused on tagging AWS resources, and that falls to pieces with Kubernetes because AWS can’t see inside the Kubernetes clusters.

I’m not surprised they don’t like it.
alongub · almost 4 years ago
I wrote a tool that helps estimate K8s costs by simulating K8s clusters. You write your pods in a simple DSL and it runs kube-scheduler without actual nodes behind the scenes.

It's still really basic, but I'd love to hear your feedback!

https://github.com/aporia-ai/kubesurvival
mplewis · almost 4 years ago
I mean, what did anyone think the cloud was? This isn't news.
Sparkyte · almost 4 years ago
This article is written from the perspective of companies that failed to plan for Kubernetes and just used it willy-nilly. So... pointless.
avereveard · almost 4 years ago
Kubernetes will happily run without autoscaling. This article is barking up the wrong tree.
deknos · almost 4 years ago
I've seen enough installations to say this is not the fault of Kubernetes, but of people thinking it should work automatically. This has to be implemented as well for it to count.
crmd · almost 4 years ago
As opposed to the corporate VMware enterprise-license shock-and-awe ritual we go through every year. Infrastructure platforms are wicked expensive.
StratusBen · almost 4 years ago
Disclaimer: I'm Co-Founder and CEO at https://vantage.sh/, a cloud cost transparency platform.

We have been hearing this a lot from our customers who use EKS. They are running single clusters as shared infrastructure, so they have no insight into which workloads are contributing the most cost. This is true of other shared infra like data pipelines as well.

We are currently working on a solution for pod-level cost insights; if anyone is interested in signing up for the beta, shoot an email to ben@vantage.sh
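For anyone who wants a rough version of this today, the sketch below is not Vantage's product, just a back-of-the-envelope script with made-up per-unit prices: it sums pod resource requests per namespace with the official Kubernetes Python client to get a crude split of a shared cluster's cost.

```python
# Back-of-the-envelope cost attribution: sum pod CPU/memory requests per
# namespace and price them with assumed rates. Requests are a proxy for each
# workload's share of the cluster, not a real bill.
from collections import defaultdict
from kubernetes import client, config

CPU_CORE_HOUR = 0.03      # hypothetical per-core-hour price
MEM_GIB_HOUR = 0.004      # hypothetical per-GiB-hour price
HOURS_PER_MONTH = 730

def cpu_cores(v):
    # "500m" -> 0.5 cores, "2" -> 2.0 cores
    return float(v[:-1]) / 1000 if v.endswith("m") else float(v)

def mem_gib(v):
    # handles the common binary suffixes only; extend for decimal ones if needed
    units = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}
    for suffix, factor in units.items():
        if v.endswith(suffix):
            return float(v[: -len(suffix)]) * factor / 2**30
    return float(v) / 2**30   # plain bytes

config.load_kube_config()
totals = defaultdict(lambda: [0.0, 0.0])   # namespace -> [cores, GiB]

for pod in client.CoreV1Api().list_pod_for_all_namespaces().items:
    for container in pod.spec.containers:
        requests = (container.resources.requests or {}) if container.resources else {}
        totals[pod.metadata.namespace][0] += cpu_cores(requests.get("cpu", "0"))
        totals[pod.metadata.namespace][1] += mem_gib(requests.get("memory", "0"))

for ns, (cores, gib) in sorted(totals.items()):
    monthly = (cores * CPU_CORE_HOUR + gib * MEM_GIB_HOUR) * HOURS_PER_MONTH
    print(f"{ns:<24} {cores:6.2f} cores {gib:8.2f} GiB  ~${monthly:9.2f}/mo")
```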
cyberge99 · almost 4 years ago
Why does the graph have a legend when the only color is green?
tristor · almost 4 years ago
I think this headline is hyperbole, but also somewhat true, though not through any fault of Kubernetes. I've worked in this space extensively, and have been called in to consult in some variety or another on a number of large enterprise Kubernetes deployments. Nearly universally I found the following things to be true:

1. Companies had infrastructure critical to the success of Kubernetes owned by teams that opposed deploying Kubernetes.

2. The primary person shepherding Kubernetes into the company's environment had not done their due diligence on what were appropriate workloads for Kubernetes, what were not, and how applications would integrate across mixed environments when required.

3. The principal technical resources at the company were not educated about containerization, Kubernetes, and the intricacies of container networking, but were on the hook internally for the implementation.

What ends up driving the "black hole of unpredictable spend" is that companies are sold (either internally or externally) on a relatively short migration timeframe, but that timeframe is contingent on the company having appropriate infrastructure, staffing, and no key people internally blocking the migration. If any factor is out of whack, the migration timeline can quickly approach infinity.

It is true that there are startups that could run everything they need for their first 10k customers on 5 VMs with Nginx and MySQL, yet decide to build grandiose environments in Kubernetes that they don't need. But the opposite is also true: there are huge enterprises that could in reality massively benefit from Kubernetes in their environment but for "political" reasons can't get it done even after spending millions of dollars, and so are stuck mired in their "legacy" environments. Networking, in particular, is a huge barrier to entry for enterprise Kubernetes deployments, and those deployments are almost always stymied by people, not technology, because most enterprises have some Boomer network admin who doesn't actually know anything about networking but only knows about Cisco gear running things.

So, what do companies do? They go to AWS or GCP and they just run up a massive bill as they very, very slowly migrate (often rewrite) their legacy systems to the cloud. This is of course astronomically and unnecessarily expensive, but it's generally not the fault of the underlying technology. AWS and Google are happy to bilk major enterprises as well, and often sell them a bill of goods they can't deliver on.