
The full-time job of keeping up with Kubernetes

484 points by twakefield, over 7 years ago

11 comments

tw1010, over 7 years ago
There ought to be a name for the tendency whereby, as tools get better and better, your time shifts from having your mind in technical-space to social- and news-space. The authority to create moves from the individual, first-principles (by necessity) maker to an external group that controls development, and then all your time is spent keeping up with what they're doing. A similar thing happened with a lot of JavaScript frameworks. It also happened in the transition from building servers from the ground up to having it all managed by AWS.
hueving, over 7 years ago
This reads like a giant ad for GKE. It emphasizes several times that you should just use GKE, for pretty lame reasons (Google has good SREs and Google started the project).

The people who work on upstream k8s at Google (Tim et al.) have pretty limited overlap with the Google Cloud people who run GKE. Upstream k8s is a full-time job, so they are most certainly not spending their time also writing internal GKE code.

I don't have an issue with GKE, but this article uses little evidence to recommend it, when it seems the conclusion should have been "maintaining a k8s cluster requires a full-time sysadmin. If your company has a culture of pretending sysadmins are pointless, then you should pay another company offering k8s sysadmin-as-a-service hosted on their hardware."
mbrumlow, over 7 years ago
Every time a new framework or tool comes out and everybody jumps on it, I always wonder if anybody will realize that you are trading one set of problems and work for another.

As engineers we really need to stop supporting these sorts of efforts and take the time to help each other become better engineers who write and maintain our own code. We need to promote learning and mastering the underlying concepts that things like Kubernetes try to hide and shield engineers from.

In most cases, tools like Kubernetes are so vast and huge precisely so they can be the solution looking for many problems.

It is also curious how, once Kubernetes became big, so many small shops suddenly needed "Google"-level orchestration to manage a handful of systems. And how hard people ripped their software stacks apart into many, many microservices just to increase the container count.

I think if most engineers took a step back, said "I don't know," and took some time to truly understand the requirements of the project they are working on, they would find a truly elegant and maintainable solution that did not require tweaking the problem to fit a given solution.

Every tool and library / dependency you add to your solution is only adding more code and complications that you will not be an expert in, and one day you will find yourself at the whim of the provider.

Far too often we include tens of thousands of lines of somebody else's code, all for a handful of lines that, if somebody had had the confidence and support from other engineers to truly understand the problem domain, could have been implemented and owned in-house.

The general trend I see as I get older is that we value the first solution over a more correct solution, only to be stuck with something that requires constant care and workarounds.

So I plead to all engineers, developers, programmers, or whatever you call yourself: please stop and take a moment to think hard about how you would solve any given problem without external code first. Then compare your solution to the off-the-shelf "solution looking for a problem." You might surprise yourself.

I will also point out that if, when solving a problem, your solution looks like a shopping list of third-party tools, libraries, and services, you might not fully understand the problem domain.

-- sorry for the rant --
macNchz, over 7 years ago
> The absolute safest place to run Kubernetes application is still Google’s GKE.

Interesting to read, given that I recently had a GKE cluster auto-upgrade its master version from 1.6.x to 1.7.x at 8pm one night (however foolish it was not to be subscribed to the release notes RSS feed [1]), which somehow caused a cascading nightmare of things breaking.

Logs stopped appearing in the GCP logging interface and in our own log-parsing pipeline; a bug related to a change in the format of the yaml specifications meant that all the containers got stuck in a broken limbo state as we tried to upgrade the nodes (yay for googling the error and finding open GitHub issues!); and then, once we'd manually fixed all of our deployments and were finally able to get our nodes rolled over to the new k8s version, all of our load balancers started intermittently timing out until they were deleted and recreated.

Surely we need to keep a closer eye on the release cycle, and we're guilty of bandwagoning onto this cool new tech, but boy does it suck to get auto-upgraded at night, only to discover several breaking changes while in emergency mode trying to fix things.

(1) https://cloud.google.com/kubernetes-engine/release-notes
shruubi, over 7 years ago
This article concerns me, especially considering the first thing you see is "There is no such thing as Kubernetes LTS (and that’s fantastic)".

What is so great about running your infrastructure on a platform that has no intention of ensuring long-term stability? Regardless of how well backward compatibility is maintained, the idea that we should all move our infrastructure onto something that lacks the fundamental promise of "updating won't break everything" seems downright irresponsible.
halayli, over 7 years ago
This is a problem I faced using Ansible, webpack + JS modules, and more. It's a moving target, and you always need to keep up with the latest changes, which are often breaking.

I tend to design my systems so that they work even if I haven't touched a line in a year, but with such tools it's always a pain.

I wish things were as stable as the Bourne shell and the Unix environment in general. Not that they achieve the same thing, but I just miss the stability I get from raw Unix tools.
peterwwillis, over 7 years ago
Trigger warning: bitter, jaded ops person working in a real company.

"[...] users are expected to stay 'reasonably up-to-date with versions of Kubernetes they use in production.' [...] the upstream Kubernetes community only aims to support up to three 'minor' Kubernetes version at a time. [...] if you deployed a Kubernetes 1.6 soon after it came out last March, you were expected to upgrade to 1.7 within roughly nine months or by the time 1.9 shipped in mid-December."

Jesus christ, this is so annoying.

Businesses don't have a couple hundred billion dollars sitting around to spend on engineers to look at release notes, compare changes, write new features, write new test cases, fix bugs, and push to prod, every 3 months, just to keep existing functionality for orchestrating their containers.

We have LTS because businesses (and individuals) don't want to have to do the above. They just want a reliable tool. They want the ability to say that if a bug is found in 3 years, it will be fixed, and they can just keep using the tool.

We don't give a crap about "Kubernetes’ domination of the distributed infrastructure world". We don't want to use Kubernetes. We just want an orchestration tool: commodified tooling. We want to stop caring about what we're running. We just want the fucking thing to work, and to not have to jump through hoops for it to work.

"Moving Kubernetes Workloads to New Clusters instead of Upgrading"

UGH. We only do this for bastardized, unholy, stupid shit like OpenStack. Not only is this not fun, it takes forever (you try moving 50 different clients off the service they've been using for three years), and you have to have duplicate resources. What the fuck is the point of cloud computing and containers and all this bullshit if I have to have double the infrastructure and juggle how it's all used just to upgrade some fucking software?!

"The Kubernetes-as-a-Service offerings, particularly Google Cloud’s Kubernetes Engine (GKE), are the well-polished bellwethers of what is currently the most stable and production-worthy version of Kubernetes."

Oh. We're supposed to pay Google to run it for us.

....I'm just going to use AWS.
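The support-window arithmetic in the policy quoted above can be sketched in a few lines. This is an illustrative sketch only; the function names are made up, not part of any Kubernetes tooling, and it assumes the "three most recent minor versions" rule holds:

```python
def minor(version: str) -> int:
    """Extract the minor number from a 'major.minor' version string."""
    return int(version.split(".")[1])

def is_supported(cluster: str, latest: str, window: int = 3) -> bool:
    """True while the cluster's minor version is among the `window`
    most recent minor releases (i.e. at most window-1 behind latest)."""
    return minor(latest) - minor(cluster) < window

# A 1.6 cluster is still supported while 1.8 is current...
assert is_supported("1.6", "1.8")
# ...but falls out of support the day 1.9 ships.
assert not is_supported("1.6", "1.9")
```

Which is exactly the complaint: every new minor release silently starts a countdown clock on the one you are running.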
maxxxxx, over 7 years ago
I think this is a symptom of the "release often" philosophy. With yearly or longer release cycles you could actually keep up and read the release notes. With stuff being released several times a year, it's too much work to keep up unless you are deeply into it at the moment.

I notice the same with my Android apps. I used to read the release notes of new versions, but now I have them on auto-update and am sometimes surprised that an app I have been using all the time has completely changed and I don't know how to use it anymore.
yeukhon, over 7 years ago
Kubernetes’ governance is becoming like OpenStack's, and (I know this is controversial) I hate OpenStack, especially because it tried so hard to be "AWS"-compatible, and its APIs are so awkward to use.

Cloud Foundry is better in terms of governance and project direction. Many of the main developers work full time at Pivotal. But it is hard to run your own CF without significant investment in things like access management and "painless" upgrades (etcd is a pain point in the entire CF stack, in my experience). Though I have to admit the project has been moving in the right direction over the past year or so.
scarface74, over 7 years ago
Combining my brief time trying to put together a proof of concept with Kubernetes and reading articles like this, I'm so glad I chose HashiCorp's Nomad. It's simpler to configure, more versatile (shell scripts, executables, and Docker containers), and has a decent third-party UI, HashiUI. With Consul, configuration is dead simple.
atulatul, over 7 years ago
The Dr. Dobb's articles below reflect a somewhat similar feeling:

Just Let Me Code: http://www.drdobbs.com/tools/just-let-me-code/240168735

Getting Back To Coding: http://www.drdobbs.com/architecture-and-design/getting-back-to-coding/240168771