At this point, why not just take it to its logical conclusion? Treat your business model as cattle, not pets. Customers leaving? Fire up another business until the capital runs out, and if it does, no worries, just hop to another job!<p>Sorry, but I feel like I landed in crazy-land. Kubernetes is already an exercise in how many layers you can insert with nobody understanding the whole picture. Ostensibly, it's so that you can <i>isolate</i> those fucking jobs so that different teams can run different tasks in the same cluster without interfering with each other. Hence namespaces, services, resource requirements, port translation, autoscalers, and all those yaml files.<p>It boggles my mind that people look at something like Kubernetes and decide "You know what? We need more layers. On top of this."
If I've read the Google papers on Borg right (Kubernetes is conceptually Borg v3, with Omega being v2), this is different from how Google runs things.<p>They do warehouse-scale computing with Borg operating large clusters. Borg is at the bottom.<p>The workloads, spanning dev, test, and prod, then run on these clusters. By having large clusters with lots of things running on them, they get high utilization of the hardware and need less hardware.<p>It's amusing to see k8s used in such a different way, one that often uses a lot more hardware and drives up costs, when these are the concepts Google used to lower costs.<p>Or maybe I read the papers and book wrong.<p>I like the idea of higher utilization and better efficiency, because it uses fewer resources, which is greener.
> It means that we’d rather just replace a “sick” instance by a new healthy one than taking it to a doctor.<p>This analogy really bothers me. Cattle are expensive. They are an investment. You don't put down an investment just because it got sick.<p>If you have a sick cow, you will in fact call your local large-animal vet to come and treat it.
AWS, in my mind, can quickly lose the Kubernetes war amongst cloud providers. This is every other cloud provider's chance: EKS is so damn tied into a bunch of other AWS products that it's literally impossible to just delete a cluster now. I tried. It's tied into VPCs and subnets and EC2 and load balancers and a bunch of other products that no longer make sense now that K8s has won.<p>In my opinion it needs to be re-engineered completely into a super-slim product that is not tied to all these crazy things.
I like the idea of "Building on Quicksand" as the analogy for distributed systems, but also for maintaining your software dependencies. This article basically recommends minimizing your dependencies to keep reproducibility/portability high. I generally agree with this, but I also carry an "all things within reason" mentality. Just as the article describes coworkers growing into their cluster, the complexity of what they run in their cluster will also grow over time, and eventually they'll realize they've built up their own "distribution". A few years ago, I wrote a post asking people to think critically when they hear someone mention "Vanilla" Kubernetes[0].<p>The real problem they suffered is that Kubernetes isn't fundamentally designed for multi-tenancy. Instead, you're forced to create separate clusters to isolate different domains. Google themselves run multiple Borg clusters to isolate different domains, so it's natural that Kubernetes ended up with a similar design.<p>[0]: <a href="https://jzelinskie.com/posts/youre-not-running-vanilla-kubernetes/" rel="nofollow">https://jzelinskie.com/posts/youre-not-running-vanilla-kuber...</a><p>Disclosure: I worked as an engineer and product manager on CoreOS Tectonic, the (now defunct) Kubernetes distribution used in the post.
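For anyone who hasn't hit this wall: the multi-tenancy Kubernetes does offer is namespace-scoped "soft" isolation, where quotas and policies fence teams off while everyone still shares one control plane. A minimal sketch of that pattern (the team name and limits are made up for illustration):

```yaml
# Soft multi-tenancy: give a tenant its own namespace and cap its usage.
# Everything still shares one API server and etcd, which is why people
# who want hard isolation end up running separate clusters instead.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                # hypothetical tenant namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"        # total CPU the namespace may request
    requests.memory: 20Gi     # total memory the namespace may request
    pods: "50"                # cap on concurrent pods
```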
When I worked at Asana, we created a small framework called KubeApps[0] that allowed for blue-green deployments of Kubernetes clusters (and the apps that lived on top of them).<p>It worked out great for us -- upgrading Kubernetes was easy and testable, we never worried about code drift, etc.<p>[0] <a href="https://blog.asana.com/2021/02/kubernetes-at-asana/" rel="nofollow">https://blog.asana.com/2021/02/kubernetes-at-asana/</a> (Not written by me.)
So I was going on vacation, and had to leave my cats at a “cat hotel.” It cost about $50 a night.<p>I looked it up and for the cost of putting them up in the “hotel” I could have them euthanized and buy new cats three times over.<p>I would never do such a thing, but I did use it to guilt them into not complaining about the hotel.<p>It didn’t work very well.
Running a separate cluster for every service guarantees high overhead and poor utilization. Fine if you can afford it, but be aware that you're paying for it.
Raising cattle is a lot of work. You have to weigh them regularly, treat them for intestinal worms and lice, and move them from pasture to pasture so they don't overgraze. It's a full-time job.<p>Also, if a cow dies, people don't just buy a new one. It represents quite a loss of profit, and a big potential problem on the farm that people will want to resolve: they're your money makers, and if they're dying, it's an issue.
I want to go back to the time when naming your servers after X-Men characters or Dune houses was a thing. I'm not a big fan of this brave new DevOps world.
All this does is make me want to go vegan and avoid maintaining the entire k8s farm.<p>Truly, if your software team headcount is under 500, why are you running k8s?
> It means that we’d rather just replace a “sick” instance by a new healthy one than taking it to a doctor.<p>Oh god! Please treat your cattle better!
> ... the GitOps pattern is gaining adoption for Kubernetes workload deployment.<p>Is it really, though? I for one am glad I didn't jump on the bandwagon early. A lot of the articles popping up nowadays mentioning the downsides of GitOps make a lot of sense.
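For reference, the pattern under discussion: in GitOps, a controller inside the cluster continuously reconciles live state against manifests stored in Git, instead of humans running kubectl apply. A minimal sketch using Argo CD, one common implementation (the repo URL, path, and names are placeholders):

```yaml
# GitOps in one object: Argo CD watches a Git repo and keeps the
# cluster in sync with whatever the manifests there declare.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service                  # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/deploy-manifests.git  # placeholder repo
    targetRevision: main
    path: k8s/production            # directory of manifests to apply
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD runs in
    namespace: my-service
  syncPolicy:
    automated:
      prune: true                   # delete resources removed from Git
      selfHeal: true                # revert changes made outside Git
```

The trade-off is built into this loop: the repo becomes the single source of truth, so with selfHeal enabled any out-of-band change is reverted by design.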
Kubernetes is already insanely complicated. In practice, minor version differences of anything in the stack lead to issues. I get why it all exists, but at some point I have to ask: are containers really <i>that</i> much better than an rpm/deb and a private corporate repo? Every container is effectively a chroot. Add to that the loss of the easy debugging you'd get from simple packages and daemons. I get that this doesn't work at "cloud" scale or whatever, but I think the excitement over this stuff is overblown.
When I was at BigCo and our requirements became so complex and demanding that we had to migrate to serious containerization and orchestration software, well, it was necessary, but we all pined for the days when it was a dozen services and 20k boxes and we didn't need that shit.
The level of madness and overengineering in the Kubernetes world is only comparable to the level of madness going on in the React world.<p>Everyone seems to think they're Google or Facebook, or both.
Not everyone in the world practices animal husbandry, so the "cattle" metaphor doesn't make a lot of sense to some of us, like me.
I have no idea how "cattle" should be treated, other than they are killed/used for resources.