There is a corollary to this: do you really need cloud infrastructure?<p>Cattle not pets, right?<p>Well, no. Have you seen Amazon's AWS margins? It's 30%.<p>After Amazon buys the hardware and pays people to run it, it still makes 30%. Not having hardware is someone else's profit.<p>That isn't cattle, it's contract poultry farming.<p>Learn capacity planning. Learn to write cacheable, scalable apps. Track your hardware spend per customer. Learn about assets vs. liabilities (hang out with the accountants, they are nerds too). Do some engineering, don't just be a feature factory. And if you are going to build features, make fucking sure you build tracking into them and hold the product team's feet to the fire when the numbers don't add up (see: friends with accountants, and tracking money).
Your first full-time sysadmin is an expensive hire. So is your first DBA. And even if your database backups are working now, there's a good chance they'll silently break sometime in the next several years.<p>The simplest thing you could do is build a single-container application and deploy it on a Heroku-like platform with a fully managed database. If this actually works for your use case, then definitely avoid Kubernetes.<p>But eventually you'll reach a point where you need to run a dozen different things, spread out across a bunch of servers. You'll need cron jobs and Grafana and maybe some centralized way to manage secrets. You'll need a bunch of other things. At this point, a managed Kubernetes cluster is no worse than any other option. It's lighter weight than 50 pages of Terraform. You won't need to worry about how to get customized init scripts into an autoscaling group.<p>The price is that you'll need to read an O'Reilly book, you'll need to write a moderate amount of YAML, and you'll need to pay attention to the signs reading "Here There Be Dragons."<p>Kubernetes isn't the only way to tackle problems at this scale. But I've used Terraform and ECS and Chef and even a custom RPM package repo, and none of those approaches were significantly simpler than Kubernetes once you deployed a full, working system for a medium-sized organization.
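To give a concrete sense of what that "moderate amount of YAML" buys you, here's a minimal sketch of a Kubernetes CronJob that runs a nightly task and pulls its credentials from a centrally managed Secret. The job name, schedule, image, and secret name are all placeholders, not anything from a real setup:

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: nightly-report                # hypothetical job name
    spec:
      schedule: "0 3 * * *"               # run every night at 03:00
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: Never
              containers:
              - name: report
                image: registry.example.com/report-runner:latest   # placeholder image
                envFrom:
                - secretRef:
                    name: report-credentials   # credentials come from a Secret, not a file on disk

One file covers the schedule, the container, and the secret wiring that would otherwise be a crontab entry plus config management plus a credentials file sitting on some box.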
I’m sceptical of this article. I’m an indie dev using K8s on Vultr's managed Kubernetes (VKS) and it has absolutely simplified my life.<p>The article suggests just using EC2 instead of K8s, but if I do that, I now have to manage an entire operating system. I have to keep the OS up to date and juggle all the nuances that entails, especially scheduling downtime and recovering from upgrades. Major OS upgrades are hard, and pretty much guarantee downtime unless you’re running multiple instances, in which case <i>how are you managing them?</i><p>Contrast that with VKS, where, with much less effort, OS upgrades are rolled out to nodes with <i>no downtime to my app</i>. Yes, getting to this point takes a little bit of effort, but not much. And yes, I have multiple redundant VPSes, which is more expensive, but that’s a <i>feature</i>.<p>K8s is perhaps overly verbose, and like all technologies it has a learning curve, but I’m gonna go out on a limb here and say that I’ve found running a managed K8s service like VKS <i>way</i> easier than managing even a single Debian server, and it provides a pile of functionality that is difficult or impossible to achieve with a single VPS.<p>And the moment you have more than one VPS, it needs to be managed, so you’re back at needing some kind of orchestration.<p>The complexity of maintaining a Unix system should not be underestimated just because you already know how to do it. K8s makes my life easier because it does not just abstract away the underlying node operating system, it obviates it. In doing so, it brings its own complexities, but there’s nothing I miss about managing operating systems. Nothing.
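For what it's worth, the "little bit of effort" behind zero-downtime node upgrades is roughly this: run at least two replicas and add a PodDisruptionBudget so a node drain can never take the last pod down. A minimal sketch, with placeholder names and image:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app                        # placeholder app name
    spec:
      replicas: 2                         # at least two pods so one can move at a time
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: registry.example.com/my-app:latest   # placeholder image
    ---
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: my-app-pdb
    spec:
      minAvailable: 1                     # a drain may never leave zero pods running
      selector:
        matchLabels:
          app: my-app

Strictly you'd also want a topology spread constraint or pod anti-affinity so both replicas don't land on the same node, but that's the core of it.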
There are some legit notions here, but overwhelmingly it uses insinuation and suggestion to sow Fear, Uncertainty, and Doubt.<p>> <i>Despite its portability, Kubernetes also introduces a form of lock-in – not to a specific vendor, but to a paradigm that may have implications on your architecture and organizational structure. It can lead to tunnel vision where all solutions are made to fit into Kubernetes instead of using the right tool for the job.</i><p>This seems a bit absurd on a number of fronts. It doesn't shape architecture that much, in my view; it runs your stuff. Leading to tunnel vision, preventing the right tool for the job? That doesn't seem to be a particularly real issue; most big services have some kind of Kubernetes operator that seems to work just fine.<p>Kubernetes does a pretty fine job of exposing the underlying platform in a flexible and consistent fashion. If it were highly opinionated or specific, it probably wouldn't have gotten where it is.
For small teams I also think Kubernetes often greatly increases per-service operational overhead by making it much more difficult for most engineers to manage their own deployments. You will inevitably reach a point where engineers need to collaborate with infra folks, but in my experience Kubernetes moves that point up a lot.
Hey now, I made a killing in AWS consulting convincing megacorps to get rid of their own hardware and avoid going the OpenStack route.<p>The problems of the pre-IaaS and pre-K8s era were manageability, flexibility, and capacity utilization. These problems still haven't really been solved in a standardized, interoperable, and uniform manner, because stacks continue to mushroom in complexity. Oxide appears to be on the right track, but there is much that could be done to reduce the tinkering, the redundant abstractions, and the habit of sidestepping conventional lifecycle management and cross-cutting concerns that people don't want to think about whenever another new way comes along.
I found that just using Cloud Run and similar technologies is simpler and easier to manage than Kubernetes. You get autoscaling, fast startup, a limit on the number of concurrent connections to each instance, and scale-to-zero functionality.
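Those knobs map directly onto the service definition. Here's a rough sketch of a Cloud Run service in its Knative-style YAML, with scale-to-zero, an instance cap, and a per-instance concurrency limit; the service name and image are placeholders:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: my-api                        # placeholder service name
    spec:
      template:
        metadata:
          annotations:
            autoscaling.knative.dev/minScale: "0"    # scale to zero when idle
            autoscaling.knative.dev/maxScale: "10"   # cap on instances
        spec:
          containerConcurrency: 80                   # max concurrent requests per instance
          containers:
          - image: gcr.io/my-project/my-api:latest   # placeholder image

The same limits can also be set with flags on gcloud run deploy (--min-instances, --max-instances, --concurrency) if you'd rather not keep the YAML around.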
No. We chose ECS instead :-)<p>That said, we are planning a cloud exit in the future. I don't feel we need Kubernetes, but we do need to orchestrate containers. In our case, it's less about scale and more about isolation.