I'm confused about how the documentation recommends using a Kubernetes operator to manage OS updates. That seems weird and backwards to me. I would rather see an immutable OS AMI in an auto-scaled group, and just replace the node instance whenever there is an update.

I can see a place for managing OS updates on an instance, but that seems more like "pets" than "cattle"... and I've always treated Kubernetes nodes like cattle, not pets. Isn't that the most common approach anyway?
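For the "cattle" approach, here's a minimal sketch of what that node replacement can look like, assuming a node group backed by an EC2 Auto Scaling group whose launch template already points at the updated AMI; the node name and instance ID below are hypothetical:

```python
# Minimal sketch: drain a Kubernetes node, then let its Auto Scaling group
# replace the instance with one built from the updated AMI.
# Assumes kubectl is configured for the cluster; names are hypothetical.
import subprocess
import boto3

NODE_NAME = "ip-10-0-1-23.ec2.internal"   # hypothetical node
INSTANCE_ID = "i-0123456789abcdef0"        # hypothetical instance backing it

# Evict pods so workloads reschedule elsewhere before the instance goes away.
subprocess.run(
    ["kubectl", "drain", NODE_NAME, "--ignore-daemonsets"],
    check=True,
)

# Terminate the instance without shrinking the group; the ASG launches a
# replacement from the current (updated) launch template / AMI.
autoscaling = boto3.client("autoscaling")
autoscaling.terminate_instance_in_auto_scaling_group(
    InstanceId=INSTANCE_ID,
    ShouldDecrementDesiredCapacity=False,
)
```

Managed node groups and similar tooling essentially wrap this same drain-and-replace loop for you.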
As strong as the engineering behind Bottlerocket seems to be, I'm not entirely sure who they built it for, except as a foundational component for AWS's managed offerings.

If you, as an AWS customer, decide to fully embrace AWS lock-in, then why would you run this yourself on an EC2 instance instead of running ECS or EKS? If you're trying to avoid AWS lock-in, why would you choose an OS that's locking you into AWS Systems Manager and Amazon Linux 2 for debugging needs?
Firecracker, Bottlerocket... starting to see a trend here.

https://aws.amazon.com/blogs/aws/firecracker-lightweight-virtualization-for-serverless-computing/
So the difference between this and Firecracker would be that the latter is optimized for boot speed and overhead, while this one is a bit heavier but more capable?

If choosing between this and, say, Kata Containers plus Firecracker, the latter would be more secure because of VM isolation, but this would be more efficient because multiple pods could go in a single VM?

Is Bottlerocket secure enough to host multi-tenant workloads within the same VM?
For a project similar to Bottlerocket, check out https://github.com/talos-systems/talos. It is geared toward cloud, VMware, and bare metal users. We have integrations with Cluster API for each, with the bare metal provider being our own implementation: https://github.com/talos-systems/sidero. Full disclosure: I am the CTO of Talos Systems.
I haven't dug into the engineering behind this yet, but my main concern with any custom Linux distribution is that it often ends up as a waste of engineering.

It's pretty easy to write your own distro and pare it down to the essentials, and paring it down that way lets you strip out complexity, making it easier and more reliable to patch. But it also means you are now maintaining this custom thing forever. If you're Amazon, that might be fine, but I suspect this will be dropped when it is no longer profitable or a competing project supplants it, meaning in 5 years this thing might be gone. (A common theme of custom Linux distributions.)

And then there's troubleshooting. With a stripped-down distro, you will eventually need more tools to debug the thing, which means you have to build and maintain packages to do that. Bottlerocket's answer to this is "run them in containers and use our API!", but I'm not sold on it. Have you ever tried to debug between host and container, or container to container? There are a lot of hoops you have to jump through, and most Unix tools were not written with that in mind. I highly doubt it will magically work for everything. If that's the case, then this "don't worry, because *magic*" idea is not really saving you work over maintaining a traditional OS.

Moreover, you don't need a custom distro to do live patching. There are simple tricks you can use to juggle files and processes during a live patch, to say nothing of "checkpoint process && mv old_file new_file && thaw process", etc. Kernels support live patching too. So if the argument is "well, it's easier to patch", I'm not sure you aren't trading away "easy" in one place for "pain in the ass" in another (see above). All of this also rests on the argument that it's just as effective to treat live-patched systems the way you treat immutable infrastructure, and I'm not convinced of that argument either. The former is just more complex, and complexity attracts failures.

Ultimately I think what you'll find is that Bottlerocket gets a niche following, but some people will also get annoyed by it and go back to regular distros, which already have well-defined methods.
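For what it's worth, here's a minimal sketch of that freeze/swap/thaw trick, using SIGSTOP/SIGCONT in place of a real checkpoint tool like CRIU; the PID and file paths are hypothetical:

```python
# Minimal sketch of "checkpoint process && mv old_file new_file && thaw process".
# A real live patch might use a checkpoint/restore tool (e.g. CRIU) instead of a
# plain SIGSTOP, but the shape of the trick is the same.
import os
import signal

PID = 4242                               # hypothetical target process
OLD_FILE = "/opt/app/lib/libfoo.so"      # hypothetical file being patched
NEW_FILE = "/opt/app/lib/libfoo.so.new"  # already-staged replacement

os.kill(PID, signal.SIGSTOP)        # "checkpoint": freeze the process using the file
try:
    os.replace(NEW_FILE, OLD_FILE)  # atomic swap on the same filesystem
finally:
    os.kill(PID, signal.SIGCONT)    # "thaw": resume against the new file
```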
So, is this available as an ISO? I currently run an Ubuntu VM in bhyve on my FreeNAS home-server to host various containers for experiments, etc... could I run this instead or is it tied to AWS?
At my previous company we ditched the Amazon Linux distro and used RancherOS for container hosting, because the version of yum it shipped was too flaky and they were unwilling to move to DNF to try and fix it. We long badgered them for something like this (a RancherOS-style Amazon Linux distro), and they seem to have finally listened. Too bad I moved to a different company and a different role, so I won't get to benefit from it. At least my old colleagues will be happy.
Wasn't Amazon Linux 2 something similar? Or am I mixing it up? https://aws.amazon.com/amazon-linux-2/
Can anyone at AWS comment on how this fits into the Fargate roadmap and compute pricing? Presumably, a slimmer OS for things like EKS nodes could translate into some sort of compute discount.
So, I'm a little confused; is this not what NixOS is all about, or is there a difference? (As my question probably suggests, I'm not all that knowledgeable about Nix.)
I wish the push to containerise everything would just die.

I need the pieces of my system to work together, not against one another, not contend for files and permissions.