Unfortunately, this reads like a 100-foot marketing document for Sysdig, not actual container security best practices.<p>If you want actual container security best practices, check out CIS [1], DISA [2], and NSA [3], with some theory at NIST [4], along with the documentation from your preferred cloud vendor, be it AWS, Azure, GCP, or other, and its specific container security guidance.<p>[1] <a href="https://www.cisecurity.org/" rel="nofollow">https://www.cisecurity.org/</a><p>[2] <a href="https://public.cyber.mil/stigs/downloads/" rel="nofollow">https://public.cyber.mil/stigs/downloads/</a><p>[3] <a href="https://media.defense.gov/2021/Aug/03/2002820425/-1/-1/0/CTR_KUBERNETES%20HARDENING%20GUIDANCE.PDF" rel="nofollow">https://media.defense.gov/2021/Aug/03/2002820425/-1/-1/0/CTR...</a><p>[4] <a href="https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-190.pdf" rel="nofollow">https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.S...</a>
Perhaps I overlooked it, but it seems strange there's nothing about making containers immutable and read-only. This is a powerful tool IMO.<p><a href="https://cloud.google.com/architecture/best-practices-for-operating-containers#ensure_that_your_containers_are_stateless_and_immutable" rel="nofollow">https://cloud.google.com/architecture/best-practices-for-ope...</a>
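In Kubernetes terms, this mostly comes down to the pod's securityContext. A minimal sketch (pod name, image, and mount path are all illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app            # illustrative name
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # placeholder image
      securityContext:
        readOnlyRootFilesystem: true    # container filesystem is immutable at runtime
        allowPrivilegeEscalation: false
      volumeMounts:
        - name: tmp
          mountPath: /tmp             # scratch space many apps still need
  volumes:
    - name: tmp
      emptyDir: {}                    # ephemeral, wiped on pod restart
```

The emptyDir mount is the usual escape hatch: the root filesystem stays read-only while the app keeps a small writable scratch area.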
My home k8s cluster is now "locked down" using micro-VMs (kata-containers[0]), pod-level firewalling (cilium[1]), permission-limited container users, mostly immutable environments, and distroless[2] base images (not even a shell inside!). Given how quickly I rolled this out, the tools for hardening a cluster environment seem far more accessible now than when I last researched them a few years ago.<p>I know it's not exactly a production setup, but I really do feel it's at least the most secure runtime environment I've ever had accessible at home. Probably more so than my desktops, which you could argue undermines most of my effort, but I like to think I'm pretty careful.<p>In the beginning I was very skeptical, but being able to just build a Docker/OCI image and then manage its relationships with other services through "one pane of glass" that I can commit to git is so much simpler than my previous workflows. My previous setup involved juggling tools like packer, cloud-init, terraform, ansible, libvirt, whatever firewall frontend the OS shipped, and occasionally SSHing in for anything not covered. Now I feel even more comfortable than when I was running a traditional VM+VLAN per exposed service.<p>[0] <a href="https://github.com/kata-containers/kata-containers" rel="nofollow">https://github.com/kata-containers/kata-containers</a><p>[1] <a href="https://github.com/cilium/cilium" rel="nofollow">https://github.com/cilium/cilium</a><p>[2] <a href="https://github.com/GoogleContainerTools/distroless" rel="nofollow">https://github.com/GoogleContainerTools/distroless</a>
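For anyone curious, the kata piece of a setup like this is surprisingly little YAML. A sketch, assuming the runtime was registered with containerd under the handler name "kata" (names here are illustrative):

```yaml
# RuntimeClass wiring pods to the kata runtime; the handler name
# depends on how the runtime was registered on the node.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app           # illustrative name
spec:
  runtimeClassName: kata        # this pod now runs inside a micro-VM
  containers:
    - name: app
      image: gcr.io/distroless/static-debian12   # distroless: no shell inside
```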
The thing that kills me about all of this is how hard it is to do it right. I wish there were a dumbed-down version of containers and orchestrators for people trying to do basic multi-tenant compute in a SaaS who don't care a ton about getting the best performance.<p>Would I be generally OK if I used gVisor to give a shell environment to customers and just kept the host up to date?<p>Or is using containers just relatively pointless for multi-tenant compute in a SaaS compared to giving customers virtual machines?<p>If you can't imagine the kind of SaaS I'm talking about, think something along the lines of GitHub's new online IDE, Codespaces.
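FWIW, on Kubernetes the gVisor route is not much ceremony. A sketch, assuming gVisor's runsc was installed and registered with containerd under its standard handler name (pod and image are illustrative):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc                  # gVisor's runtime, as registered on the node
---
apiVersion: v1
kind: Pod
metadata:
  name: tenant-shell            # illustrative: one pod per customer session
spec:
  runtimeClassName: gvisor      # syscalls go through gVisor's user-space kernel
  containers:
    - name: shell
      image: ubuntu:24.04       # placeholder; whatever environment customers get
```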
Calling your guide the ‘ultimate guide’ is disingenuous marketing. No single guide can cover all security concepts in all contexts. Every time I see that sort of wording I just assume the writer doesn’t actually know what they’re talking about.
I'm always a bit confused about the CPU limit (for the pod): some guides (and tools) advise always setting one, but this one [0] doesn't.
Ops people I've worked with almost always want to lower that limit, and I have to insist on raising it (there's no way they'd disable it).
Is there an ultimate best practice for that?<p>[0] <a href="https://learnk8s.io/production-best-practices" rel="nofollow">https://learnk8s.io/production-best-practices</a>
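For reference, the pattern that guide [0] argues for looks roughly like this (the values are illustrative):

```yaml
# Container resources: request CPU, limit memory, but set no CPU limit.
resources:
  requests:
    cpu: "500m"          # guaranteed scheduling share
    memory: "256Mi"
  limits:
    memory: "256Mi"      # memory limit set; an OOM kill beats silent degradation
    # no cpu limit: avoids CFS throttling and lets the pod burst into idle CPU
```

The argument is that CPU is compressible (the scheduler just throttles you), so a limit mostly wastes idle capacity, while memory is not, so a memory limit is worth keeping.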
Curious to know whether anyone here can speak to how much safer Hyper-V isolation[1] is than process isolation, and whether it negates some of the concerns in the article.<p>1. <a href="https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container" rel="nofollow">https://docs.microsoft.com/en-us/virtualization/windowsconta...</a>
Production host root filesystems should be mounted read-only. Check out Linux IMA and how to allow only specific executables by hash. Centrally forward container logs. Use a VCS for container/workload templates and routinely audit them for misconfigurations. Sysdig/Falco and related tools are nice, but containers and their prod hosts are easier to harden than many assume.