That's a valid way to look at it, but there are other ways. Containers are also a simple, practical way to bundle applications and their dependencies in a relatively standardized way, so they can be run on different compute fabrics.<p>That sense of the term isn't loaded with any specific notion of how attack surfaces should work. I think modern "Docker"'s security properties are underrated†. But you still can't run multitenant workloads from arbitrary untrusted tenants on shared-kernel isolation. It turns out to be pretty damned useful to be able to ship application components as containers, but have them run as VMs.<p>† <a href="https://fly.io/blog/sandboxing-and-workload-isolation/" rel="nofollow">https://fly.io/blog/sandboxing-and-workload-isolation/</a>
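A rough sketch of what "ship a container, run a VM" can look like in practice, using Firecracker as the example microVM. Image names, paths and sizes here are placeholders, and real pipelines (like the one in the linked post) do a lot more on top:

```
# Flatten a container image into a root filesystem (image name is hypothetical)
CID=$(docker create myapp:latest)
docker export "$CID" -o rootfs.tar

# Turn the tarball into an ext4 image Firecracker can boot
dd if=/dev/zero of=rootfs.ext4 bs=1M count=512
mkfs.ext4 rootfs.ext4
mount -o loop rootfs.ext4 /mnt && tar -xf rootfs.tar -C /mnt && umount /mnt

# Point a running Firecracker process (listening on a unix socket) at a kernel and the rootfs
curl --unix-socket /tmp/firecracker.sock -X PUT 'http://localhost/boot-source' \
  -d '{"kernel_image_path": "./vmlinux", "boot_args": "console=ttyS0 reboot=k panic=1"}'
curl --unix-socket /tmp/firecracker.sock -X PUT 'http://localhost/drives/rootfs' \
  -d '{"drive_id": "rootfs", "path_on_host": "./rootfs.ext4", "is_root_device": true, "is_read_only": false}'
curl --unix-socket /tmp/firecracker.sock -X PUT 'http://localhost/actions' \
  -d '{"action_type": "InstanceStart"}'
```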
It should be noted that a portion of this (valid) criticism applies specifically to the most prominent "container" implementation, Docker, not to containers as a whole.<p>For example, resource isolation with the Solaris / Illumos container implementation (zones) works just as well as full-blown virtualization. You are just as well equipped to handle noisy neighbors with zones as you are with hardware VMs.<p>> Much as you’d likely choose to live in a two-bedroom townhouse over a tent, if what you need is a lightweight operating system, containers aren’t your best option.<p>So I think this is true for Docker, but it doesn't really do justice to other container implementations such as FreeBSD jails and Solaris / Illumos zones, because those containers really are just lightweight operating systems.<p>In the end, Docker started out and was designed as a deployment tool, not necessarily an isolation tool in every respect. And yeah, it shows.
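For a flavour of the zones side, here's a rough sketch of capping a zone's CPU and memory with zonecfg. The zone name, paths and limits are just placeholders; check the Solaris/Illumos docs for the exact resource names on your release:

```
# Define a zone with CPU and memory caps enforced by the kernel's resource controls
zonecfg -z web01 <<'EOF'
create
set zonepath=/zones/web01
add capped-cpu
set ncpus=1.5
end
add capped-memory
set physical=2g
end
commit
EOF

zoneadm -z web01 install
zoneadm -z web01 boot
```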
Enjoyed the article, but having watched containerization and Kubernetes mature over the last 5 years (especially at an enterprise level), I'd say a huge part of the value proposition (and this applies more to K8s) is that it really catalyses prototyping and experimenting and, depending on the org I suppose, promotes a lot of autonomy for app teams who historically would have had to log calls to infrastructure to get compute, network/lb/dns, databases et al. built up before kicking the tyres on something. I've seen those types of things take months in large orgs.<p>And then there's the inevitable drift between tiered environments that happens over time in richer operating environments (I've seen VMs so laden with superfluous monitoring and agentware that they fall over all the time, while simultaneously being on completely different OS and patch versions from dev to prod). Containers provide immutability at the service layer, so I have confidence in at least having that level of parity between dev and prod (albeit hardly ever at the data or network layer).
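To make the "months vs. minutes" point concrete, this is roughly what the self-service flow looks like for an app team on a cluster they already have access to (names and image are placeholders):

```
# Compute, load balancing and DNS without a single infrastructure ticket
kubectl create deployment demo --image=nginx:1.25
kubectl expose deployment demo --port=80 --type=LoadBalancer
kubectl scale deployment demo --replicas=3
kubectl get svc demo   # the external IP / DNS entry comes from the platform, not a ticket queue
```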
I believe the success of containers is not down to their lightweightness or other isolation properties.<p>Containers won dev mindshare because of the ease of packaging and distributing artifacts. Somehow it was Docker, not the VM vendors, that came up with a standard for packaging, distributing, and indexing glorified tarballs, and it quickly caught on.
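You can see the "glorified tarballs" part directly; a rough sketch (the registry name is made up):

```
docker pull alpine:3.19
docker save alpine:3.19 -o alpine.tar
tar -tf alpine.tar | head        # a manifest, a config JSON, and the layer tarballs/blobs
docker tag alpine:3.19 registry.example.com/team/alpine:3.19
docker push registry.example.com/team/alpine:3.19   # distribution and indexing come for free
```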
IMO comparing containers to an apartment is more accurate than a tent.<p>With an apartment, each tenant gets to share certain infrastructure like heating and plumbing from the apartment building, just like containers get to share things from the Linux host they run on. In the end, both houses and apartments protect you from uninvited guests, just in their own way.<p>I went into this analogy in my Dive into Docker course. Here's a video link to this exact point: <a href="https://youtu.be/TvnZTi_gaNc?t=427" rel="nofollow">https://youtu.be/TvnZTi_gaNc?t=427</a>. That video was recorded back in 2017, but it still applies today.
This oversimplifies containers and VMs by using the house vs. tent analogy. Talking only about Docker weakens it too, because Docker is not the only way to set up containers.<p>> Tents, after all, aren’t a particularly secure place to store your valuables. Your valuables in a tent in your living room, however? Pretty secure.<p>Containers do provide strong security features, and sometimes the compromises you have to make hosting something on a VM vs. a container will make the container more secure.<p>> While cgroups are pretty neat as an isolation mechanism, they’re not hardware-level guarantees against noisy neighbors. Because cgroups were a later addition to the kernel, it’s not always possible to ensure they’re taken into account when making system-wide resource management decisions.<p>Cgroups are more than a neat resource isolation mechanism: they work. That's really all there is to it.<p>Paranoia around trusting the Linux kernel is unnecessary if at the end of the day you end up running Linux in production anyway. If anything breaks, security patches come quickly, and the general security attitude of the Linux community is improving every day. If you are really paranoid, perhaps run BSD, use grsec, or (the best choice IMO) use SELinux.<p>If anything, you will be pwned because you have a service open to the world, not because cgroups or containers let you down.
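And "they work" is easy to demonstrate with Docker's cgroup-backed flags; a minimal sketch (the image name is a placeholder):

```
# Memory, CPU and PID limits are enforced by cgroups, not by politeness
docker run -d --name capped \
  --memory=256m --memory-swap=256m \
  --cpus=0.5 \
  --pids-limit=100 \
  myapp:latest

docker stats capped --no-stream   # usage stays under the caps; the OOM killer handles memory overruns
```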
Modern containers do provide lots of security features via namespaces, seccomp, and cgroups (to some extent).<p>The author seems to largely ignore this. I would consider that a bit stronger than a "tent wall". Comparing it to a tent seems more akin to a plain chroot.<p>If I have my tent right next to someone else's, I can trivially "IPC" just by speaking out loud, which would be prevented by an IPC namespace (which is Docker's current default container setup).<p>Also worth mentioning: turning a container into a VM (for enhanced security) is generally easier than trying to do the opposite. AWS Lambda basically does that, as do a lot of the minimal "cloud" Linux distributions that just run Docker with a stripped-down userland (like Container Linux and whatever its successors are).
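A quick sketch of that point: the defaults already give you private namespaces and a seccomp filter, and you have to opt in to weaken them (the container name at the end is a placeholder):

```
# Default run: private IPC, PID, network and mount namespaces, plus Docker's default seccomp profile
docker run --rm -it alpine sh

# Weakening isolation is an explicit choice, not the default
docker run --rm -it --ipc=host --pid=host --security-opt seccomp=unconfined alpine sh

# Check what a running container actually got
docker inspect --format '{{.HostConfig.IpcMode}} {{.HostConfig.SecurityOpt}}' some-container
```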
>Finally, there’s the whole business of resource isolation. While cgroups are pretty neat as an isolation mechanism, they’re not hardware-level guarantees against noisy neighbors. Because cgroups were a later addition to the kernel, it’s not always possible to ensure they’re taken into account when making system-wide resource management decisions.<p>I don't think virtualization really offers hardware-level guarantees against noisy neighbours either.
Starts off saying VMs are like brick-and-mortar houses and containers are like tents.<p>I agree somewhat, but there has been significant progress in sandboxing containers with the same security we'd expect from a VM. It isn't a ridiculous idea that VMs will one day be antiquated, but that probably won't happen for a few more years.
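For the curious, this is roughly how that looks today with gVisor as a drop-in runtime, assuming runsc is installed and registered with the Docker daemon (Kata Containers works along similar lines):

```
# /etc/docker/daemon.json (excerpt):
#   { "runtimes": { "runsc": { "path": "/usr/local/bin/runsc" } } }

# Same image, same workflow, but syscalls hit gVisor's user-space kernel instead of the host's
docker run --rm --runtime=runsc alpine dmesg | head
```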
> We don’t expect tents to serve the same purpose as brick-and-mortar houses—so why do we expect containers to function like VMs?<p>Marketing. Because of Marketing.
Am I the only one getting tired of people stating confidently that containers don't improve safety _at all_ because they run on the same kernel? It's just not true.
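Agreed, and on top of the shared-kernel baseline you can tighten things considerably with stock Docker flags. A rough sketch (image and UID are placeholders):

```
# --cap-drop=ALL: no Linux capabilities at all
# --read-only: immutable root filesystem
# no-new-privileges: setuid binaries can't escalate
# --user: don't run as root inside the container either
docker run --rm \
  --cap-drop=ALL \
  --read-only \
  --security-opt=no-new-privileges \
  --user 10001:10001 \
  myservice:latest
```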
I place all my tents in a house (Docker inside unprivileged LXC containers on Proxmox - yes, unprivileged = not a brick house, more like wood).<p>The only reason I use Docker is that I can access the system design knowledge that is available in docker-compose.yml's. Latest example: GitLab. I could not get it running on unprivileged LXC using the official installation instructions; with Docker it was simply editing the `.env` and then `docker-compose up -d`. All of this in a local, non-public (IPsec-distributed) network. I often find myself creating a separate unprivileged-LXC-plus-Docker nesting for each new container, because then I don't need to follow the complicated and error-prone installation instructions for native installs.
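For anyone wanting to reproduce that layering, a rough sketch of the Proxmox side. The VMID, template and storage names are whatever your setup uses; nesting and keyctl are the features Docker needs inside an unprivileged LXC:

```
# Create the unprivileged LXC "house" with nesting enabled for Docker
pct create 110 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --unprivileged 1 \
  --features nesting=1,keyctl=1 \
  --hostname gitlab-docker \
  --memory 4096 \
  --rootfs local-lvm:32
pct start 110
pct enter 110

# Inside the container: install Docker, drop in the vendor's docker-compose.yml,
# edit .env, then bring up the "tent":
#   docker-compose up -d
```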
Container tech can be used for small-scale "pet" deployments, but my understanding is that the true benefit of containers comes from seeing them as "cattle".<p>You should never log in to the shell of a container for config. All application state lives elsewhere, and any new commit to your app triggers a new build.<p>If that's not for you, then while containers like Proxmox/LXC can still be handy, you're just doing VMs at a different layer.<p>The article was a bit hand-wavy about how "they" complain about containers, and then leans on the analogy more than it explains the problems and solutions.
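A minimal sketch of the "state lives elsewhere" discipline (names, paths and tags are made up):

```
# State lives in a named volume (or an external database), never in the container
docker volume create app-data
docker run -d --name app-v1 -v app-data:/var/lib/app myapp:1.0

# A new commit produces a new image; the container itself is disposable
docker rm -f app-v1
docker run -d --name app-v2 -v app-data:/var/lib/app myapp:1.1
```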
I’ve found systemd-nspawn useful. Use debootstrap to install a minimal Debian system inside your system, then boot it with systemd-nspawn. It isolates the filesystem while sharing the network interface, and is convenient for most things that I guess people use Docker for. I wonder why it’s not mentioned more often.
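For reference, the whole thing is about three commands (the suite and target path are just examples):

```
# Install a minimal Debian tree under /var/lib/machines
debootstrap --variant=minbase bookworm /var/lib/machines/deb http://deb.debian.org/debian

# Quick chroot-style shell in the isolated filesystem
systemd-nspawn -D /var/lib/machines/deb

# Or boot it properly with its own init; the host's network is shared
# unless you add --private-network
systemd-nspawn -b -D /var/lib/machines/deb
```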
I explain it this way: if an Amazon Virtual Private Cloud (VPC) is a datacenter "cloud", then a container implementation is a "puff".<p>Virtualizing the kernel the way an Amazon Machine Image (AMI) virtualizes a chip core sounds great. But now, in the "puff", all of those networking details that AWS keeps below the hypervisor line confront us: storage, load balancing, name services, firewalls...<p>Containers can solve packaging issues, but wind up only relocating a vast swath of other problems.
If a VM is like a nuclear war bunker, containers are like brick-and-mortar houses. They are not airtight and have glass windows which can easily be broken, but that's where most people live. They're cheaper to build, comfortable enough, and most of the time they last a human lifetime.<p>An analogy can go a long way. Both ways.
I've seen the problems of treating containers as houses, primarily during development: multiple different processes inside a single container, with a wrapper around them (inside the container) that makes it even more difficult to debug.<p>So, assuming I understood correctly, treating them like tents is by far the better choice.
I was totally expecting this to go in the direction of tech debt with a homelessness analogy, but it was about destructibility. Yes, we know this already, and if you catch people treating a container as a persistent host, slap their hands and say no.
I'd be curious to see services designed to run as PID 1 inside containers, containing or running nothing other than the required binaries. Maybe someone is doing this.
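People do: "FROM scratch" and distroless images are exactly this. A rough sketch with a statically linked binary (the service name is a placeholder):

```
# Statically link the service so it needs no userland at all (Go shown as an example)
CGO_ENABLED=0 go build -o myservice .

cat > Dockerfile.minimal <<'EOF'
FROM scratch
COPY myservice /myservice
ENTRYPOINT ["/myservice"]
EOF
docker build -t myservice:minimal -f Dockerfile.minimal .

# The binary is PID 1, so it has to handle SIGTERM and reap children itself,
# or you can lean on Docker's tiny init:
docker run --init myservice:minimal
```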
Containers are cattle, VMs were pets. If one doesn't get the operational differences, or understand that these are two completely different use cases, then one probably shouldn't be working in the IT industry.