We do "just run containers" for our entire CI pipeline. It's all lxc/lxd and just a bunch of shell scripts to start/stop them. Works surprisingly well. So if you are just using containers as a sandboxed work runner then you don't need anything fancy. The issue is that I think people would like to pretend that containers are just like VMs and this is where things start to break down.<p>They're not VMs in the sense that none of the tried and true methods for orchestrating VMs is available. You need new solutions for networking, new solutions for storage, new failover patterns, new tools for clustering and organizing them, new application patterns, etc. Basically all the stuff that would have been handled by the hypervisor and the software defined networking layer is now all of a sudden in your face and you need some way to deal with it.
> *As far as I can tell running containers without using Docker or Kubernetes or anything is totally possible today*

It's been possible since before either of these existed. There are several container and orchestration systems that predate both.

My own pet faves are Garden (née Warden) and Diego, but that's probably because I work at the company (Pivotal) where they were born.
    > let's say all my 50 containers share a bunch of files
    (shared libraries like libc, Ruby gems, a base operating
    system, etc.). It would be nice if I could load all those
    files into memory just once, instead of 3 times.
Correct me if I'm wrong, but doesn't this kind of situation seem like a poor use case for containers? It seems to me that one of the main points of containerization is to encapsulate the runtime dependencies of a process. If you undermine that by making two containers depend on the same runtime objects, the point of containerization has been lost. You might as well go back to a virtual machine. That's not to suggest that overlay networks and filesystems are never useful, just that you should not be using them to manage dependencies.

Under this architecture, what happens when I want to update my applications to use a new version of a shared library? Either I am forced to update all of my applications at once, or I must modify the architecture and remove that shared dependency. This breaks the composition that containerization promises.

I think this advice should be re-examined. I am by no means an expert, but it doesn't seem smart to me...
I don't see why systemd is at the core of all those graphs. Why do we need that particular program to run containers? Or does systemd mean, in this context, "any daemon-controlling process"?
Can anyone deeply involved in the Hosting/Ops field explain to me why LXC/LXD is ignored in favor of the other options?

I see the top comment (dkarapetyan) mentions it, but you never really read blog posts discussing how people scaled their LXC containers, etc.
> If I'm running 50 containers I don't want to have 50 copies of all my shared libraries in memory. That's why we invented dynamic linking!

BTW there's a cool feature called Kernel Samepage Merging [1] that was created for the sake of conserving memory in virtualization or container use cases like this.

[1] https://www.kernel.org/doc/Documentation/vm/ksm.txt
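One caveat from the linked doc: KSM only scans memory that a process has opted into with madvise(MADV_MERGEABLE), so it helps VM hosts out of the box (QEMU opts in) but containers only benefit if the runtime does the same. A rough sketch of turning it on and watching it from the host (the sysfs paths are from ksm.txt; the tuning values are just examples):

    # Start the ksmd kernel thread that scans for identical pages.
    echo 1 > /sys/kernel/mm/ksm/run

    # How aggressively to scan (pages per wake-up, sleep between passes).
    echo 100 > /sys/kernel/mm/ksm/pages_to_scan
    echo 200 > /sys/kernel/mm/ksm/sleep_millisecs

    # Check how much is actually being deduplicated.
    cat /sys/kernel/mm/ksm/pages_shared    # unique "master" pages kept
    cat /sys/kernel/mm/ksm/pages_sharing   # extra pages now backed by them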
nspawn + btrfs is my preferred solution to the "50 containers" problem. The incantation you want is:

    systemd-nspawn --template="/path/to/subvolume" <other args>
This creates a copy-on-write snapshot of the subvolume you supply, which is instantaneous. The --ephemeral flag can be used instead if you want the guest to be able to write to its tree but do not want those changes to persist across container boots; nspawn then boots from a temporary snapshot that is discarded on exit.

Can someone describe what advantages rkt gives you over plain nspawn containers?
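For anyone who wants to try this, a rough sketch of the end-to-end flow (paths and names are made up; the directory must live on a btrfs filesystem, and you need a bootstrapping tool such as debootstrap):

    # One-time setup: build a base OS tree inside a btrfs subvolume.
    btrfs subvolume create /var/lib/machines/base
    debootstrap stable /var/lib/machines/base

    # Instantiate a container as a CoW snapshot of the base (instantaneous);
    # --template only copies if the target directory does not exist yet.
    systemd-nspawn --template=/var/lib/machines/base \
                   --directory=/var/lib/machines/web1 --boot

    # Or boot a throwaway snapshot of the base that is discarded on exit.
    systemd-nspawn --ephemeral --directory=/var/lib/machines/base --boot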
The picture from the article, especially the right part (docker > 1.11.0): is that true? [0]

I'm not a software architect, but when I see this, it seems to me that something is deeply wrong with the implementation or with the technology itself.

[0] http://jvns.ca/images/docker-rkt.png