Here's a cool, short technical article about Clear Containers, written by someone who worked on the project, to bring you up to speed with what is going on: <a href="https://lwn.net/Articles/644675/" rel="nofollow">https://lwn.net/Articles/644675/</a>
If you want to try out Clear Containers with rkt, you can easily do it on a physical Linux machine. First, install rkt via deb/rpm[1] or tarball[2].<p>Then do:<p><pre><code> sudo rkt run --debug --insecure-options=image --stage1-name=coreos.com/rkt/stage1-kvm:1.25.0 docker://redis
</code></pre>
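To confirm that the pod actually booted under KVM rather than as a plain namespaced container, a rough check (assuming the stage1-kvm image uses lkvm or qemu as its hypervisor; this is just a sketch) is to list the pods and look for the hypervisor process on the host:<p><pre><code> rkt list
 ps aux | grep -E 'lkvm|qemu' | grep -v grep
</code></pre>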
If you run into problems, you can email rkt-dev[3].<p>[1] <a href="https://coreos.com/rkt/docs/latest/distributions.html#rpm-based" rel="nofollow">https://coreos.com/rkt/docs/latest/distributions.html#rpm-ba...</a>
[2] <a href="https://github.com/coreos/rkt/releases/tag/v1.25.0" rel="nofollow">https://github.com/coreos/rkt/releases/tag/v1.25.0</a>
[3] <a href="https://groups.google.com/forum/#!forum/rkt-dev" rel="nofollow">https://groups.google.com/forum/#!forum/rkt-dev</a>
I hadn't seen this project before. It looks really cool. I especially like the support for pushing network configuration at startup (via the "hyperstart" concept in v1 and systemd in v2). This is sorely lacking in Docker. You can accomplish it with pipework (which is basically a wrapper around `ip netns exec` in the container netns), but then you need to write code in the container like "wait for interface XX to be up before running entry_point.sh".<p>My use case is creating containers with multiple interfaces and custom routing rules for each interface. Currently I am using pipework.sh to set up the interfaces and routes, but it's a dirty hack, and it increases container boot time because the application has to wait for the interfaces to come up before starting (a rough sketch of that workaround is below). It looks like this "hyperstart"/systemd approach to namespace isolation avoids that latency, which is nice.<p>Unfortunately, according to these docs, each container interface requires a tap bridge in addition to the usual veth pair, due to qemu networking limitations. That's unfortunate, especially for containers with multiple interfaces, which is specifically my use case.<p>Does anyone have an idea of the overhead of creating many tap interfaces within a container?
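For context, here is roughly what my pipework-based workaround looks like. This is only a sketch: the bridge names (br-mgmt, br-data), container interface names, addresses, and the routing-table number are illustrative placeholders, not anything prescribed by pipework itself.<p><pre><code> # On the host: attach two extra interfaces to the running container.
 pipework br-mgmt -i eth1 $CONTAINER_ID 10.0.1.10/24
 pipework br-data -i eth2 $CONTAINER_ID 10.0.2.10/24

 # Inside the container: the entrypoint has to poll until the
 # interfaces exist and are up before the real application starts.
 for ifname in eth1 eth2; do
   until ip link show "$ifname" 2>/dev/null | grep -q 'state UP'; do
     sleep 0.1
   done
 done

 # Example of a custom per-interface routing rule (policy routing).
 ip rule add from 10.0.2.10 lookup 100
 ip route add default via 10.0.2.1 table 100

 exec ./entry_point.sh
</code></pre>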
It's pretty neat how smoothly this integrates into an existing Docker host setup. Definitely going to give it a try and see about integrating it into containership.<p>edit: typo