I'm glad to see this since an easy overlay for Docker is badly needed. But ugh, userspace encapsulation. This would be a lot better if it used OVS + VXLAN.
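To illustrate what kernel-side encapsulation looks like, here's a rough sketch of a plain Linux VXLAN tunnel (no OVS required, assuming a 3.7+ kernel and iproute2; the interface names, VNI, and addresses are all made up for the example):

```shell
# Create a VXLAN interface (VNI 42) that encapsulates over eth0.
# The multicast group handles peer discovery, so no userspace daemon
# sits in the data path -- encap/decap happens entirely in the kernel.
ip link add vxlan42 type vxlan id 42 group 239.1.1.1 dev eth0 dstport 4789
ip addr add 10.1.0.1/16 dev vxlan42
ip link set vxlan42 up
```

With OVS it's one command per tunnel port instead, something like `ovs-vsctl add-port br0 vx0 -- set interface vx0 type=vxlan options:remote_ip=192.0.2.10`. Either way you skip the per-packet trip through userspace that a tun/tap-based overlay pays.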
Would this allow you to mesh together containers in separate datacenters? Or mesh together, say, the containers on your home PC with containers in the cloud? I'm guessing not.<p>What I'm really excited for are the possibilities of docker containers with publicly routable IPv6 addresses. It would move the world away from "one host: many services on different arbitrary ports", and back to the "one host: one service, possibly speaking a few protocols with ports being used for OSI-layer-5/6 protocol discovery" model of the 1970s (and eliminate the madness of SRV records, besides.)<p>Imagine if, say, bitcoind (which normally speaks "JSON-RPC" to clients -- a specific layer-6 encoding over HTTP) sat on "bitcoind.host:80" instead of "host:8332". Suddenly, it'd be immediately clear to protocol clients (e.g. web browsers) which hosts they could or couldn't speak to, based on the port alone! The whole redundancy between scheme and port in URLs could go away: they'd be synonymous. And so on.
Only recently did I realize what a powerhouse the team at CoreOS is. They're building some really cool shit. I can spend hours on their blog just right-clicking and searching on Google. Definitely a good way to learn tons about distributed computing and that whole subject area.
Sorry, what problem does this solve?<p><i>Things are not as easy on other cloud providers where a host cannot get an entire subnet to itself. Rudder aims to solve this problem by creating an overlay mesh network that provisions a subnet to each server.</i> ... is unclear.<p>What host for virtualized infrastructure needs an entire, fake, non-internet-routable subnet that it cannot provision itself?<p>I believe there's a broken one-size-fits-all network architecture assumption or provisioning methodology at the root of all this.<p>(Edit as reply to child, since I'm rate-limited: Sounds like I was right, and it's docker's fault. How is this not better solved with the standard approach of applying network namespaces and/or unique interfaces to containers?)
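For anyone unfamiliar with the "standard approach" mentioned in the edit above, a per-container network namespace plus a veth pair is only a few commands (the names and addresses here are illustrative; this is roughly the plumbing Docker does itself, minus any cross-host subnet):

```shell
# Give the container its own network namespace and a veth pair;
# one end stays on the host, the peer end becomes the container's interface.
ip netns add ctr1
ip link add veth0 type veth peer name veth1
ip link set veth1 netns ctr1

# Configure the container-side interface from inside the namespace.
ip netns exec ctr1 ip addr add 10.0.1.2/24 dev veth1
ip netns exec ctr1 ip link set veth1 up

# Bring up the host end; typically you'd then attach veth0 to a bridge,
# e.g. brctl addif br0 veth0, so containers on the host can reach each other.
ip link set veth0 up
```

What this doesn't give you on its own is cross-host routing of those 10.0.1.0/24 addresses, which is the part Rudder's overlay is trying to paper over.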
FYI, there's already open-source software going by the name of Rudder: <a href="http://en.wikipedia.org/wiki/Rudder_(software)" rel="nofollow">http://en.wikipedia.org/wiki/Rudder_(software)</a>
"... it has almost no effect on the bandwidth." - looking at those numbers, that's not the case at all. The numbers are really low to start with (AWS isn't exactly the fastest), but the overhead would be much more noticeable at the higher end of the scale, where we're talking about 100-200 MB/s transfer rates. Not to mention it nearly doubles the latency!
Also works great with LXC. I pushed a Juju charm which automates the config for LXC: <a href="http://bazaar.launchpad.net/~hazmat/charms/trusty/rudder/trunk/view/head:/readme.txt" rel="nofollow">http://bazaar.launchpad.net/~hazmat/charms/trusty/rudder/tru...</a>