I hope all of these Docker overlay networks start using the in-kernel overlay network technologies soon. User-space promiscuous capture is obscenely slow.<p>Take a look at GRE and/or VXLAN and the kernel's multiple routing table support. (This is precisely why network namespaces are so badass, btw.) Feel free to ping me if you are working on one of these and want some pointers on how to go about integrating more deeply with the kernel.<p>It's worth mentioning these protocols also have reasonable hardware offload support, unlike custom protocols implemented on top of UDP/TCP.
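To make the suggestion concrete, here is a rough sketch of what an in-kernel VXLAN overlay plus a dedicated routing table looks like with iproute2. The device name, VNI, addresses, and table number are all illustrative, not anything Weave actually does:

```shell
# Create an in-kernel VXLAN device: VNI 42, encapsulated over eth0
# on the IANA-assigned UDP port 4789. (All names/numbers illustrative.)
ip link add vxlan42 type vxlan id 42 dev eth0 dstport 4789
ip addr add 10.2.0.1/24 dev vxlan42
ip link set vxlan42 up

# Use the kernel's multiple routing tables to steer overlay traffic:
# send the overlay prefix through its own table (here, table 100).
ip route add 10.2.0.0/16 dev vxlan42 table 100
ip rule add from 10.2.0.0/16 lookup 100
```

The point is that encap/decap happens entirely in the kernel (and can be offloaded by capable NICs), so no packet ever crosses into user space the way it does with a pcap-based forwarder.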
This looks like a great idea. For me this was the missing piece two months ago when playing with Docker.<p>However, I have strong doubts about the network performance: not so much the overhead of the UDP encapsulation (which should be quite small), but mostly the capturing of packets with pcap and then handling them in user mode. That looks like a lot of context switches, copying, and parsing on non-optimal code paths. Are there any benchmarks available?<p>My feeling is that this will consume large amounts of CPU under even moderate network load, and thus be unusable with most NoSQL-style systems that benefit from clustering across hosts.
This seems very nice. What would be the pros and cons of using Weave instead of Tinc? I have used Tinc for a while[0] and the end result looks very similar (i.e. there is no nice command-line tool dedicated to using Tinc with Docker, but the high-level descriptions match).<p>[0]: <a href="https://gist.github.com/noteed/11031504" rel="nofollow">https://gist.github.com/noteed/11031504</a>
As someone who is more developer than ops, I feel like the Docker stuff is still changing fast, and the way you would use Docker today will be very different a year from now; but containers seem to be the way of the future. If I have no pressing need to change my server architecture, does it make sense to wait for things to settle, or would it be more beneficial to get in and learn now and experience the changes and why they were necessary?
This is really interesting. I've been looking for a way to build in support for networking between Docker hosts in my clocker.io software, to simplify deploying applications into a cloud-hosted Docker environment. I'd been toying with adding Open vSwitch, but am going to try Weave as the network layer in the next release. Will there be any problems running in a cloud where I have limited control over the configuration of the host network interfaces and the traffic they can carry, such as AWS only allowing TCP and UDP between VMs?
Question for weavenetwork: are containers addressable by hostname from other containers? Is there a good way to do that? I didn't see anything about it in the readme.<p>I suppose service discovery is out-of-scope for this project but having some sort of weave-wide hostsfile would certainly simplify it. Am I misunderstanding the project?
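For the sake of discussion, the "weave-wide hostsfile" I have in mind would just be an ordinary hosts-format file pushed into every container, something like this (addresses and the `.weave.local` suffix are made up for illustration, not part of Weave):

```
# Hypothetical cluster-wide hosts file, distributed to each container
10.2.1.5   db-1.weave.local
10.2.1.9   web-1.weave.local
```

Even something that crude would let containers find each other by name without a full service-discovery system.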
Has anyone compared this to rudder (<a href="https://coreos.com/blog/introducing-rudder/" rel="nofollow">https://coreos.com/blog/introducing-rudder/</a>)?