
One way to make containers network: BGP

92 points by bartbes almost 9 years ago

15 comments

chrissnell almost 9 years ago

It's really not that difficult to network containers. We're using flannel [1] on CoreOS, with flannel's VXLAN backend to encapsulate container traffic. We're Kubernetes users, so every kube pod [2] gets its own subnet, and flannel handles the routing between those subnets across all CoreOS servers in the cluster.

I was skeptical when we first deployed it, but we've found it to be dependable and fast. We're running it in production on six CoreOS servers and 400-500 containers.

We did evaluate Project Calico initially but discovered some performance tests that tipped the scales in favor of flannel. [3] I don't know whether Calico has improved since then, however; this was about a year ago.

[1] https://github.com/coreos/flannel

[2] A Kubernetes pod is one or more related containers running on a single server

[3] http://www.slideshare.net/ArjanSchaaf/docker-network-performance-in-the-public-cloud
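To make the host-subnet scheme concrete, here is a toy sketch (not flannel's actual code; the node names and ranges are made up) of the lookup a flannel route table effectively performs: each host owns a subnet carved from the cluster range, and traffic to a pod IP is sent to whichever host owns the containing subnet.

```python
import ipaddress

# Hypothetical host-to-subnet assignments, of the kind flannel
# records in etcd (names and ranges invented for illustration).
host_subnets = {
    "node-a": ipaddress.ip_network("10.1.1.0/24"),
    "node-b": ipaddress.ip_network("10.1.2.0/24"),
    "node-c": ipaddress.ip_network("10.1.3.0/24"),
}

def host_for_pod(pod_ip: str) -> str:
    """Return the host whose subnet contains pod_ip -- the decision
    made before encapsulating the packet (e.g. in VXLAN) toward
    that host."""
    ip = ipaddress.ip_address(pod_ip)
    for host, subnet in host_subnets.items():
        if ip in subnet:
            return host
    raise LookupError(f"no host owns {pod_ip}")

print(host_for_pod("10.1.2.37"))  # node-b
```

Because the host-to-subnet map changes only when hosts join or leave, per-pod churn never touches the routing state.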
chris_marino almost 9 years ago

Another solution to this problem is Romana [1] (I am part of this effort). It avoids both overlays and BGP because it aggregates routes, using its own IP address management (IPAM) to maintain the route hierarchy.

The nice thing about this is that nothing has to happen for a new pod to be reachable: no /32 route distribution, no BGP (or etcd) convergence, no VXLAN ID (VNID) distribution for the overlay. At some scale, route and/or VNID distribution is going to limit the speed at which new pods can be launched.

One other thing not mentioned in the blog post or in any of these comments is network policy and isolation. Kubernetes v1.3 includes the new network APIs that let you isolate namespaces. This can only be achieved with a back-end network solution like Romana or Calico (some others as well).

[1] romana.io
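The payoff of hierarchical IPAM can be shown with Python's `ipaddress` module (a conceptual sketch, not Romana's implementation; the address block is invented): if a host's pods are all allocated from one block, upstream routers need a single aggregate route instead of one /32 per pod.

```python
import ipaddress

# Hypothetical: one host hands out pod addresses from a /24 it owns.
# Without aggregation, every pod would need its own /32 route.
per_pod_routes = [ipaddress.ip_network(f"10.2.1.{i}/32") for i in range(256)]

# With hierarchical IPAM, the contiguous /32s collapse to one route.
aggregated = list(ipaddress.collapse_addresses(per_pod_routes))

print(len(per_pod_routes))  # 256 routes without aggregation
print(aggregated)           # a single /24 covering all of them
```

Since the aggregate route never changes as pods come and go, there is nothing to converge when a pod launches.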
crb almost 9 years ago

On the topic of "why do we need a distributed KV store for an overlay network?" from the blog: there's a good blog post about why Kubernetes doesn't use Docker's libnetwork.

http://blog.kubernetes.io/2016/01/why-Kubernetes-doesnt-use-libnetwork.html
jlgaddis almost 9 years ago

BGP seems a needlessly complex solution to this problem. VXLAN would, IMO, be a much better fit.

(--Network engineer who manages BGP for an ISP)
mrmondo almost 9 years ago

We're just about to switch to BGP internally using Calico (mentioned in another comment; I believe performance is good now). We currently run around 300-600 containers on our own implementation built on Consul+Serf. We'll drop a talk on it once we've made the switch, if anyone is interested. We're deliberately avoiding flannel because of the tunnelled networking and added complexity that we don't feel we want to introduce.
e12e almost 9 years ago

I've long wondered whether anyone has successfully gone full IPv6-only with a substantial container/VM roll-out. On paper it has:

1) Enough addresses. Just enough. For everything. For everyone. Google-scale enough.

2) Good out-of-the-box dynamic assignment of addresses.

And finally, optional integration with IPsec, which I grant might in the end be over-engineered and under-used -- but wouldn't it be nice if you could just trust the network? (You'd still have to bootstrap trust somehow, probably running your own x509 CA -- but how nice to be able to flip open any book on networking from the 80s, replace the IPv4 addressing with IPv6, and just go ahead and use plain rsh and /etc/hosts.allow as your entire infrastructure for actually secure intra-cluster networking -- even across data centres and what not. [ed: and secure NFSv3! wo-hoo])

But anyway, has anyone actually done this? Does it work (for a meaningfully large value of "work")?
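The "good out-of-the-box dynamic assignment" point refers to SLAAC, where a host derives its own address from a router-advertised /64 prefix plus an interface identifier. Here is a sketch of the modified EUI-64 derivation (per RFC 4291; the prefix and MAC below are example values, and privacy extensions would use a random identifier instead):

```python
import ipaddress

def slaac_eui64(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Build the address SLAAC would derive from a /64 prefix and a
    MAC address using the modified EUI-64 interface identifier."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    iid = int.from_bytes(bytes(eui64), "big")
    net = ipaddress.ip_network(prefix)
    return net[iid]                                  # prefix + interface identifier

print(slaac_eui64("2001:db8:1::/64", "52:54:00:12:34:56"))
```

No allocator, no key-value store: the prefix comes from a router advertisement and the rest is computed locally, which is what makes address assignment "free" in an IPv6-only roll-out.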
lobster_johnson almost 9 years ago

BGP looks really complex. Isn't OSPF (BGP's "little brother") a much more attractive choice here? It's still complex, but should be much simpler.

Another attractive alternative to Flannel is Weave [1], run in the simpler non-overlay mode. In this mode it won't start an SDN, but will simply act as a bridge/route maintainer, similar to Flannel.

[1] https://www.weave.works/products/weave-net/
delinka almost 9 years ago

Have I misunderstood something here? We don't use BGP on local networks. Via ARP, a node says "who has $IP?" and something answers with a MAC address. The packet for $IP is wrapped in an Ethernet frame for that MAC address. If the IP isn't local to your network, your router answers with its own MAC, and the packet is framed up for the router.

BGP is the process by which ranges of IPs are claimed by routers. Is Calico really used by Docker containers in this way?
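The local-versus-routed decision described above can be sketched in a few lines (a conceptual model, not kernel code; the subnet and gateway below are example values): a sender ARPs for the destination itself only when it is on-link, and for the default gateway otherwise.

```python
import ipaddress

def arp_target(local_net: str, dst_ip: str, gateway: str) -> str:
    """Return the address a host would ARP for: the destination when
    it is on the local subnet, else the default gateway, whose MAC
    then receives the Ethernet frame carrying the packet."""
    net = ipaddress.ip_network(local_net, strict=False)
    dst = ipaddress.ip_address(dst_ip)
    return dst_ip if dst in net else gateway

print(arp_target("10.0.1.0/24", "10.0.1.104", "10.0.1.1"))  # on-link: ARP for the host
print(arp_target("10.0.1.0/24", "8.8.8.8", "10.0.1.1"))     # off-link: ARP for the gateway
```

The container-networking twist is that a pod IP such as 10.0.1.104 may live on a *different* machine, so either every host must answer ARP for its pods (proxy ARP) or routes for the pod ranges must be distributed somehow, which is where Calico brings in BGP.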
sargun almost 9 years ago

More on this here: https://medium.com/@sargun/a-critique-of-network-design-ff8543140667#.2fwstossu -- BGP isn't just about containers. It's about signaling: a mechanism for machines to influence the flow of traffic in the network.

This isn't container weirdness. This is because networks got stuck in 2008. We still don't have IPv6 SLAAC. Many of us made the jump to layer 3 Clos fabrics, but stopped after that. My belief is that this is because AWS EC2, Google GCE, Azure Compute, and others consider this the gold standard.

IPv6 natively supports autoconfiguring multiple IPs per NIC / machine automagically. This is usually on by default as part of the privacy extensions, so in conjunction with SLAAC you can cycle through IPs quickly. It also makes multi-endpoint protocols relevant.

Containers having bad networking because of the lack of an IP per container is a well-known problem; it's even touched on briefly in the Borg paper:

> One IP address per machine complicates things. In Borg, all tasks on a machine use the single IP address of their host, and thus share the host's port space. This causes a number of difficulties: Borg must schedule ports as a resource; tasks must pre-declare how many ports they need, and be willing to be told which ones to use when they start; the Borglet must enforce port isolation; and the naming and RPC systems must handle ports as well as IP addresses.

> Thanks to the advent of Linux namespaces, VMs, IPv6, and software-defined networking, Kubernetes can take a more user-friendly approach that eliminates these complications: every pod and service gets its own IP address, allowing developers to choose ports rather than requiring their software to adapt to the ones chosen by the infrastructure, and removes the infrastructure complexity of managing ports.

But, I ask, what's wrong with the Docker approach of rewriting ports? Reachability is our primary concern, and unfortunately BGP hasn't become the lingua franca for most networks ("The Cloud"). I actually think ILA (https://tools.ietf.org/html/draft-herbert-nvo3-ila-00#section-4.5) / ILNP (RFC 6741) are the most interesting approaches here.
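The Borg quote's "schedule ports as a resource" can be made concrete with a toy allocator (a hypothetical sketch, not Borg code; class and task names are invented): with one IP per host, the scheduler must hand out ports from the host's finite pool, and each task must accept whatever it gets.

```python
# Toy sketch of the bookkeeping that one-IP-per-host scheduling
# forces: ports become a schedulable resource on each host.
class HostPorts:
    def __init__(self, low: int = 30000, high: int = 30010):
        self.free = set(range(low, high))
        self.assigned: dict[str, list[int]] = {}

    def schedule(self, task: str, n_ports: int) -> list[int]:
        """Tasks pre-declare how many ports they need and are told
        which ones to use -- they cannot pick port numbers."""
        if n_ports > len(self.free):
            raise RuntimeError("host out of ports; place task elsewhere")
        ports = sorted(self.free)[:n_ports]
        self.free -= set(ports)
        self.assigned[task] = ports
        return ports

host = HostPorts()
print(host.schedule("web", 2))    # e.g. [30000, 30001]
print(host.schedule("cache", 1))  # e.g. [30002]
```

With an IP per pod, this entire layer of bookkeeping disappears: every task can bind its natural port (80, 6379, ...) on its own address.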
NetStrikeForce almost 9 years ago

Or you could NAT on the host and deploy simpler overlay networking: https://github.com/pjperez/docker-wormhole

You can deploy this on any machine (container or not) and have it always reachable from other members of the same network, which could be e.g. servers on different providers (AWS, Azure, Digital Ocean, etc.)
tptacek almost 9 years ago

Especially since there isn't really a policy-routing component to this, isn't BGP pretty _extremely_ complicated for the problem Calico is trying to solve?

Stipulating that you need a routing protocol here (you don't, right? You can do proxy ARP, or some more modern equivalent of proxy ARP), there's a whole family of routing protocols optimized for this scenario, of which OSPF is the best-known.
cthalupa almost 9 years ago

There's a lot of misinformation in this.

> A Linux container is a process, usually with its own filesystem attached to it so that its dependencies are isolated from your normal operating system. In the Docker universe we sometimes talk like it's a virtual machine, but fundamentally, it's just a process. Like any process, it can listen on a port (like 30000) and do networking.

A container isn't a process. It's an amalgamation of cgroups and namespaces. A container can have many processes. Hell, use systemd-nspawn on a volume that contains a Linux distro and your container is basically the entire userspace of a full system.

> But what do I do if I have another computer on the same network? How does that container know that 10.0.1.104 belongs to a container on my computer?

Well, BGP certainly isn't a hard requirement. Depending on how you've set up your network, if these are in the same subnet and can communicate via layer 2, you don't need any sort of routing.

> To me, this seems pretty nice. It means that you can easily interpret the packets coming in and out of your machine (and, because we love tcpdump, we want to be able to understand our network traffic). I think there are other advantages but I'm not sure what they are.

I'm not sure where the idea that Calico/BGP are required to look at network traffic for containers on your machine came from. If there's network traffic on your machine, you can basically always capture it with tcpdump.

> I find reading this networking stuff pretty difficult; more difficult than usual. For example, Docker also has a networking product they released recently. The webpage says they're doing "overlay networking". I don't know what that is, but it seems like you need etcd or consul or zookeeper. So the networking thing involves a distributed key-value store? Why do I need to have a distributed key-value store to do networking? There is probably a talk about this that I can watch but I don't understand it yet.

I think not at all understanding one of the major players in container networking is a good indication it might not yet be time to personally write a blog about container networking. Also absent is simple bridging.

Julia generally writes fantastic blogs, and I know she doesn't claim to be an expert on this subject and includes a disclaimer about how this is likely to be more wrong than usual, but I feel like there was a lot of room for additional research to produce a more accurate article. I understand the blog is mostly about what she has recently learned, and often leaves lots of questions unanswered... but this one has a lot of things that are answered, incorrectly :(
philip1209 almost 9 years ago

The internal OpenDNS Docker system, Quadra, relies on BGP for a mix of on-prem and off-prem hosting:

http://www.slideshare.net/bacongobbler/docker-with-bgp
otterley almost 9 years ago

The real problem is that cloud providers don't provide out-of-the-box functionality to assign more than one IP to a network interface. If they did, there wouldn't even be an issue.

I've been requesting this feature from the EC2 team at AWS for some time, to no avail. You can bind multiple interfaces (ENIs) to an instance (up to 6, I think, depending on the instance size), each with a separate IP address, but not multiple IPs to a single interface.

BGP, flannel, VXLAN, etc. are IMO a waste of cycles and add needless complexity to what could otherwise be a very simple architecture.
dozzie almost 9 years ago

Oh boy. And containers were supposed to make things *easier*.