Introducing the Fan – simpler container networking

142 points by TranceMan · almost 10 years ago

12 comments

tobbyb · almost 10 years ago
This looks unbelievably simple, along the lines of "why hasn't it been done before?"

So you ping 10.3.4.16 and your host automatically "knows" to just send it to 17.16.4.16, where, lying in wait, the receiving host simply forwards it to 10.3.4.16. I like it.

This is a vexing problem for container and even VM networking. If they are behind a NAT you need to create a mesh of tunnels across hosts, or you create a flat network so they are all on the same subnet. But you can't do this for containers in the cloud with a single IP and limited control of the networking layer.

Current solutions include L2 overlays, L3 overlays, a big mishmash of GRE and other types of tunnels, VXLAN multicast (unavailable in most cloud networks), or proprietary unicast implementations. It's a big hassle.

Ubuntu have taken a simple approach: no per-node database to maintain state, and it uses commonly available networking tools. More importantly, it seems fast, and it's here now. That 6 Gbps figure suggests this does not compromise performance the way a lot of other solutions tend to. It won't solve all multi-host container networking use cases, but it will address many.
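For readers who want to see the arithmetic behind "the host just knows," here is a minimal Python sketch, assuming the /8-over-/16 scheme described on the Ubuntu FanNetworking wiki (a host A.B.C.D on the /16 underlay owns the overlay subnet fan.C.D.0/24); the concrete ranges below are illustrative, not taken from a real deployment:

```python
import ipaddress

FAN_NET = ipaddress.ip_network("10.0.0.0/8")      # overlay range (example)
UNDERLAY = ipaddress.ip_network("17.16.0.0/16")   # underlay host range (example)

def fan_host_for(overlay_ip: str) -> str:
    """Derive the underlay host that owns a given fan overlay address.

    Assumes the wiki's scheme: host A.B.C.D owns overlay 10.C.D.0/24,
    so octets 2 and 3 of the overlay address name the host and the
    last octet names the container. No lookup table is needed.
    """
    addr = ipaddress.ip_address(overlay_ip)
    if addr not in FAN_NET:
        raise ValueError(f"{overlay_ip} is not a fan address")
    o = addr.packed                                # (10, C, D, X)
    u = UNDERLAY.network_address.packed            # (A, B, 0, 0)
    return str(ipaddress.ip_address(bytes([u[0], u[1], o[1], o[2]])))

print(fan_host_for("10.3.4.16"))                   # -> 17.16.3.4
```

Because the owning host is computable from the overlay address itself, each host needs only a single route for the whole fan range — the statelessness this comment is praising.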
frequent · almost 10 years ago
I'd like to object and say: "There is IPv6 in the cloud!"

We developed re6stnet in 2012. You can use it to create an IPv6 network on top of an existing IPv4 network. It's open source, and we have been using it ourselves internally and in client deployments ever since.

I wrote a quick blog post on it: http://www.nexedi.com/blog/blog-re6stnet.ipv6.since.2012

The repo is here in case anyone is interested: http://git.erp5.org/gitweb/re6stnet.git/tree/refs/heads/master?js=1
kbaker · almost 10 years ago
Sorry, I just gotta rant a bit... this is a really bad hack that I wouldn't trust on a production system. Instead of doubling down and working on better IPv6 support with providers and in software configuration, and defining best practices for working with IPv6, they just kinda gloss over it with a "not supported yet" and develop a whole system that will very likely break things in random ways.

> More importantly, we can route to these addresses much more simply, with a single route to the "fan" network on each host, instead of the maze of twisty network tunnels you might have seen with other overlays.

Maybe I haven't seen the other overlays (they mention flannel), but how does this not become a series of twisty network tunnels? Except now you have to manually add the addresses (static IPv4 addresses!) of the hosts to the route table? I see this as a huge step backwards... now you have to maintain address-space routes among a bunch of container hosts?

Also, they mention having up to thousands of containers on laptops, but their solution scales only to 250 before you need to set up another route plus a multi-homed IP? Or wipe out entire /8s?

> If you decide you don't need to communicate with one of these network blocks, you can use it instead of the 10.0.0.0/8 block used in this document. For instance, you might be willing to give up access to Ford Motor Company (19.0.0.0/8) or Halliburton (34.0.0.0/8). The Future Use range (240.0.0.0/8 through 255.0.0.0/8) is a particularly good set of IP addresses you might use, because most routers won't route it; however, some OSes, such as Windows, won't use it. (from https://wiki.ubuntu.com/FanNetworking)

Why are they reusing IP address space marked "not to be used"? Surely there will be some router, firewall, or switch that drops those packets arbitrarily, resulting in very-hard-to-debug errors.

--

This problem is already solved with IPv6. Please, if you have this problem, look into using IPv6. This article has plenty of ways to solve it with IPv6:

https://docs.docker.com/articles/networking/

If your provider doesn't support IPv6, please try a tunnel provider to get your very own IPv6 address space, like https://tunnelbroker.net/

Spend the time to learn IPv6; you won't regret it 5-10 years down the road...
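As a quick illustration of the reserved-range concern (a sketch, not part of the Fan tooling), Python's standard ipaddress module already distinguishes the two kinds of block the wiki proposes borrowing:

```python
import ipaddress

# The "Future Use" block the wiki suggests borrowing is formally reserved;
# conforming stacks such as Windows refuse to use it at all.
print(ipaddress.ip_address("240.0.0.1").is_reserved)  # True

# Ford's 19.0.0.0/8, by contrast, is plain routable unicast space, so
# hijacking it only breaks reachability to that one organisation.
print(ipaddress.ip_address("19.1.2.3").is_global)     # True
```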
ademarre · almost 10 years ago
I'd like to see a better explanation of how this compares to the various Flannel backends (https://github.com/coreos/flannel#backends), and also how this would be plugged into a Kubernetes cluster.
regularfry · almost 10 years ago
Or you could go somewhere with IPv6. The number of places with an IPv4-only restriction is only going to drop.
rsync · almost 10 years ago
"Also, IPv6 is nowhere to be seen on the clouds, so addresses are more scarce than they need to be in the first place."

We've[1] had IPv6-addressable cloud storage since 2006.

Currently our US (Denver), Hong Kong (Tseung Kwan O), and Zurich locations have working IPv6 addresses.

[1] You know who we are.
paulasmuth · almost 10 years ago
I don't seem to get it. How is this different from just using a non-routed IP per container?
api · almost 10 years ago
Probably doesn't matter much here, but 240.0.0.0/4 is *hard-coded to be unusable* on Windows systems. It's in the Windows IP stack somewhere. Packets to/from that network will simply be dropped.
stephengillie · almost 10 years ago
I've read the article twice. Did they just reinvent putting DHCP behind a NAT? What does that combination of systems not do that Fan does?

- Remap 50 addresses from one range to another.
- Dynamically assign those addresses to servers.
- Special Something that Fan does.

What's the benefit of using a full class A subnet when you are only using 250 addresses?
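One plausible answer to the class A question, sketched in numbers (an illustration of the fan design as described on the wiki, not a quote from the article): the overlay encodes host identity as well as container identity, so the /8 is sized for every host on the underlay /16, not for one host's ~250 containers:

```python
# A /8 overlay stretched over a /16 underlay gives every possible
# underlay host its own /24 of container addresses.
hosts = 2 ** 16              # addresses on the /16 underlay
per_host = 2 ** 8 - 2        # usable addresses in each host's /24
print(per_host)              # 254 containers per host
print(hosts * per_host)      # ~16.6 million overlay addresses in total
```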
geku · almost 10 years ago
Seems to be a smart solution, but it only works when you have control over the "real" /16 network, if I understand it correctly? E.g. having multiple nodes on multiple cloud providers, with completely different IP addresses not in the same /16 network, will not work, correct?
GauntletWizard · almost 10 years ago
Why do people keep giving whole IP addresses to every little container? It's a terrible management paradigm compared to service discovery and using host ports for every address.
rcarmo · almost 10 years ago
This is very neat indeed, and I'd love to try it out, but the Launchpad links are broken. Anyone know where I can get the package for Ubuntu armhf? Or the source?