One does not simply go from a flat network to overlays. Overlays are slow, difficult to debug, cause really odd failures, and are often hilariously immature. They are the experimental graph database of the network world.<p>Just have a segregated network, and let the VPC/DHCP do all the hard stuff.<p>Have your hosts on the default VLAN (or interface, if you're in the cloud), with its own subnet (a subnet should only exist in one VLAN). Then, if you are in cloud land, have a second network adaptor on a different subnet. If you are running real steel, you can use a bonded network adaptor with multiple VLANs on the same interface. (The need for a VLAN in a VPC isn't that critical, because there are other tools to impose network segregation.)<p>Then use macvtap or macvlan (or whichever mechanism gives each container its own MAC address) to give each container its own IP. This means your container is visible on that entire subnet, both inside the host and outside it.<p>There is no need to faff with routing; it comes for free with your VPC/network or similar. Each container automatically gets a hostname, IP, and route. It will also be fast. As a bonus, it can all be created up front using CloudFormation or TF.<p>You can have multiple adaptors on a host, so you can separate different classes of container.<p>Look, the more networking you can offload to the actual network, the better.<p>If you ever find yourself re-creating DHCP/routing/DNS in your project, you need to take a step back and think hard about how you got there.<p>70% of the networking modes in k8s are batshit insane. A large number are basically attempts at vendor lock-in, or worse, someone's experiment that's got out of hand. I know networking has always been really poor in docker land, but there are ways to beat the stupid out of it.<p>The golden rule is this:<p>Always. Avoid. Network. Overlays.
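For what it's worth, the macvlan approach is already built into Docker's network drivers; a minimal sketch, where the parent interface `eth0`, the subnet `10.0.1.0/24`, and the network name are assumptions for illustration:

```shell
# Create a macvlan network whose containers sit directly on the host's
# subnet, each with its own MAC address (interface/subnet are examples).
docker network create -d macvlan \
  --subnet=10.0.1.0/24 \
  --gateway=10.0.1.1 \
  -o parent=eth0 \
  pub_net

# A container on this network gets an IP reachable from the whole
# subnet, not just from the host.
docker run --rm --network pub_net --ip 10.0.1.23 alpine ip addr show eth0
```

One caveat worth knowing: by default the host itself cannot reach its own macvlan containers directly; the usual workaround is to give the host a macvlan sub-interface of its own on the same parent.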
Site is having issues atm... but I'll throw something out there I'd really like to see.<p>We encrypt 100% of our machine-to-machine traffic at the TLS level. There's a lot of shuffling of certs around to get some webapp to talk to postgres, then have that webapp serve https to haproxy, etc.<p>It'd be awesome if there was a way your cloud servers could just talk to each other using WireGuard by default. We looked at setting it up, but it'd need to be automated somehow for anything above a handful of systems :/
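For a pair of hosts the manual setup is actually small; it's the per-pair key distribution that doesn't scale. A rough sketch of one side (the interface name, tunnel subnet, and peer placeholder are all made up):

```shell
# Generate this host's keypair.
wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey

# Minimal wg-quick config; 10.10.0.0/24 is a hypothetical tunnel subnet.
cat > /etc/wireguard/wg0.conf <<EOF
[Interface]
Address = 10.10.0.1/24
PrivateKey = $(cat /etc/wireguard/privatekey)
ListenPort = 51820

[Peer]
PublicKey = <peer-public-key>
AllowedIPs = 10.10.0.2/32
Endpoint = peer.internal.example:51820
EOF

wg-quick up wg0
```

Multiply that [Peer] block by every machine that needs to talk to every other machine and it's clear why you end up wanting some control plane to distribute keys for you.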
In my mind, a "layer 2 subnet" really doesn't mean anything. Subnets are a layer 3 (IP) concept; layer 2 is the link layer, i.e. Ethernet or WLAN, which has no concept of subnets.<p>Edit: also, the OSI layer model was specified in the eighties, and isn't all that accurate a description of how our networks actually work in 2019.
This article uses Quagga - they really should be using FRRouting, which was forked from Quagga in 2017 by the core Quagga developers and has 4 times as many commits (16000[0] vs 4000[1]), far more features, bugfixes, etc. Quagga has been dead for over a year.<p>[0] <a href="https://github.com/FRRouting/frr" rel="nofollow">https://github.com/FRRouting/frr</a><p>[1] <a href="http://gogs.quagga.net/Quagga" rel="nofollow">http://gogs.quagga.net/Quagga</a>
"Vxlan uses multicast which is often not supported on most cloud networks. So its best used on your own networks."<p>Not entirely correct.<p>Linux has had unicast VXLAN for quite some time.<p>Flannel does unicast and works pretty much anywhere.<p>See the "Unicast with dynamic L3 entries" section:
<a href="https://vincent.bernat.ch/en/blog/2017-vxlan-linux" rel="nofollow">https://vincent.bernat.ch/en/blog/2017-vxlan-linux</a>
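To illustrate: the simplest unicast variant just replaces the multicast group with static FDB entries pointing at each remote VTEP; the "dynamic L3 entries" variant in the linked post goes further and fills the entries in on demand. A sketch with made-up addresses:

```shell
# VXLAN interface with no multicast group and source learning disabled.
ip link add vxlan0 type vxlan id 100 dstport 4789 \
    local 192.0.2.10 nolearning

# All-zeros FDB entries flood broadcast/unknown traffic to each remote
# VTEP over plain unicast UDP -- no multicast required.
bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst 192.0.2.11
bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst 192.0.2.12

ip addr add 10.100.0.10/24 dev vxlan0
ip link set vxlan0 up
```

The obvious downside of the static version is that every VTEP has to know about every other one up front, which is exactly the bookkeeping tools like Flannel automate.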