
IPvlan overlay-free Kubernetes Networking in AWS

144 points · by theatrus2 · over 7 years ago

9 comments

ggm · over 7 years ago
I know this invites some eye-rolling, but can somebody explain to me why the k8s people insist on ignoring IPv6 and the possibilities of large address fields?

Down at the bottom, in the "things we probably will never do" section, is where IPv6 finally comes in the door.

Azure (for instance) is a fully IPv6-enabled fabric. Microsoft "get" IPv6. They are all over it. They understand it; it's baked into the DNA. So how come the K8s people just kind of think "yea.. nah.. not right now"?

Because proxying IPv6 at the edge is really sucky. We should be using native IPv6, preserving e2e under whatever routing model we need for reliability, and gatewaying the v4 through proxies in the longer term.

(serious Q btw)
deepakjois · over 7 years ago
Not directly related, but can someone recommend a beginners resource to understand Kubernetes networking? There are some good ones out there that explain basic Kubernetes concepts like pods, replicas etc. But networking seems to be a more complicated topic, and most intro guides skip over it.
muxator · over 7 years ago
For those wondering what's the difference between macvlan and ipvlan, the main ipvlan paper [0] summarizes its raison d'être:

> This is especially problematic where the connected next-hop e.g. switch is expecting frames from a specific mac from a specific port.

e.g.: if the host is attached to a managed switch with a strict security policy, macvlan would not work.

[0] https://www.netdevconf.org/0.1/sessions/28.html
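The practical distinction shows up when you create the interfaces: macvlan children each get their own MAC address, while ipvlan children share the parent's MAC and are distinguished only by IP. Below is a minimal sketch of both, assuming a parent interface named eth0, root privileges, and standard iproute2; the child interface names are made up for illustration.

```python
import subprocess

PARENT = "eth0"  # assumed parent NIC; adjust for your host

def sh(cmd):
    """Echo and run an iproute2 command, raising on failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# macvlan: the child interface gets its own, randomly generated MAC.
# A switch enforcing one-MAC-per-port (port security) may drop its frames.
sh(["ip", "link", "add", "link", PARENT, "name", "mvlan0",
    "type", "macvlan", "mode", "bridge"])

# ipvlan: the child interface reuses the parent's MAC; only the IP differs,
# so a strict L2 security policy on the upstream switch is not violated.
sh(["ip", "link", "add", "link", PARENT, "name", "ipvlan0",
    "type", "ipvlan", "mode", "l2"])

# Compare the MAC addresses of parent and children.
for ifname in (PARENT, "mvlan0", "ipvlan0"):
    sh(["ip", "-brief", "link", "show", ifname])
```

Listing the links afterwards should show mvlan0 carrying a distinct MAC while ipvlan0 shares eth0's, which is exactly the property the paper calls out for MAC-restricted switch ports.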
SEJeff · over 7 years ago
Just wanted to give a shout out to kube-router [1], a really fantastic solution if you want to use BGP, which will soon remove the need for BGP by implementing a feature set similar to flannel's host-gw support. They are really good about addressing things in the open on their GitHub [2]. BGP is, by definition, "web scale", as it runs most routing for the internet. Lower latency and much higher throughput than any sort of overlay network.

[1] https://www.kube-router.io

[2] https://github.com/cloudnativelabs/kube-router
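For context on what host-gw-style, overlay-free routing means here: every node installs a direct route to every other node's pod CIDR via that node's IP, so pod traffic is plain routed packets with no encapsulation. The following is a rough sketch of that idea only, with a hypothetical node list; it is not kube-router's actual implementation, which learns peers and prefixes via BGP or the Kubernetes API.

```python
import subprocess

# Hypothetical cluster layout: node IP -> pod CIDR assigned to that node.
NODES = {
    "10.0.1.10": "10.244.0.0/24",
    "10.0.1.11": "10.244.1.0/24",
    "10.0.1.12": "10.244.2.0/24",
}

LOCAL_NODE_IP = "10.0.1.10"  # assumed; skip the route to ourselves

def install_routes():
    """Install one direct route per remote node's pod CIDR (host-gw style)."""
    for node_ip, pod_cidr in NODES.items():
        if node_ip == LOCAL_NODE_IP:
            continue
        cmd = ["ip", "route", "replace", pod_cidr, "via", node_ip]
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

if __name__ == "__main__":
    install_routes()
```

No tunnels or VXLAN devices are involved; the trade-off is that the underlying network must carry these routes directly (nodes on a shared L2 segment, or entries in something like a VPC route table), which is where the 50-route default limit discussed in the article starts to bite.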
paxy · over 7 years ago
> Announcing cni-ipvlan-vpc-k8s

Rolls right off the tongue, doesn't it?
chris_marino · over 7 years ago
It's all about trade-offs. We've built a CNI for k8s and have looked into all of the techniques described. It seems that Lyft's design is a direct reflection of their requirements.

To the extent your requirements match theirs, this could be a good alternative. The most significant in my mind is that it's meant to be used in conjunction with Envoy. Envoy itself has its own set of design trade-offs as well.

For example, Lyft currently uses 'service-assigned EC2 instances'. Not hard to see how this starting point would influence the design. The Envoy/Istio model of proxy per pod also reflects this kind of workload partitioning. Obviously, a design for a small number of pods (each with their own proxy) per instance is going to be very different from one that needs to handle 100 pods (and their IPs), or more, per instance.

Another is that k8s network policy can't be applied, since 'Kubernetes Services see connections from a node's source IP instead of the Pod's source IP'. But I don't think this CNI is intended to work with any other network policy API enforcement mechanism. Romana (the project I work on) and the other CNI providers that use iptables to enforce network policy rely on seeing the pod's source IP.

Again, this might be fine if you're running Envoy. On the other hand, L3 filtering on the host might be important.

Also, this design requires that 'CNI plugins communicate with AWS networking APIs to provision network resources for Pods'. This may or may not be something you want your instances to do.

FWIW, Romana lets you build clusters larger than 50 nodes without an overlay or more 'exotic networking techniques' or 'massive' complexity. It does it via simple route aggregation, completely standard networking.
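To make the source-IP point concrete: iptables-based policy enforcement matches the pod's own IP in the packet, so if traffic has already been SNATed to the node address there is nothing pod-specific left to match against. A hedged sketch of the kind of rule such providers generate follows; the pod IPs are invented for illustration and this is not Romana's or any particular plugin's actual rule set.

```python
import subprocess

# Hypothetical pod IPs; a real CNI policy controller would discover these
# from the Kubernetes API as pods come and go.
BACKEND_POD_IP = "10.244.1.23"        # pod the policy protects
ALLOWED_CLIENT_POD_IP = "10.244.2.7"  # pod allowed to reach it

def apply_policy():
    """Allow traffic to the backend pod only from the allowed client pod.

    This only works if the packet still carries the client pod's IP as its
    source; if the CNI has already SNATed it to the node IP, the -s match
    below can no longer distinguish one pod on that node from another.
    """
    rules = [
        ["iptables", "-A", "FORWARD", "-d", BACKEND_POD_IP,
         "-s", ALLOWED_CLIENT_POD_IP, "-j", "ACCEPT"],
        ["iptables", "-A", "FORWARD", "-d", BACKEND_POD_IP, "-j", "DROP"],
    ]
    for cmd in rules:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

if __name__ == "__main__":
    apply_policy()
```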
bogomipz · over 7 years ago
The author states:

> "Unfortunately, AWS's VPC product has a default maximum of 50 non-propagated routes per route table, which can be increased up to a hard limit of 100 routes at the cost of potentially reducing network performance."

Could someone explain why increasing from 50 to 100 non-propagated routes in a VPC results in network performance degradation?
netingle · over 7 years ago
IIUC, ENIs are limited to 2 per host on small instances and 15 per host on larger ones. Doesn't this approach limit the number of Pods per host? I'm already running about 20 pods per host, and I don't think more containers per host is atypical.
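A rough back-of-the-envelope for that capacity question: if every pod needs its own secondary private IP on an ENI, pod capacity is roughly ENIs × (IPs per ENI − 1), since each ENI keeps one IP as its primary address. The figures below are illustrative placeholders, not the real per-instance-type AWS limits, which vary by instance size and should be checked in the AWS documentation.

```python
# Illustrative figures only; look up the actual per-instance-type ENI and
# IP-address limits in the AWS documentation before relying on them.
INSTANCE_LIMITS = {
    "small-instance":  {"enis": 2,  "ips_per_eni": 6},
    "large-instance":  {"enis": 8,  "ips_per_eni": 30},
    "larger-instance": {"enis": 15, "ips_per_eni": 50},
}

def max_pods(enis: int, ips_per_eni: int) -> int:
    """Each ENI keeps one primary IP for the host; the rest can back pods."""
    return enis * (ips_per_eni - 1)

for name, limits in INSTANCE_LIMITS.items():
    print(f"{name}: ~{max_pods(**limits)} pods "
          f"({limits['enis']} ENIs x {limits['ips_per_eni'] - 1} secondary IPs)")
```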
tamalsaha001 · over 7 years ago
How does it compare to AWS' own CNI plugin? https://github.com/aws/amazon-vpc-cni-k8s