
eBPF will help solve service mesh by getting rid of sidecars

237 points by tgraf, over 3 years ago

11 comments

dijit, over 3 years ago

Honestly, after I learned that the majority of Kubernetes nodes just proxy traffic between each other using iptables, and that a load balancer can't tell the nodes apart (ones where your app lives vs. ones that will proxy the connection to your app), I got really worried about any kind of persistent connection in k8s land.

Since some number of persistent connections will get force-terminated on scale-down or node-replacement events...

Cilium and eBPF look like a pretty good solution to this, though, since you can then advertise your pods directly on the network and load balance those instead of every node.
zdw, over 3 years ago

So instead of making the applications use a good RPC library, we're going to shove more crap into the kernel? No thanks, from a security and complexity perspective.

Per https://blog.dave.tf/post/new-kubernetes/ , the way this was solved in Borg was:

> "Borg solves that complexity by fiat, decreeing that Thou Shalt Use Our Client Libraries For Everything, so there's an obvious point at which to plug in arbitrarily fancy service discovery and load-balancing."

Which seems like a better solution, if requiring some reengineering of apps.
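The "mandatory client library" approach quoted above can be sketched in a few lines: discovery and load balancing live inside a library the application is required to use, so there is no proxy in the data path. This is an illustrative sketch only; the registry contents and class names are made up:

```python
import itertools

# Hypothetical in-process registry standing in for a real discovery service.
REGISTRY = {
    "billing": ["10.0.0.5:8443", "10.0.0.6:8443", "10.0.0.7:8443"],
}

class Client:
    """Sketch of the client-library approach: the library resolves the
    service name and balances across backends itself."""

    def __init__(self, service_name):
        backends = REGISTRY[service_name]          # service discovery
        self._picker = itertools.cycle(backends)   # trivial round-robin

    def pick_backend(self):
        return next(self._picker)

c = Client("billing")
print([c.pick_backend() for _ in range(4)])
# → ['10.0.0.5:8443', '10.0.0.6:8443', '10.0.0.7:8443', '10.0.0.5:8443']
```

The trade-off is exactly the one zdw names: every application must link (and keep current) this library, which is reengineering work, but the "plug in arbitrarily fancy" point is obvious.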
codetrotter, over 3 years ago

> Identity-based Security: Relying on network identifiers to achieve security is no longer sufficient; both the sending and receiving services must be able to authenticate each other based on identities instead of a network identifier.

Kinda semi-offtopic, but I am curious to know if anyone has used the identity part of a WireGuard setup for this purpose.

So say you have a bunch of machines all connected in a WireGuard VPN. Then, instead of your application knowing host names or IP addresses as the primary identifier of other nodes, your application refers to other nodes by their WireGuard public key?

I use WireGuard but haven't tried anything like that. Don't know if it would be possible or sensible. Just thinking and wondering.
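The idea floated above, treating the WireGuard public key as the primary identity and the tunnel IP as a mere routing detail, could be sketched like this (the peer table and addresses are hypothetical; the first key is the well-known example key from the WireGuard documentation):

```python
import base64

# Hypothetical peer table keyed by WireGuard public key (base64, 32 bytes).
PEERS = {
    "xTIBA5rboUvnH4htodjb6e697QjLERt1NAB4mZqp8Dg=": "10.100.0.2",
    "HIgo9xNzJMWLKASShiTqIybxZ0U3wGLiUeJ1PKf8ykw=": "10.100.0.3",
}

def endpoint_for(pubkey: str) -> str:
    """Resolve a peer by its cryptographic identity, not by hostname or IP."""
    raw = base64.b64decode(pubkey)
    if len(raw) != 32:                 # Curve25519 public keys are 32 bytes
        raise ValueError("not a valid WireGuard public key")
    return PEERS[pubkey]

print(endpoint_for("xTIBA5rboUvnH4htodjb6e697QjLERt1NAB4mZqp8Dg="))
# → 10.100.0.2
```

The open question the commenter raises remains: WireGuard authenticates the *host* holding the key, not the individual service on it, so this gives machine identity rather than the per-workload identity a mesh usually wants.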
Matthias247, over 3 years ago

I understand how BPF works for transparently steering TCP connections. But the article mentions gRPC, which means HTTP/2. How can the BPF module be a replacement for a proxy here? My understanding is it would need to understand HTTP/2 framing and maintain buffers, which all sound like capabilities that require more than BPF.

Are they implementing an HTTP/2-capable proxy in native kernel C code and making APIs to that accessible via BPF?
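To make the objection concrete: per-stream routing of gRPC requires at minimum parsing the 9-octet HTTP/2 frame header (RFC 7540 §4.1) to know which stream a frame belongs to, before any buffering or HPACK decoding even starts. A minimal sketch of that first step:

```python
import struct

# HTTP/2 frame header per RFC 7540 §4.1:
#   24-bit length | 8-bit type | 8-bit flags | 1 reserved bit + 31-bit stream id
def parse_frame_header(buf: bytes):
    if len(buf) < 9:
        raise ValueError("need 9 bytes of frame header")
    length_hi, length_lo, ftype, flags, stream = struct.unpack(">BHBBI", buf[:9])
    length = (length_hi << 16) | length_lo
    stream_id = stream & 0x7FFFFFFF   # mask off the reserved top bit
    return length, ftype, flags, stream_id

# A HEADERS frame (type 0x1) with END_HEADERS flag (0x4) on stream 1,
# announcing a 16-byte header block.
hdr = bytes([0x00, 0x00, 0x10, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01])
print(parse_frame_header(hdr))   # → (16, 1, 4, 1)
```

Frame parsing is the easy part; the header block that follows is HPACK-compressed with connection-wide state, which is exactly the kind of stateful buffering that is awkward to do purely in BPF.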
xmodem, over 3 years ago

Doing this with eBPF is definitely an improvement, but when I look at some of the sidecars we run in production, I often wonder why we can't just... integrate them into the application.
unmole, over 3 years ago

Offtopic: I really like the style of the diagrams. I remember seeing something similar elsewhere. Are these manually drawn, or are they the result of some tool?
zinclozenge, over 3 years ago

It's not clear how eBPF will deal with mTLS. I actually asked that when interviewing at a company using eBPF for observability into Kubernetes, and the answer was that they didn't know.

Yeah, if you're getting TLS termination at the load balancer prior to k8s ingress, then it's pretty nice.
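For context on what "dealing with mTLS" means at the socket layer: each side must present a certificate *and* require one from its peer. A minimal sketch with Python's standard `ssl` module (the certificate paths are hypothetical and left commented out):

```python
import ssl

# Server-side mTLS context: require and verify a client certificate.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED           # demand a peer certificate
# Paths below are illustrative, e.g. mounted by whatever issues workload certs:
# ctx.load_cert_chain("/run/identity/tls.crt", "/run/identity/tls.key")
# ctx.load_verify_locations("/run/identity/ca.crt")
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # → True
```

The handshake and record encryption this context drives live in userspace TLS libraries, which is why moving mTLS itself into eBPF is the part people keep asking about.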
manvendrasingh, over 3 years ago

I am wondering how this would solve the problem of mTLS while still supporting service-level identities. Is it possible to move the mTLS to listeners instead of a sidecar, or some other mechanism?
davewritescode, over 3 years ago

From a resource perspective this makes sense, but from a security perspective this drives me a little bit crazy. Sidecars aren't just for managing traffic; they're also a good way to automate managing the security context of the pod itself.

The current security model in Istio delivers a pod-specific SPIFFE cert to only that pod, and pod identity is conveyed via that certificate.

That feels like a whole bunch of eggs in one basket.
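For readers unfamiliar with the identity scheme mentioned here: a SPIFFE ID is a URI of the form `spiffe://<trust-domain>/<workload-path>`, carried in the certificate's URI SAN. A sketch of the validation a peer performs (the trust domain and path below are illustrative, not from the article):

```python
from urllib.parse import urlparse

def parse_spiffe_id(uri: str):
    """Split a SPIFFE ID into (trust domain, workload path)."""
    u = urlparse(uri)
    if u.scheme != "spiffe" or not u.netloc:
        raise ValueError("not a SPIFFE ID")
    return u.netloc, u.path

domain, path = parse_spiffe_id("spiffe://cluster.local/ns/payments/sa/billing")
print(domain, path)   # → cluster.local /ns/payments/sa/billing
```

Because the identity is scoped to one pod's certificate, compromising a node's shared datapath (the "one basket" worry above) exposes every identity terminated there, rather than one pod's.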
outside1234, over 3 years ago

There is a good talk about this (and more) from KubeCon:

https://www.youtube.com/watch?v=KY5qujcujfI
ko27, over 3 years ago

Not convinced that this is a better solution than just implementing these features as part of the protocol. For example, most languages have libraries that support gRPC load balancing:

https://github.com/grpc/proposal/blob/master/A27-xds-global-load-balancing.md
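The in-protocol alternative amounts to the client picking an endpoint per call, the way gRPC's load-balancing policies do. A toy weighted picker illustrates the mechanism (the weights and addresses are invented; real xDS config is considerably richer):

```python
import random

# Toy per-call, client-side endpoint selection: weighted random choice
# over (address, weight) pairs, as a stand-in for a gRPC LB policy.
def pick_endpoint(endpoints, rng=random.random):
    total = sum(w for _, w in endpoints)
    r = rng() * total
    for addr, w in endpoints:
        r -= w
        if r <= 0:
            return addr
    return endpoints[-1][0]   # guard against float rounding

backends = [("10.0.1.1:50051", 3), ("10.0.1.2:50051", 1)]
counts = {addr: 0 for addr, _ in backends}
random.seed(42)
for _ in range(4000):
    counts[pick_endpoint(backends)] += 1
print(counts)   # the weight-3 backend receives roughly 3x the calls
```

Since the picker runs inside the client, there is no extra network hop at all, which is the core of the argument that protocol-level balancing can beat both sidecars and kernel-level redirection.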