
Riding the Tiger: Lessons Learned Implementing Istio

34 points by zwischenzug about 5 years ago

7 comments

joekrill about 5 years ago
This jives pretty well with my (admittedly little) experience with Istio. It's certainly frustrating at times, but I still find the documentation is actually pretty decent.

It's just that everything seems to lead down a rabbit hole. But this, I think, is just a Kubernetes thing in general. I had the same experience they did with a monitoring stack. But that's because you have to ramp up on so many additional technologies (Prometheus, Grafana, Kiali, etc.). And not just ramp up on their usage, but how they work, interact, and are configured. For Prometheus, for example, they suggest using a federated setup, which adds additional complexities.

I messed around with strict mTLS for quite some time before simply giving up - it just wasn't worth the time sink.

But in general I agree with the conclusions. Most things are pretty straightforward and the documentation has really good examples (using their "Bookinfo" project). It's just the "going off the beaten path" thing they describe when things become difficult.
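
For reference, the strict mTLS that joekrill gave up on is nowadays a single mesh-wide resource; whether that would have saved the time sink circa 2020 is another matter, since the PeerAuthentication API only arrived in Istio 1.5 (older releases used MeshPolicy). A minimal sketch:

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: istio-system   # applying to the root namespace makes it mesh-wide
    spec:
      mtls:
        mode: STRICT            # sidecars reject any plaintext peer traffic

The usual pain is rarely the policy itself but everything outside the mesh - jobs, probes, external clients - that suddenly can't speak plaintext to anything inside it.
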
pcj-github about 5 years ago
Can completely relate to this article... I've been up and down rabbit holes trying to find an ingress controller that works well as an edge proxy for gRPC streaming services (tried nginx-ingress, contour, esp, istio, ambassador). Very challenging to get the configuration right (and I haven't found it yet).
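
For what it's worth, gRPC ingress through Istio is the Gateway + VirtualService pair below; gRPC rides on HTTP/2, so it's routed with ordinary `http` rules. The hostname and backend are hypothetical, and this sketch says nothing about the streaming-timeout and load-balancing knobs that usually make or break gRPC streaming:

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: grpc-gateway
    spec:
      selector:
        istio: ingressgateway        # bind to Istio's default ingress deployment
      servers:
      - port:
          number: 80
          name: grpc
          protocol: GRPC
        hosts:
        - "grpc.example.com"         # hypothetical host
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: grpc-routes
    spec:
      hosts:
      - "grpc.example.com"
      gateways:
      - grpc-gateway
      http:
      - route:
        - destination:
            host: grpc-backend       # hypothetical in-cluster service
            port:
              number: 50051
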
arrayjumper about 5 years ago
This is very close to our experience of working with Kubernetes and Istio for over a year. You get so many things seemingly for free that when things work as expected, the whole thing is actually quite nice.

It is when things don't work as expected that it's really hard to find help. We had this issue a couple of weeks ago where we were trying to connect from one of our clusters to an Elasticache instance. We could connect to it from a namespace with Istio sidecars disabled but not from a namespace with them enabled. We could also connect to it from a namespace in a different cluster which did have Istio enabled. It took nearly a week to figure that out because there is so little prior art (in terms of Stack Overflow questions, GitHub issues, etc.). This comes with the territory of using something relatively new, I suppose.
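
The comment doesn't say what the eventual cause was, but the classic culprit for "works without the sidecar, fails with it" egress is that external services like ElastiCache need to be declared to the mesh. A sketch of the usual ServiceEntry fix, with a made-up endpoint; whether this was arrayjumper's actual issue is a guess:

    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: elasticache-redis
    spec:
      hosts:
      - my-cache.abc123.use1.cache.amazonaws.com   # hypothetical endpoint
      location: MESH_EXTERNAL                      # lives outside the mesh
      resolution: DNS
      ports:
      - number: 6379
        name: tcp-redis
        protocol: TCP                              # plain TCP, not HTTP
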
pluies about 5 years ago
This post certainly rings a bell! Even with deep Kubernetes experience, we struggled at times with Istio at $previous_job, especially around:

- Control-plane performance on a cluster with a large-ish number of pods (a thousand-plus) can be hit-and-miss, and "what to scale up" was hard to pinpoint (though admittedly it seems to be getting better).

- Istio upgrades are often a pain, but mostly around the actual way of deploying the upgrade rather than the upgrade itself. For a long time there was no official Helm chart, then there was a Helm chart, then two Helm charts; now it looks like Helm is deprecated and will be removed, and installing via `istioctl` is recommended instead... Some of it is due to the pain of upgrading CRDs, which is a general Kubernetes issue, but there's still a _lot_ of churn to keep up with.

- Adding a new VirtualService registered to the same hostname as an existing one will be accepted by Istio (at least as of 1.4), which will then proceed to _silently break all routing for new pods joining your Istio cluster_ (see the sketch after this comment)! This was a bear to debug too, given how noisy and confusing the Pilot logs are, and we ended up stitching together a custom Prometheus alert around it, given it bit us roughly every other week.

- HA for control-plane components isn't explained in the docs. Is it safe to run two Citadel pods? We did it and it seemed fine, but who knows?

- We sometimes ran into pathological cases where traffic would for some reason completely drop after a new deploy, then gradually pick up as the config was streamed onto sidecars over a span of ~10-15 minutes. We never managed to debug this issue (which happened probably half a dozen times over a year), and that fact alone turned me off the complexity of Istio in general.

(Some of these might have been fixed in Istio 1.5+, as 1.4 was my latest experience.)

Of course, once your setup is stable, everything is awesome: sidecar injection works flawlessly, observability is awesome, distributed tracing is a breeze, Kiali is a great crowd-pleaser when showing off features, and mTLS + TLS origination mean full on-the-wire encryption without losing any of the previous benefits... A lot of features that meant we carried on with it, but if I had to start again I'd probably take a good hard look at Linkerd before recommending Istio for any prod setup.
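
The VirtualService foot-gun pluies describes is easy to reproduce on paper: both resources below are accepted, yet they claim the same host. Names are illustrative; newer releases reportedly flag some of these conflicts via `istioctl analyze`, but as of 1.4 nothing complained:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews-team-a
    spec:
      hosts:
      - reviews.prod.svc.cluster.local   # first claim on this host
      http:
      - route:
        - destination:
            host: reviews
            subset: v1
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews-team-b               # second, conflicting claim: accepted
    spec:                                # without error, yet routing for new
      hosts:                             # sidecars may silently break
      - reviews.prod.svc.cluster.local
      http:
      - route:
        - destination:
            host: reviews
            subset: v2
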
jordanbeiber about 5 years ago
Envoy is where the magic happens.

For those interested in a quick spin of Envoy, take a look at Consul Connect for something a bit more concise.

Although not for k8s, the end goal is pretty much the same.
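
Standalone Envoy is also easy to spin up without any mesh or Consul at all: a static bootstrap file and the stock Docker image will do. A minimal v3 config along these lines (the upstream address is made up) listens on :10000 and proxies everything to one cluster:

    static_resources:
      listeners:
      - name: listener_0
        address:
          socket_address: { address: 0.0.0.0, port_value: 10000 }
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              stat_prefix: ingress_http
              route_config:
                name: local_route
                virtual_hosts:
                - name: backend
                  domains: ["*"]          # match every host header
                  routes:
                  - match: { prefix: "/" }
                    route: { cluster: service_backend }
              http_filters:
              - name: envoy.filters.http.router
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
      clusters:
      - name: service_backend
        connect_timeout: 1s
        type: STRICT_DNS
        load_assignment:
          cluster_name: service_backend
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address: { address: backend.local, port_value: 8080 }   # hypothetical upstream

Run it with something like `docker run -p 10000:10000 -v $PWD/envoy.yaml:/etc/envoy/envoy.yaml envoyproxy/envoy` and curl :10000.
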
DevopsQuestions about 5 years ago
The titular allusion: https://www.amazon.com/Ride-Tiger-Survival-Manual-Aristocrats/dp/0892811250
acd about 5 years ago
I tend to prefer simpler ingress controllers like Traefik.
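
For contrast, Traefik's happy path really is small: a plain Kubernetes Ingress with the Traefik class is all basic HTTP routing takes (hostname and service below are placeholders). The trade-off is that you get none of the mesh features the rest of the thread is wrestling with:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app
      annotations:
        kubernetes.io/ingress.class: traefik   # route through Traefik
    spec:
      rules:
      - host: app.example.com                  # hypothetical host
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app                      # hypothetical backend service
                port:
                  number: 80
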