
Comparison of Kubernetes Ingress Controllers

103 points by etxm over 4 years ago

19 comments

csunbird over 4 years ago
Skipper ingress controller is missing!

https://github.com/zalando/skipper

https://opensource.zalando.com/skipper/data-clients/kubernetes/

frompdx over 4 years ago
This is definitely useful, but I'm not sure it is up to date/complete. For example, the spreadsheet says ingress-nginx does not have authentication support, but according to the docs it has support for basic, client cert, external basic, and external oauth. But this info is easy to miss because it is hidden in the "examples" section of the docs.

https://kubernetes.github.io/ingress-nginx/examples/auth/basic/

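The basic-auth case from those docs comes down to a few annotations on the Ingress object. A minimal sketch (the secret, host, and service names here are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: protected-app
      annotations:
        nginx.ingress.kubernetes.io/auth-type: basic
        nginx.ingress.kubernetes.io/auth-secret: basic-auth   # Secret holding an htpasswd file under the key "auth"
        nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
    spec:
      ingressClassName: nginx
      rules:
      - host: app.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80

Per the same docs, the secret is created from an htpasswd file, e.g. kubectl create secret generic basic-auth --from-file=auth.
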
gwittel over 4 years ago
This is a nice summary. It would be nice if it included the deployment model, like DaemonSet vs ReplicaSet.

With the plethora of options, what's missing for me is which ones perform well under heavy load. It's painful to find out after the fact, since each ingress controller deploys in differing (and sometimes incompatible) ways.

For example, with nginx-ingress there are gotchas under heavy load. nginx-ingress doesn't support SSL session caching on the upstream (nginx <-> your pod). This is a deficiency in the lua-balancer implementation. You can tune keep-alive requests on the upstream, but it isn't always enough. That 50% CPU savings from SSL resumption is costly to lose at times.

This has bitten me when a client-side connection burst requires a connection burst in nginx <-> service. Upstream services then burn a lot of CPU negotiating SSL, to the detriment of request processing. This then causes more nginx connections to open up due to slower request processing, and might cause healthchecks to fail. There just aren't enough tuning parameters to control how hard nginx hits your upstream pods.

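The keep-alive tuning mentioned above lives in the controller's ConfigMap. A sketch of the relevant ingress-nginx keys (the values are illustrative, not recommendations):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    data:
      # Hold upstream connections open longer so fewer TLS handshakes
      # hit the backend pods during a connection burst
      upstream-keepalive-connections: "512"
      upstream-keepalive-requests: "10000"
      upstream-keepalive-timeout: "60"
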
adamgordonbell over 4 years ago
Ingress in K8S is too complex. We need some sane defaults. It seems like there are more decisions to be made upfront than are really necessary.

We are setting up our cluster and we ended up going with Traefik, and I just published an interview with our architect where he explained why he chose Traefik. Excuse the plug, but it's here:

https://blog.earthly.dev/building-on-kubernetes-ingress/#kubernetes-ingest-strategies-

The short version is that he finds it easier to set up than Nginx. I think learning curves are an important metric that must be considered as well.

jwineinger over 4 years ago
Istio is reported as having rate limiting, but they deprecated support for that in 1.5 (and 1.7 is current). Now they tell you to do it yourself via Envoy directly. https://istio.io/latest/docs/tasks/policy-enforcement/

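"Via Envoy directly" in practice means patching the proxy config with an EnvoyFilter. A trimmed sketch using Envoy's v3 local rate limit filter -- the exact shape varies by Istio/Envoy version, and the workload label and bucket sizes here are made up:

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: local-ratelimit
    spec:
      workloadSelector:
        labels:
          app: my-app   # hypothetical workload
      configPatches:
      - applyTo: HTTP_FILTER
        match:
          context: SIDECAR_INBOUND
        patch:
          operation: INSERT_BEFORE
          value:
            name: envoy.filters.http.local_ratelimit
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
              stat_prefix: http_local_rate_limiter
              token_bucket:
                max_tokens: 100        # burst size
                tokens_per_fill: 100   # refill amount
                fill_interval: 1s      # refill period
              filter_enabled:
                runtime_key: local_rate_limit_enabled
                default_value: {numerator: 100, denominator: HUNDRED}
              filter_enforced:
                runtime_key: local_rate_limit_enforced
                default_value: {numerator: 100, denominator: HUNDRED}
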
sandGorgon over 4 years ago
This is excellent. One thing is missing: support for Proxy Protocol.

It is the only real, standards-compliant way to preserve client information (IP address, etc.) while it's moving inside a Kubernetes cluster. Not all ingresses support injecting it. Most ingresses can read it (assuming a cloud load balancer has inserted it already).

We had moved to haproxy for this reason.

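To make the "inject" vs "read" split concrete: with ingress-nginx behind an AWS load balancer, for example, proxy protocol has to be enabled on both hops. A sketch (names are placeholders; the annotation key is the in-tree AWS cloud provider's):

    # Service fronting the controller: ask the cloud load balancer to send PROXY protocol
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    spec:
      type: LoadBalancer
      selector:
        app: ingress-nginx
      ports:
      - name: https
        port: 443
        targetPort: 443
    ---
    # Controller ConfigMap: tell nginx to expect and parse the PROXY header
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ingress-nginx-controller
    data:
      use-proxy-protocol: "true"
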
quaffapint over 4 years ago
Ingress controllers in K8S can and should be simplified. For most cases you'll need a controller, but setting one up can be daunting given the various options, with most of them under constant change and documentation of varying quality.

This is why things like K3s bundle Traefik to save you the pain, but really this should be the standard. It should be swappable (like it is in K3s), but come with something already available.

Updated - Thanks for the reminder - it's ingress controller, not ingress.

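For what it's worth, the swap in K3s is a one-liner. A sketch using K3s's config file (equivalent to the --disable flag, assuming a K3s version that supports the config-file mechanism):

    # /etc/rancher/k3s/config.yaml -- same effect as `k3s server --disable traefik`
    disable:
      - traefik

After that you deploy whichever ingress controller you prefer.
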
grafelic over 4 years ago
HAProxy has a dashboard the last time I checked, which is today.

Thanks for the effort! A very nice overview, which makes choosing between load balancer implementations when looking for a specific feature a lot easier. Somehow tables like these are hard to find when you actually need them; good to know this one exists.

fosk over 4 years ago
I am seeing more and more users wanting to implement end-to-end connectivity from GW to service mesh. Particularly when it comes to Kong, we have done all the heavy lifting in Kong[1] + Kuma[2] (the latter a CNCF project) to do that.

Typically we want to create a service mesh overlay across our applications and their services - to secure and observe the underlying service traffic - and still expose a subset of those via an API GW (and via an Ingress Controller) at the edge, to either mobile applications or an ecosystem of partners (where a sidecar pattern model is not feasible).

With Kuma and its "gateway" data plane proxy mode, this can be easily achieved via the Kong Ingress Controller, which is mentioned in this spreadsheet.

Disclaimer: I am a maintainer of both Kong and Kuma.

[1] - https://github.com/Kong/kong

[2] - https://github.com/kumahq/kuma

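On Kubernetes, Kuma's gateway mode is driven by pod annotations, roughly like the sketch below (annotation names as I understand them from Kuma's docs; the Deployment details are made up):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: kong-proxy
    spec:
      selector:
        matchLabels: {app: kong-proxy}
      template:
        metadata:
          labels: {app: kong-proxy}
          annotations:
            kuma.io/sidecar-injection: enabled  # join the mesh
            kuma.io/gateway: enabled            # gateway mode: inbound traffic is not intercepted
        spec:
          containers:
          - name: proxy
            image: kong:2.1   # illustrative tag
            ports:
            - containerPort: 8000
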
jrockway over 4 years ago
Ingress is a big disaster, and it is probably the first thing people switching to Kubernetes encounter.

The large underlying problem is that the Ingress controller is the place where people need to do a lot of very important things, and the API doesn't specify a compatible way to do those things. Even something as simple as routing ingress:/api/v1/(.*)$ to a backend api-server-v1:/($1) isn't specified. Nginx has its own proprietary way to do it. Traefik has its own proprietary way to do it. Every reverse proxy has a way to do this, because it's a common demand. But to do this in Kubernetes, you will have to hope that there is some magical annotation that does it for you (different between every Ingress controller, so you can never switch), or come up with some workaround.

Composing route tables is another problem (which order do the routing rules get evaluated in?), and Ingress again punts. Some controllers pick date-of-admission on the Ingress resource, meaning that you'll never be able to recreate your configuration again. (Do you store resource application date in your gitops repo? Didn't think so.) Some controllers don't even define an order! The API truly fails at even medium-complexity operations. (It's good, I guess, for deploying hello-app in minikube. But everything is good at running hello-app on your workstation.)

Then there are deeper features that are simply not implemented, and that seriously hurt the ecosystem in general. One big feature that apps need is authentication and authorization handled at the ingress controller level. If that were reliable, then apps wouldn't have to bundle Dex or roll their own non-single-sign-on. Cluster administrators are forced to configure that every time, and users are forced to sign in 100 times a day. But the promise of containerization was that you'd never have to worry about that again -- the environment would provide crucial services like authentication, and the developer just had to worry about writing their app to that API. The result, of course, is a lot of horrifying workarounds (pay a billion dollars a month to Auth0 and use oauth-proxy, etc.). (I wrote my own at my last job and boy was it wonderful. I'm verrrrry slowly writing an open-source successor, but I'm not even going to link it because it's in such an early stage. Meanwhile, I continue to suffer from not having this every single day.)

It's not just auth; it's really all cross-cutting concerns. Every ingress controller handles metrics differently. ingress-nginx has some basic prometheus metrics and can start Zipkin traces. Ambassador and Istio can do more (statsd, opencensus, opentracing plugins), but only with their own configuration layer on top of raw Envoy configuration (and you often have to build your own container to get the drivers). The result is that something that's pretty easy to do is nearly impossible for all but the most dedicated users. The promise of containerization basically failed; if you really look hard enough, you'll see that you're no better off than with nginx sitting in front of your PHP app. At least you can edit nginx.conf in that situation.

My personal opinion is to not use it. I just use an Envoy front proxy and an xDS server that listens to the Kubernetes API server to set up backends (github.com/jrockway/ekglue). Adding the backends to the configuration automatically saves a lot of configuration toil, but I still write the route table manually so it can do exactly what I want. It doesn't have to be this way, but it is. So many people are turned off of Kubernetes because the first thing they have to do is find an Ingress controller. In the best case, they decide they don't need one. In the worst case, they end up locked into a proprietary hell. It makes me very sad.

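To illustrate the annotation lock-in from the first paragraph: the /api/v1 capture-group example, spelled in ingress-nginx's proprietary dialect (service name made up; Traefik and HAProxy spell the same rewrite entirely differently):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: api-v1
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /$1   # $1 = first capture group from the path
    spec:
      ingressClassName: nginx
      rules:
      - http:
          paths:
          - path: /api/v1/(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: api-server-v1
                port:
                  number: 80
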
tomphoolery over 4 years ago
I really liked Traefik when I used it last. It seemed straightforward to use in both docker-compose and Kubernetes environments, allowing me to mess around with settings locally before I deployed to the cluster.

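A sketch of why the docker-compose path feels low-friction (Traefik v2-style labels; the service names and host below are made up):

    # docker-compose.yml
    services:
      traefik:
        image: traefik:v2.3
        command:
          - --providers.docker=true
          - --entrypoints.web.address=:80
        ports:
          - "80:80"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
      app:
        image: my-app:latest
        labels:
          - traefik.http.routers.app.rule=Host(`app.localhost`)
          - traefik.http.routers.app.entrypoints=web

The same router/entrypoint vocabulary then carries over to Traefik's Kubernetes CRDs, which is presumably part of the appeal.
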
TuringNYC over 4 years ago
The document has (section 12) "Developer Portal", which is good, but I'd suggest making this a more prominent item, perhaps even breaking it up into "Documentation", "Examples", Primary Support Channel (Github/SO), etc.

I recently tried every single K8s IC, one by one, painfully. The biggest challenge was documentation; even something as simple as an example was missing for many of them. They would have examples for one use case, but not per use case. It was incredibly frustrating.

fulafel over 4 years ago
What would it look like to use just normal IP addressing and DNS instead of proxies, NAT, and ambiguous RFC 1918 addresses? Have a bunch of public API endpoints exposed in DNS and rotate new ones in/out at new names and addresses. Then a fallback proxy for v4 clients, switching on the Host header.

Proxies seem to add a lot of complexity and indirection (not to mention inefficiency).

kingnothing over 4 years ago
What's the difference between a check in a green box vs a check in a blue box?

cuillevel3 over 4 years ago
Traefik supports Let's Encrypt and DNS updates (via lego, I think).

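In Traefik v2 terms that is a certificate resolver in the static configuration. A sketch (email, storage path, and provider are placeholders; the DNS providers do come from lego):

    # traefik.yml (static configuration)
    certificatesResolvers:
      le:
        acme:
          email: admin@example.com
          storage: acme.json
          dnsChallenge:
            provider: cloudflare   # any lego-supported DNS provider
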
renke1 over 4 years ago
I am not a K8S expert, but why do (most?) cloud load balancers not act as ingress controllers?

captn3m0 over 4 years ago
What's the difference b/w a green and a blue tick?

soulmaniqbal over 4 years ago
This is a great resource! Thanks for sharing it!

geuis over 4 years ago
Does this need to be a Google Sheet? It makes it really difficult to read on mobile.