Ingress is a big disaster, and it's probably the first thing people switching to Kubernetes encounter.

The underlying problem is that the Ingress controller is where people need to do a lot of very important things, and the API doesn't specify a portable way to do any of them. Even something as simple as routing /api/v1/(.*) on the ingress to /$1 on a backend service like api-server-v1 isn't specified. Nginx has its own proprietary way to do it. Traefik has its own proprietary way to do it. Every reverse proxy can do this, because it's a common demand. But to do it in Kubernetes, you have to hope there's some magical annotation that does it for you (different for every Ingress controller, so you can never switch), or come up with a workaround (first sketch below).

Composing route tables is another problem (in which order do the routing rules get evaluated?), and Ingress again punts. Some controllers pick date-of-admission of the Ingress resource, meaning you'll never be able to recreate your configuration. (Do you store resource application dates in your gitops repo? Didn't think so.) Some controllers don't even define an order (second sketch below). The API truly fails at even medium-complexity operations. (It's good, I guess, for deploying hello-app in minikube. But everything is good at running hello-app on your workstation.)

Then there are deeper features that are simply not implemented, and their absence seriously hurts the ecosystem. One big feature apps need is authentication and authorization handled at the ingress controller level. If that were reliable, apps wouldn't have to bundle Dex or roll their own non-single-sign-on login. Instead, cluster administrators are forced to configure auth for every app, and users are forced to sign in 100 times a day. The promise of containerization was that you'd never have to worry about this again: the environment would provide crucial services like authentication, and the developer would just write their app against that API. The result, of course, is a lot of horrifying workarounds: pay a billion dollars a month to Auth0 and bolt on oauth2-proxy, etc. (third sketch below). (I wrote my own at my last job and boy was it wonderful. I'm verrrrry slowly writing an open-source successor, but I'm not even going to link it because it's at such an early stage. Meanwhile, I continue to suffer from not having it every single day.)

It's not just auth; it's all cross-cutting concerns. Every ingress controller handles metrics differently. ingress-nginx has some basic Prometheus metrics and can start Zipkin traces (fourth sketch below). Ambassador and Istio can do more (statsd, OpenCensus, and OpenTracing plugins), but only through their own configuration layer on top of raw Envoy configuration, and you often have to build your own container image to get the drivers. The result is that something that should be easy is nearly impossible for all but the most dedicated users. The promise of containerization basically failed here; look hard enough and you'll see you're no better off than with nginx sitting in front of your PHP app. At least in that setup you can edit nginx.conf.

My personal recommendation is to not use Ingress at all. I use an Envoy front proxy and an xDS server that watches the Kubernetes API server to set up backends (github.com/jrockway/ekglue). Having backends added to the configuration automatically saves a lot of toil, but I still write the route table by hand so it does exactly what I want (last sketch below). It doesn't have to be this way, but it is.
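First sketch: the path rewrite, as ingress-nginx spells it today. This is a hedged sketch rather than a recommendation; the hostname and service names are invented, and the two annotations are proprietary to ingress-nginx, which is exactly the lock-in problem.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: api
      annotations:
        # Proprietary to ingress-nginx; Traefik, HAProxy, etc. each
        # spell this rewrite completely differently.
        nginx.ingress.kubernetes.io/use-regex: "true"
        nginx.ingress.kubernetes.io/rewrite-target: /$1
    spec:
      ingressClassName: nginx
      rules:
      - host: example.com
        http:
          paths:
          - path: /api/v1/(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: api-server-v1
                port:
                  number: 80

Note that pathType has to be ImplementationSpecific, which is the spec admitting out loud that it has no opinion here.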
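Second sketch: the route-composition problem in miniature. Two hypothetical teams claim the same path on the same host, and nothing in the Ingress spec says who wins.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: team-a
    spec:
      rules:
      - host: example.com
        http:
          paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: service-a
                port:
                  number: 80
    ---
    # Same host, same path, different backend. Which one serves
    # traffic is controller-defined: some compare creation
    # timestamps, others are effectively arbitrary.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: team-b
    spec:
      rules:
      - host: example.com
        http:
          paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: service-b
                port:
                  number: 80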
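Third sketch: about the best you can do for ingress-level auth today, and it's controller-specific again. This assumes ingress-nginx's external-auth annotations pointed at an oauth2-proxy deployment; all hostnames are invented.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app
      annotations:
        # nginx-only: every request is first checked against auth-url;
        # a 2xx response passes it through, anything else redirects
        # the browser to auth-signin.
        nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
        nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$escaped_request_uri"
    spec:
      ingressClassName: nginx
      rules:
      - host: app.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80

And you still have to deploy and configure oauth2-proxy yourself, per cluster if not per app, which is exactly the toil the platform should have absorbed.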
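Fourth sketch: the Zipkin tracing support in ingress-nginx is a few keys in the controller's global ConfigMap. The exact keys have shifted across versions (newer releases moved to OpenTelemetry), so treat this as illustrative.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    data:
      # Global to the whole controller, not per-Ingress, and the
      # tracer module has to be present in the controller image.
      enable-opentracing: "true"
      zipkin-collector-host: zipkin.observability.svc
      zipkin-service-name: ingress-nginx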
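Last sketch: what the hand-written part looks like with the Envoy front-proxy approach. ekglue populates the clusters from Kubernetes Services; the cluster name format below is my assumption, not ekglue's documented convention, so check its README rather than trusting it.

    # Fragment of an Envoy RouteConfiguration. The rewrite that
    # Ingress can't express portably is two first-class fields here.
    virtual_hosts:
    - name: example
      domains: ["example.com"]
      routes:
      - match:
          safe_regex:
            regex: "/api/v1/(.*)"
        route:
          # Cluster created by ekglue from the Service; the name
          # format here is a guess.
          cluster: "default:api-server-v1:http"
          regex_rewrite:
            pattern:
              regex: "/api/v1/(.*)"
            substitution: "/\\1"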
So many people are turned off of Kubernetes because the first thing they have to do is find an Ingress controller. In the best case, they decide they don't need one. In the worst case, they end up locked into a proprietary hell. It makes me very sad.