I've been using Kubernetes on Azure and GCE recently and it's absolutely wonderful.<p>I was able to set up an entire ecosystem from scratch in a week that scales well and can be managed in one place.<p>When I first looked at Kubernetes, the complicated part was setting it up on a cluster. If you use Kubernetes on GCE or Azure, you don't have to do that step; everything else is ready to go for you!<p>- Automatic scaling of your application<p>- Service discovery<p>- Secrets and config management<p>- Logging in one central dashboard<p>- Able to deploy various complicated, distributed pieces of software very easily using Helm (Jenkins, Kafka, Grafana + Prometheus)<p>- Able to add new nodes to the cluster easily<p>- Health checks and automatic restarts<p>- Able to deploy any container to your cluster in a really simple way (if you look at Deployments, it's really simple.)<p>- Switch between cloud providers and still maintain the same workflow.<p>I won't ever touch Ansible again; I really prefer the Kubernetes way of handling operations (it's like a live organism instead of something you apply changes to.)<p>Also, the argument that you probably don't need Kubernetes because your organization doesn't have tens or hundreds of nodes just doesn't make sense after using it.<p>Having a Kubernetes cluster with 3 nodes is 100% worth it, even for rather simple applications, in my opinion. The benefits are just way too good.
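For anyone wondering what "really simple" means for Deployments, here's roughly the smallest useful manifest (the app name, image, and port are just placeholders, not a recommendation):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # placeholder name
spec:
  replicas: 3             # Kubernetes keeps 3 copies running at all times
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.15    # placeholder image
          ports:
            - containerPort: 80
```

One `kubectl apply -f deployment.yaml` and you get three replicas, with the automatic restarts and rolling updates handled for you.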
EDIT: I should preface my little rant by saying that this post is one of the best I've seen at explaining the basic concepts of Kubernetes. But obviously I'm not an expert :)<p>> The Kubernetes API should now be available at <a href="http://localhost:8001" rel="nofollow">http://localhost:8001</a>, and the dashboard at this rather complicated URL. It used to be reachable at <a href="http://localhost:8001/ui" rel="nofollow">http://localhost:8001/ui</a>, but this was changed for what I gather are security reasons.<p>I was playing around with GCE Hosted Kubernetes about a year ago, and things were pretty clear as far as I recall. I'd read lots of positive things and figured it was a good way to start.<p>Then I tried again recently, and I couldn't even get to the dashboard. Eventually, after several cryptic StackOverflow copy&pastes, I managed to load it (I don't even remember how), only for the session to expire after 10 minutes or so... It was utterly frustrating. As a result, I never got to the more interesting part I was planning to play with.<p>People say that there's a learning curve, and I get it. And I'm not even trying to install Kubernetes on my own, just to use a hosted service. I'm also pretty switched on when it comes to security and trying new things (or I'd like to think I am), but some things feel like too much of an obstacle for me, unfortunately.
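For what it's worth, the sequence that eventually worked for me looked roughly like this (a sketch, not gospel: the long URL assumes the dashboard is deployed in kube-system, and the `admin-user` service account is something you'd have had to create yourself first):

```sh
# Start the local API proxy
kubectl proxy

# The dashboard then lives at this (admittedly complicated) URL:
#   http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

# To log in, fetch a service-account token -- assuming you created an
# 'admin-user' service account (that name is an example, not a default):
kubectl -n kube-system get secret \
  | grep admin-user \
  | awk '{print $1}' \
  | xargs -I{} kubectl -n kube-system describe secret {}
```

The token it prints goes into the "Token" field on the dashboard login screen. The short session timeout, as far as I can tell, is a dashboard setting rather than anything you did wrong.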
Scott McCloud, author of the great book Understanding Comics, also did a comic book introduction to Kubernetes:<p><a href="https://cloud.google.com/kubernetes-engine/kubernetes-comic/" rel="nofollow">https://cloud.google.com/kubernetes-engine/kubernetes-comic/</a>
I gave up on my own Kubernetes writeup a while back. I just had a lot of trouble with basic networking configuration, logging, etc.<p>I've been at one shop with a large-scale DC/OS installation. You can run a k8s scheduler on DC/OS, but by default it uses Marathon. DC/OS has its own problems for sure, and both tools require a full-time team of at least 3 people (we had 8~10), and there are a lot of things that will probably need to be customized for your shop (which labels to use, scripts to set up your ingress/egress points in AWS, HAProxy configuration or marathon-lb configuration .. which is just an HAProxy container/wrapper), but I think I still prefer Marathon.<p>I briefly played with Nomad and wish I had spent more time with it. I know people from at least one startup around where I live using it in production. It seems to be a bit more minimal and potentially more sane.<p>The thing I hate about all of these is that there is no 1-to-n scaling. For a simple project, I can't just set up one node with a minimal scheduler. DC/OS is going to cost you ~$120 a month for one non-redundant node:<p><a href="https://penguindreams.org/blog/installing-mesosphere-dcos-on-small-digital-ocean-droplets/" rel="nofollow">https://penguindreams.org/blog/installing-mesosphere-dcos-on...</a><p>I hear people talk about minikube, but that's not something you can expand from one node to 100, right? You still have to build out a real k8s cluster at some point. All of these tools are just frontends around a scheduling and container engine (typically Docker and VMs) that track which containers are running where and track networking between nodes (and you often still have to choose and configure that networking layer .. 
Weave Net, Flannel, etc.).<p>I know someone will probably mention Rancher, and I should probably look at it again, but last time I looked I felt it was all point-n-click GUI and not enough command-line flags (or at least not enough documented CLI) to really be used in an infrastructure-as-code fashion.<p>I feel like there's still a big missing piece of the Docker ecosystem: a really simple scheduler that can easily be stood up on new nodes to attach them to an existing cluster, and that has a simple way of handling public IPs for web apps/HAProxy containers. I know you can do this with k8s, DC/OS, etc., but there is a lot of prep work that has to be done first.
I really want to like Kubernetes, but going beyond the basics seems to require a much deeper understanding of systems engineering than I currently have. Yes, I know you can create container networks and stateful pods with attached storage, but the how is always seemingly beyond me. Networking and storage in distributed computing are hard, and Kubernetes seems to be only a slightly more magical bullet than Docker Swarm alone.
We use Kubernetes to spin up the application that I work on (in private-cloud and, at some point, hybrid- and public-cloud deployments). It's an end-user-installed tool. In deployment, about 1/4 of new installations fail because of some problem or another: either the NVIDIA GPU plugins weren't loaded correctly, kube-dns won't start because docker0 isn't in a “trusted” zone on Red Hat (not being in trusted seems to cause iptables to subtly screw up container-to-container communication between the various private networks), or Helm just decides that it can't start.<p>Are we doing it wrong?<p>We're using hyperkube and k8s 1.8, which came out around Q4 of last year.<p>Almost all of these I can trace back to user error (i.e. we told folks to do X, they didn't, and stuff broke). We're now having to write a preflight checklist of sorts that the app runs through to make sure a bunch of stuff is “ok.” That in itself becomes brittle in my experience, so I'm reluctant to do it.
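To make the preflight idea concrete, here's a rough sketch of the kind of checks we mean, based on our own failure modes (the firewalld zone and the kube-dns label are specific to Red Hat-ish hosts and k8s 1.8-era clusters; treat everything here as an example, not a general recipe):

```sh
#!/bin/sh
# Preflight sketch -- warns or exits non-zero when a known failure mode is present.

# Is docker0 in a firewalld zone that allows container traffic?
# (On Red Hat, docker0 outside 'trusted' broke pod-to-pod DNS for us.)
zone=$(firewall-cmd --get-zone-of-interface=docker0 2>/dev/null)
if [ "$zone" != "trusted" ]; then
  echo "WARN: docker0 is in zone '$zone', expected 'trusted'" >&2
fi

# Is kube-dns actually running?
kubectl -n kube-system get pods -l k8s-app=kube-dns | grep -q Running \
  || { echo "FAIL: kube-dns is not running" >&2; exit 1; }

# Can Helm reach its server side? (Helm 2-era check against Tiller.)
helm version --server >/dev/null 2>&1 \
  || { echo "FAIL: helm cannot reach tiller" >&2; exit 1; }
```

It's still brittle for the reason you say (every new environment adds a check), but failing fast with a named cause beat debugging iptables after the fact for us.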
We are working on a project with standard LXC containers [1] which tries to make orchestration and some of this stuff, especially networking, simpler.<p>We support provisioning servers, building overlay networks with VXLAN, BGP & WireGuard, distributed storage, and rolling out things like service discovery, load balancers and HA.<p>It may be worth exploring for those struggling with some of the complexity around container deployments. At a minimum it will help you understand more about containers, networking and orchestration.<p>[1] <a href="https://www.flockport.com" rel="nofollow">https://www.flockport.com</a>
I wish I could find a tutorial for bare metal:
- how to set up cert creation for FQDNs with Cloudflare
- storage
- ingress that supports multiple external IPs glued to different nodes (so if a service gets IP x, it gets routed through node z, which holds that external IP)
I spent 6 months trying to do that, with no luck.
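On the "IP x routed through node z" point: the closest bare-metal mechanism I know of is a Service with `externalIPs`, where kube-proxy on whichever node actually holds that address forwards the traffic to the pods. A sketch, with placeholder names and 203.0.113.10 standing in for node z's public address:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # placeholder name
spec:
  selector:
    app: my-app           # placeholder pod label
  ports:
    - port: 80
      targetPort: 8080
  externalIPs:
    - 203.0.113.10        # node z's external IP (placeholder)
```

Getting the address onto node z in the first place (and failing it over) is outside Kubernetes; on bare metal that's usually your router, keepalived, or similar.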
What I find hardest to figure out is how to properly deploy databases in Kubernetes. What kind of volumes should I use, and how do I configure them for production instead of some hello-world situation?
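Same struggle here. The pattern I've most often seen recommended is a StatefulSet with `volumeClaimTemplates`, so each replica gets its own PersistentVolumeClaim that survives pod restarts. A sketch with placeholder names; the `storageClassName` is entirely cluster-specific (it has to match a provisioner your cluster actually has):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres              # placeholder name
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard   # assumption: depends on your provisioner
        resources:
          requests:
            storage: 10Gi
```

That covers the "where does the data live" part; production sizing, backups, and replication are still on you (or an operator).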
What is the typical use case for clustered applications? What size organization really needs it? I understand that a simple nginx static site can accept thousands of simultaneous connections per host. That sounds pretty huge to me. If you were to sell Kubernetes-based solutions, who would you consider selling to? What makes Kubernetes fundamentally superior to Docker Swarm?
If you are on AWS, try ECS first; it is much simpler and has the main features you need: HA, autoscaling, version control.<p>When you mature from using Docker to high-volume production, you should ditch containers altogether; they are good for prototyping and testing, but not for production loads and production security.