This guide makes an interesting choice with regard to etcd security, which I'm not sure I'd go along with.<p>etcd stores a load of sensitive cluster information, so unauthorised access to it is a bad thing.<p>There's an assumption in the guide that you have a "secure network" and therefore don't have to worry about etcd authentication/encryption. The thing is, if you have (say) a compromised container, and that container, which has an in-cluster IP address, can see your etcd server, then it can easily dump the etcd database and get access to the information held in it.<p>Personally I'd recommend setting up a small CA for etcd and using its authentication features; there's a good guide to this on the CoreOS site: <a href="https://coreos.com/etcd/docs/latest/op-guide/security.html" rel="nofollow">https://coreos.com/etcd/docs/latest/op-guide/security.html</a>
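To give a rough idea of what that looks like in practice, here's a minimal sketch of running etcd with TLS client-cert auth and checking it with etcdctl. The names, paths, and IP are placeholders; you'd generate the certs with your small CA first (e.g. with cfssl or openssl, as the CoreOS guide walks through):

```shell
# Server side: require TLS + a client cert signed by our CA
# (all file paths and the IP below are placeholders)
etcd --name infra0 \
  --cert-file=/etc/etcd/server.pem \
  --key-file=/etc/etcd/server-key.pem \
  --client-cert-auth \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --listen-client-urls=https://10.0.0.1:2379 \
  --advertise-client-urls=https://10.0.0.1:2379

# Client side: a container without a signed client cert can no longer
# just dump the database
etcdctl --ca-file=/etc/etcd/ca.pem \
  --cert-file=/etc/etcd/client.pem \
  --key-file=/etc/etcd/client-key.pem \
  --endpoints=https://10.0.0.1:2379 cluster-health
```

With `--client-cert-auth` set, plain unauthenticated requests from anything that can merely reach the port get rejected, which is the whole point.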
The second question:<p>> Choosing a cloud provider<p>This really annoys me about Kubernetes. Essentially <i>all</i> the official documentation is about how to select a cloud and let a cloud-specific tool magically do everything for you. There's no procedure for setting up a single host for development purposes or to have a Dokku-like personal PaaS.<p>This guide is super useful because it avoids all the magic and lets you set things up properly (despite assuming you're doing it on a cloud) and potentially even do it on a single host.
Why are you doing all of this stuff manually? There are several providers that will set all of this up automatically for you. I like the Kismatic toolkit (<a href="https://github.com/apprenda/kismatic" rel="nofollow">https://github.com/apprenda/kismatic</a>), but there are a bunch of others. Sure, maybe once you go to production you'll want to install manually so that you have everything finely tuned the way you want it, but learn it by using it rather than trying to figure everything out up front.<p>Or even better, just use GKE for development / learning purposes. Just stop the cluster when you're not using it, and it'll be a lot cheaper than something you won't want to take down because you spent days installing it.
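For the GKE route, a throwaway learning cluster is a couple of commands. This is just a sketch; the cluster name, zone, and machine type are placeholders, and it assumes you have the gcloud SDK installed and authenticated:

```shell
# Spin up a small, cheap cluster for learning (all values illustrative)
gcloud container clusters create learn-k8s \
  --zone us-central1-a \
  --num-nodes 1 \
  --machine-type g1-small

# Tear it down when you're done so you stop paying for it
gcloud container clusters delete learn-k8s --zone us-central1-a
```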
Great set of resources -- I just went through the process of defining a terraform cluster in AWS over the past few weeks, though I'm leveraging the k8s Saltbase installer for the master and nodes.<p>I'm curious, why no mention of AWS as a provider for roll-your-own? Is this a cost thing?<p>Also, I get the feeling that Ubuntu is _not_ a first-class citizen of the k8s ecosystem, but perhaps my newness to the ecosystem is to blame here. The Saltbase installer, for example, only supports Debian and RHEL distros, `kops` prefers Debian, and the documentation for cluster deployments on kubernetes.io and elsewhere also seems to be somewhat suggestive of Debian and CoreOS. Perhaps that's just a mistaken interpretation on my part. I'm curious what other people's thoughts on this topic are!
I'm surprised a hobbyist K8s administrator is not choosing to use kubeadm instead.<p><a href="https://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="nofollow">https://kubernetes.io/docs/getting-started-guides/kubeadm/</a>
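For anyone who hasn't tried it, the kubeadm flow really is short. A sketch (the token and master IP below are placeholders; `kubeadm init` prints the actual join command to run on each node):

```shell
# On the master: bootstrap the control plane
kubeadm init

# On each worker node: join the cluster using the token and address
# that `kubeadm init` printed (values here are illustrative)
kubeadm join --token <token> <master-ip>:6443
```

It handles the certs and control-plane components for you, which is exactly the fiddly part a hand-rolled guide spends most of its time on.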
I found gluster-kubernetes quite simple to install. But the install instructions do assume that you're going to be giving it its own partition, which you would be doing on any sort of real production deployment anyway.
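Concretely, the "own partition" assumption shows up in the heketi topology file you feed to the deploy script, which wants raw block devices per node. A minimal sketch, with placeholder hostnames, IP, and device path:

```shell
# Describe each node and its dedicated raw device (values illustrative),
# then run the project's deploy script against it
cat > topology.json <<'EOF'
{
  "clusters": [ { "nodes": [ {
    "node": {
      "hostnames": { "manage": ["node1"], "storage": ["10.0.0.1"] },
      "zone": 1
    },
    "devices": ["/dev/sdb1"]
  } ] } ]
}
EOF

./gk-deploy -g topology.json
```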