Is it me, or does it seem weird to encrypt your secrets by uploading the secret key to GCP (contained in the config .yaml file)? I assume the controller instances are operated by Google in this[1] example.<p>Moreover, is there any sensible way at all to encrypt secrets without baking the secret key into your image? I can’t think of any.<p>I want to deploy an app that makes use of one or more fairly important secrets, but I haven’t found a sensible way to make it auto-scale while keeping the secrets on-premise.<p>As far as I can see, the only sensible solution is to create in-cloud/off-premise secret keys that can only be accessed by images signed with an on-premise secret key.<p>So,<p>1. Create secret key on an offline, on-premise machine<p>2. Produce application image, transfer to offline machine, sign with on-premise secret key<p>3. Create off-premise (in-cloud) secret, which can only be accessed by images signed with the on-premise secret key<p>4. Upload app image and signature to the cloud, allowing only this image access to the in-cloud secret<p>[1] <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/1.7.4/docs/06-data-encryption-keys.md" rel="nofollow">https://github.com/kelseyhightower/kubernetes-the-hard-way/b...</a>
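For context, the config in question looks roughly like this (paraphrasing the linked tutorial step; there the 32-byte key is generated locally with `head -c 32 /dev/urandom | base64` and the resulting file is copied to every controller, which is exactly the "key lives next to the data" situation I mean):

```yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              # the raw AES key, base64-encoded, sitting in plaintext
              # on the controller's disk
              secret: <base64-encoded 32-byte key>
      - identity: {}
```

So etcd contents are encrypted at rest, but anyone who can read the controller's filesystem can read the key too.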
I can highly recommend OpenShift and its Ansible deployment scripts. The documentation is well-written and complete.<p>It takes care of all the annoying parts of Kubernetes and even includes services like a full-featured Docker registry with ACLs, a Docker build system, and a centralized logging mechanism (all optional, of course).<p>Running it in production. Couldn't be happier.
Going through the previous version of this tutorial really helped me, even though we're doing IBM Cloud Private on-prem + Bluemix Container Service (don't ask.)<p>It works pretty well with Cloud Shell, in case you have corporate firewall issues. If your session is interrupted, run the commands that set the region and zone again.<p>I can confirm that it costs about $6/day while the machines are provisioned, and it's well worth it, but remember to run all the clean-up steps in the last chapter when you're done, or if you're not going to finish it right away.
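Concretely, that means re-running something like the following after a reconnect (a sketch; substitute whatever region/zone you picked at the start of the tutorial — `us-west1`/`us-west1-c` here are just examples):

```shell
# Cloud Shell sessions lose gcloud's per-session defaults on disconnect;
# set them again before continuing with the labs.
gcloud config set compute/region us-west1
gcloud config set compute/zone us-west1-c

# Verify the defaults took effect before running more commands.
gcloud config list compute/region compute/zone
```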
This is great. I have been looking at Kubernetes for some time and have struggled with adapting it to our deployment model. A lot of the tools and tutorials expect someone to sit and run commands to start controllers and worker nodes, but that doesn't make sense in our automated environment. What we really want is a way to bake AMIs etc. that have everything ready to go, so that a deployment or scale-out is as simple as starting an instance. This collection of labs lays a lot of that out, and I think this is something we can work with.
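One way to get there (a sketch of the idea, not anything this tutorial prescribes — the paths and metadata endpoint are assumptions for an AWS-style setup): bake the kubelet binary, CA cert, and a kubeconfig pointing at the controllers into the AMI, and let a tiny user-data script register the node at boot, so scaling out really is just launching an instance:

```shell
#!/bin/bash
# Hypothetical user-data for an instance launched from a pre-baked
# worker AMI. Nothing is downloaded or configured interactively at
# boot; everything the kubelet needs is already on the image.
set -euo pipefail

# Derive a node name from the EC2 instance metadata service.
NODE_NAME=$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)

# Start the pre-installed kubelet; it registers itself with the API
# server using the credentials baked into the image.
systemctl enable kubelet
systemctl start kubelet

echo "kubelet started on ${NODE_NAME}"
```

The same shape works for controllers; the only per-instance input is what the metadata service provides, so an auto-scaling group can launch these with no human in the loop.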
I did something similar based on CoreOS's tutorial. So while I'm missing a lot of the newer functionality, going through this was still worth it.