We've been deploying Kubernetes for clients since, well, 1.0 (which wasn't that long ago) and have nothing but great things to say about it. If you want something approximating a Heroku-like experience but in your own environment (AWS, GKE, or even on-prem), k8s is a super awesome way to get there. Sure, like anything it's got some rough edges that you'll get cut on, but it improves every 3 months. :D

Big kudos to the k8s team at Goog, and all the other contributors!
> Kubernetes does not offer a clean solution for a number of problems you might face, such as stateful applications.

PetSet is a solution for stateful applications [1]. It's still in alpha, though. I heard rumours that it will enter beta in about 3 months.

1. http://kubernetes.io/docs/user-guide/petset/
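For anyone curious what that looks like in practice, here's a rough sketch of an alpha-era PetSet manifest (names, image, and sizes are placeholders; check the linked docs for the current spec, since alpha APIs move around):

    apiVersion: apps/v1alpha1   # alpha API group as of Kubernetes 1.3
    kind: PetSet
    metadata:
      name: web
    spec:
      serviceName: "web"        # headless Service giving each pet a stable DNS name
      replicas: 2
      template:
        metadata:
          labels:
            app: web
          annotations:
            pod.alpha.kubernetes.io/initialized: "true"
        spec:
          containers:
          - name: web
            image: nginx        # placeholder image
            ports:
            - containerPort: 80
            volumeMounts:
            - name: data
              mountPath: /var/data
      volumeClaimTemplates:     # one persistent volume per pet, reattached on reschedule
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi

The point is that pets get stable identities (web-0, web-1, ...) and their own volumes that follow them across rescheduling, which is the piece Deployments don't give you.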
> In general, Elastic Beanstalk works fine and has a very gentle learning curve; it didn’t take long for all teams to start using it for their projects.

This is one of the most important things I evaluate in an infrastructure. I'd be curious to hear, a few months down the line, whether the dev teams embraced the Kubernetes setup independently or kept depending on a dev-ops team to do it for them.
I've been using ecs-cli for my container deployments on AWS: https://github.com/kaihendry/count

I wonder if I should try Kubernetes. It seems a lot more complex, but the tooling looks better maintained.
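For context, the ecs-cli workflow is pleasantly thin: it reuses a docker-compose file and wraps cluster creation. Roughly this (cluster name, keypair, and instance type are placeholders; check `ecs-cli help` for your version's flags):

    # one-time setup: point the CLI at a cluster and region
    ecs-cli configure --cluster demo --region us-east-1

    # spin up container instances to back the cluster
    ecs-cli up --keypair my-key --capability-iam --size 1 --instance-type t2.micro

    # deploy the services defined in docker-compose.yml as an ECS task
    ecs-cli compose up

That's most of what I use day to day, which is why the extra surface area of Kubernetes gives me pause.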
I'd be very interested to see the code for the AWS Lambda functions mentioned—specifically the one about ephemeral development environments based on open PRs. We're building something similar at InQuicker and it'd be great to see how other people are approaching it.
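Same here. In case it helps anyone sketch their own version, the general shape, assuming a GitHub pull_request webhook delivered via an API Gateway proxy integration and a Kubernetes API reachable over HTTPS, might look like this (the endpoint, token handling, and the namespace-per-PR convention are all my assumptions, not details from the article):

    import json
    import os
    import urllib2  # Python 2.7 was the Lambda runtime of the day

    # Assumptions: cluster endpoint and a bearer token provided as env vars,
    # and a cert the runtime can verify.
    K8S_API = os.environ["K8S_API"]      # e.g. https://k8s.example.com
    K8S_TOKEN = os.environ["K8S_TOKEN"]

    def handler(event, context):
        """Entry point for a GitHub pull_request webhook (API Gateway proxy)."""
        payload = json.loads(event["body"])
        action = payload["action"]
        namespace = "pr-%d" % payload["number"]

        if action in ("opened", "reopened", "synchronize"):
            # A 409 here just means the namespace already exists.
            create_namespace(namespace)
            # ...then create the Deployments/Services for the PR's image in it...
        elif action == "closed":
            # Deleting the namespace tears down everything inside it.
            delete_namespace(namespace)

        return {"statusCode": 200, "body": "ok"}

    def create_namespace(name):
        req = urllib2.Request(
            K8S_API + "/api/v1/namespaces",
            data=json.dumps({"metadata": {"name": name}}),
            headers={"Authorization": "Bearer " + K8S_TOKEN,
                     "Content-Type": "application/json"})
        urllib2.urlopen(req)

    def delete_namespace(name):
        req = urllib2.Request(
            K8S_API + "/api/v1/namespaces/" + name,
            headers={"Authorization": "Bearer " + K8S_TOKEN})
        req.get_method = lambda: "DELETE"  # urllib2 has no native DELETE
        urllib2.urlopen(req)

The interesting part is everything elided in that first branch: building the image for the PR's SHA and templating the per-PR manifests, which is where I'd love to see their actual approach.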
> First, we created one DNS record for each service (each initially pointing to the legacy deployment in Elastic Beanstalk) and made sure that all services referenced each other via this DNS. Then, it was just a matter of changing those DNS records to point to the corresponding Kubernetes-managed load balancers.

Could anyone explain this? Does it mean the services are still accessed via public IPs, or do the Kubernetes-managed load balancers get private IPs that the individual nodes know about?
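Not the author, but on AWS a Service of type LoadBalancer makes Kubernetes provision an ELB, which gets a public DNS name by default (there's an annotation to make it internal to the VPC instead). So the migration is presumably just flipping each service's CNAME from the Beanstalk environment URL to the ELB hostname. A sketch with placeholder names:

    apiVersion: v1
    kind: Service
    metadata:
      name: billing
      # assumption: uncomment for an internal (VPC-only) ELB instead of a public one
      # annotations:
      #   service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    spec:
      type: LoadBalancer       # asks the cloud provider to provision an ELB
      selector:
        app: billing
      ports:
      - port: 80
        targetPort: 8080

After that, `kubectl get svc billing` shows the ELB hostname under EXTERNAL-IP, and that's what the DNS record points at.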
I am very close to giving "k8s" a try.

Despite a lot of work trying to figure out how to get us onto Docker for CI workloads, the ecosystem remains very confusing: docker-cloud vs. docker-compose vs. docker-machine (on Linux vs. OS X), etc.
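FWIW, the split that finally made it click for me: docker-machine provisions a VM to run the Docker daemon (mainly needed on OS X, where Docker can't run natively), docker-compose describes a multi-container app, and Docker Cloud is the hosted orchestration service. A minimal compose file with placeholder services:

    # docker-compose.yml: two services, nothing fancy
    version: "2"
    services:
      web:
        build: .
        ports:
          - "8000:8000"
        depends_on:
          - db
      db:
        image: postgres:9.5

And the OS X dance that Linux users get to skip:

    # On OS X: create a VM to host the Docker daemon, then point the client at it
    docker-machine create --driver virtualbox dev
    eval "$(docker-machine env dev)"

    # On Linux the two lines above aren't needed; either way:
    docker-compose up

Kubernetes replaces the orchestration layer, not the compose/machine tooling confusion, so be prepared for it to add concepts rather than remove them.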