
Ask HN: Docker, Kubernetes, Openshift, etc – how do you deploy your products?

135 points by BloodKnight9923, over 8 years ago
I use Docker extensively with Python-backed Ansible scripts to manage my product deployments (with a Jenkins CI/CD pipeline). That has been a lot of fun, but I have also played with both Kubernetes and Openshift.

I love what Openshift Origin can do, but the learning curve is like a brick wall (see Dwarf Fortress "Fun" for an example) and the costs are far from minimal.

Kubernetes is easier to learn, but comes with its own gotchas.

What do you do to maintain stable deployments that allow for easy CI/CD? How do you minimize costs with your solution?

54 comments

zalmoxes, over 8 years ago
I recently (past 6 months) joined a new startup as the operations person, and we standardized on Kubernetes for deployment. In the past I've worked with Puppet/Chef/Ansible/Heroku/AWS/App Engine/VMware, you name it, and Kubernetes is the nicest and most flexible platform to build on top of.

There's a learning curve, and new features are still being added, but at this point I would not hesitate to recommend Kubernetes to just about anyone.

CI: We standardized on CircleCI and it gets the job done, but it has some serious shortcomings. I've also come close to building my own on top of the k8s cluster; it's not the correct time investment for me right now, but I'd consider building my own in the future. I've yet to find a CI framework I really like.
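A CircleCI-to-Kubernetes deploy like the one described above is often a short deployment section that builds an image and rolls the cluster onto it. A minimal sketch in the CircleCI 1.0-style config of the time; the image and deployment names are hypothetical, not from the comment:

    deployment:
      production:
        branch: master
        commands:
          - docker build -t myorg/app:$CIRCLE_SHA1 .
          - docker push myorg/app:$CIRCLE_SHA1
          # roll the k8s deployment onto the freshly pushed image
          - kubectl set image deployment/app app=myorg/app:$CIRCLE_SHA1
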
eicnix, over 8 years ago
Openshift is essentially Kubernetes + Red Hat extensions + Red Hat support.

I use GitLab CI and helm[1] for deploying. The last step of the CI process checks out the helm chart, which is just another git repo, and executes a helm install/upgrade CHART-NAME. Making things accessible is done through Kubernetes ingress with nginx[2] (which includes getting Let's Encrypt certificates automatically for all external endpoints), so when I want to deploy a new staging version of the app I can do helm install app --set host=my-stage.domain.com.

There are still a few gotchas, like pods not updating when a configmap changes, which matters because I keep the container configuration as configmaps. A crude workaround for this is [3], which triggers a configuration reload of the application running inside the container.

This solution has no licensing cost, unlike Openshift (Tectonic[4] is another enterprise Kubernetes distribution, free for up to 10 nodes), and the cost is the amount of time it takes to set up. But once you have got into helm and more complex Kubernetes deployments, it should be easy.

[1] https://github.com/kubernetes/helm

[2] https://github.com/jetstack/kube-lego

[3] https://github.com/jimmidyson/configmap-reload

[4] https://coreos.com/tectonic/

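The last CI step described above might look roughly like this as a GitLab CI job. A minimal sketch, assuming the chart lives in a separate repo and a Helm version where upgrade --install is available; the repo URL, chart path, and release name are hypothetical:

    deploy_staging:
      stage: deploy
      script:
        # the chart is "just another git repo", as described above
        - git clone https://gitlab.example.com/charts/app-chart.git
        - helm upgrade --install app-staging ./app-chart --set host=my-stage.domain.com
      only:
        - master
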
riceo100, over 8 years ago
I used to use Marathon on Mesos for deploying Docker containers, orchestrated via a hacked-together Jenkins cluster, which worked well but took a lot of configuration and was somewhat brittle.

I moved to Kubernetes about 6 months ago and have been really enjoying it. My first production cluster was hand-rolled on AWS, where I found the cloud-provider load balancer integrations extremely helpful (https://kubernetes.io/docs/user-guide/load-balancer/).

I'm now using Google Container Engine, which is effectively just a hosted Kubernetes cluster on GCP and has been a zero-effort setup, and have been deploying to it with Wercker (http://www.wercker.com). [Disclaimer: I have worked at Wercker for the last few months, but was a fan/user for many years before joining.]

One thing I noticed across Openshift, Mesos, and Kubernetes: none of them handle the Docker daemon on a node hanging particularly well, which in my experience happens fairly often.

jhspaybar, over 8 years ago
I use Convox (http://www.convox.com). It is backed by ECS, which gets me out of the infrastructure game for the most part, and the CLI interactions in Convox are similar to Heroku-style commands, so the learning curve is much gentler than deploying and learning my own Kubernetes, OpenStack, or ECS configuration. They've also thought of the other things you need, like environment-based secrets (using DynamoDB and KMS behind the scenes), as well as external load balancing, TLS, RDS integrations, and more, each behind single simple commands.

They also have CI/CD out of the box, and builds can be triggered in your existing cluster with a 'convox build' or triggered on pushes to your private GitHub repos.

Overall, unless you have a team that actually sees benefit in managing your own container and cluster manager (you'd better be big), I'd recommend embracing Convox or something like it. The complexity exposed by Kubernetes, OpenStack, or ECS is still significant.

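For a sense of the Heroku-style workflow mentioned above, a minimal sketch of the Convox CLI as it existed around this time; the app name and environment variable are hypothetical:

    convox apps create myapp
    convox env set DATABASE_URL=postgres://... --app myapp
    convox deploy --app myapp    # build and release from the app's manifest
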
ams6110, over 8 years ago
I deploy on bare metal. Docker, Kubernetes, et al. add layers of complexity that I don't need. I'm not saying they don't have benefits at a certain scale, but for the types of single-server deployments I do, I have not been convinced.

webo, over 8 years ago
Our team is <15 engineers. The setup is roughly as below; we have around 40 services. Ping me if you want to talk more.

https://cloudcraft.co/view/5582ddd4-c6f8-4354-8f5b-9fb0a374412a?key=NkuLpYphuk30fWbXYgIWwQ

* Development: Docker + docker-compose. Ideally, we would like to get rid of docker-compose for development.

* CI: Travis (planning to switch to something that is more on the CD side).

* Infrastructure management: Terraform.

* Prod: AWS, CoreOS, Kubernetes; 1 master node and 5-6 worker nodes (m4.large) in an autoscaling group.

Infrastructure deployments and updates are done by Terraform, with blue/green deployments thanks to the autoscaling group. Kubernetes deployments and updates are done by kubectl (see the sketch below).

There are still problems with each piece, but for the most part they work great without much trouble.

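The kubectl-driven step referenced above is typically just an apply plus a rollout check. A minimal sketch; the manifest path and deployment name are hypothetical:

    kubectl apply -f k8s/app-deployment.yaml     # push the updated spec
    kubectl rollout status deployment/app        # block until the rollout succeeds
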
ownagefool, over 8 years ago
We mainly use Drone and have built a templating tool that wraps around Kubernetes deployments to give us feedback on whether or not they were successful.

Example kube-deploy files: https://github.com/UKHomeOffice/kube-piwik

Example app / drone files: https://github.com/UKHomeOffice/docker-piwik

Platform documentation: https://github.com/UKHomeOffice/hosting-platform

KD, our deployment tool: https://github.com/UKHomeOffice/kd

I can't really comment on whether or not this specific pipeline actually works, as I've just picked a random open-source example, but the workflow is there.

We also have a legacy tool and use Jenkins sometimes, but mostly that won't be open-sourced.

Legacy deployment tool (don't use this): https://github.com/UKHomeOffice/kb8or

jcahill84, over 8 years ago
At Schezzle (https://schezzle.com) we use Docker Swarm on AWS.

The build jobs create images that are published to ECS repositories, and there are auto-scaling groups that add and remove engine hosts to and from ALB target groups for each deployed service. It makes service discovery, scaling, etc. really easy.

Definitely try Swarm out if you haven't already. 1.12 was good; 1.13 is amazing (secrets, health-based VIP membership, etc.).

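The 1.13 secrets feature mentioned above works roughly like this. A minimal sketch, assuming a swarm is already initialized; the service and secret names are hypothetical:

    # store the secret in the swarm's encrypted raft store
    echo "s3cr3t" | docker secret create db_password -
    # the secret is mounted at /run/secrets/db_password inside each task
    docker service create --name api --replicas 3 \
      --secret db_password \
      --publish 8080:8080 myorg/api:latest
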
Svenstaro, over 8 years ago
I'm currently on an all-Docker pipeline, but I resent it. It's slow and tedious, and everybody is trying to use Docker against its design (everybody tries to make images with as few layers as possible; I think Docker should just do away with layers altogether). It also makes it harder than it should be to make an image that works for both local development and deployment at the same time. Also, docker-compose is riddled with fairly old but important bugs (for instance, .dockerignore files are ignored by docker-compose's build).

I'd much prefer doing simple bare-metal deployments again.

MexicanMonkey, over 8 years ago
https://cloud.docker.com/

Surprised it wasn't already in the long list of suggestions. I have been using the tool since it was called Tutum; the people behind Docker bought Tutum and renamed it Docker Cloud. It's currently set up to redeploy services when I push an image to my repositories. Really loving the simplicity, even though it has some quirks.

You can now link your Bitbucket or GitHub repositories, let it build your containers, and deploy them to production. This way you can build an easy CI/CD pipeline.

logn, over 8 years ago
http://rancher.com/ works with minimal fuss. The tools in this space are so much in flux that I just care about something working easily and reliably in the short/medium term.

oelmekki, over 8 years ago
I use Dokku for deployment and on-premise GitLab for CI.

Dokku's main advantage is that it's a no-brainer: if you're used to deploying Heroku apps, it's very similar. It also automates the creation of data containers for database services, for example. On top of that, while I can use Heroku's buildpacks for small side projects, I can also take full control of the build using a Dockerfile (which is what I do for bigger projects). The main drawback is that it can't manage multi-host container deployments the way docker swarm or Kubernetes can (I don't need that, so no need to compromise on simplicity).

GitLab's pipelines offer both CI and CD, with a lot of cool features around them, like being able to tell on a commit page when it has been deployed to production, at no configuration cost.

Regarding costs: well, it's the cost of a dedicated server.

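The Heroku-like flow described above comes down to a couple of commands. A minimal sketch, assuming the official postgres plugin; the host, app, and service names are hypothetical:

    # on the laptop: deploy by pushing, exactly like Heroku
    git remote add dokku dokku@dokku.example.com:myapp
    git push dokku master

    # on the host: create and link a database service for the app
    dokku postgres:create myapp-db
    dokku postgres:link myapp-db myapp
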
zie, over 8 years ago
We use Nomad[0]; we pretty much use HashiCorp's entire stack (Consul, Vault, and Nomad). Vault has been fabulous for secrets, authentication, etc., Consul for service discovery, and Nomad for job running/deployment. We have a mix of static binaries that we run and Docker containers; most of our new stuff is all Docker containers. We use Jenkins as our CI/CD; it just runs Nomad jobs and confirms their successful deployment.

Cost management is easy: all the projects are open source, and we can spin Nomad up against any cloud provider or internal machine hosts, depending on what's cheapest at the time. It's pretty easy to wrap your head around Nomad and make it do what you need.

0: https://www.nomadproject.io/

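The Jenkins step described above presumably boils down to something like this. A minimal sketch; the job file and job name are hypothetical:

    nomad run my-service.nomad    # submit the job to the cluster
    nomad status my-service       # inspect allocations to confirm the deployment
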
timeu, over 8 years ago
We are evaluating Openshift Origin on an existing OpenStack on-premise cloud. So far I have been playing around with the oc cluster up deployment on a local workstation and it works fine, but I haven't played around with the CI/CD options (they support Jenkins deployments, etc.). From the docs I see that there is a bit of complexity regarding the security constraints and the integration of volumes that I need to wrap my head around.

I also attended DevConf.cz and saw a lot of presentations about Openshift. They have most of the talks on YouTube (https://www.youtube.com/channel/UCmYAQDZIQGm_kPvemBc_qwg) in case somebody is interested.

trolla, over 8 years ago
I can recommend Rancher. I've used Openshift, Kubernetes, and Rancher; so far, Rancher has been the best experience.

http://rancher.com/

backordr, over 8 years ago
At the company I work at, we use Docker with Kubernetes. The deployment process involves Ansible and Jenkins CI.

Personally, I prefer bare-metal deploys with automated scripts. I usually just spin up a VM and write a bash script to "prep" it the way I want. After that, I just run "./deploy" and it pushes where I want. I like this because I feel like I have more control, and it actually feels easier. Plus, I've run into weird issues with Docker that take so long to debug that they completely cancel out the benefit of using it for me.

The bash script I have works for every side project I create, and is simply copied from project to project. :)

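A ./deploy script of that sort is usually a few lines of rsync and ssh. A minimal sketch; the host list, paths, and service name are hypothetical:

    #!/usr/bin/env bash
    set -euo pipefail
    # copy the build artifacts to each host, then restart the service
    for host in app1.example.com app2.example.com; do
      rsync -az --delete ./build/ "deploy@${host}:/srv/myapp/"
      ssh "deploy@${host}" 'sudo systemctl restart myapp'
    done
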
ksri, over 8 years ago
We use AWS Elastic Beanstalk. It's simple to set up a high-availability environment, and if needed, you can always access the underlying EC2 instances or the Elastic Load Balancer.

Jenkins has a plugin that integrates with Elastic Beanstalk, which makes CI/CD straightforward.

There's no extra cost for Elastic Beanstalk, other than what you'd pay for EC2, S3, and the Elastic Load Balancer.

We have a starter template with a bunch of .ebextensions scripts that simplify common installation tasks.

If your application is a run-of-the-mill web app speaking to a database, Elastic Beanstalk is pretty much all you need.

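.ebextensions scripts like the ones mentioned are YAML config files dropped into the app bundle. A minimal sketch; the package list and command are hypothetical:

    # .ebextensions/01-setup.config
    packages:
      yum:
        git: []
    container_commands:
      01_migrate:
        command: "bin/migrate"
        leader_only: true    # run on only one instance per deployment
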
aruggirello, over 8 years ago
Nobody has mentioned it, but I'm using Vagrant and the digital_ocean plugin to manage local VMs and droplets for my small projects. It's a simple, quick, and convenient way to bring up fully replicable apps/services. I'm using small scripts to provision my machines with Caddy, PHP 7, MySQL, and a few other goodies. Given the available droplet sizes, I'm not hard-pressed to scale beyond a single machine per app/service, and this keeps everything simple; otherwise I'd probably go with Kubernetes.

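A minimal sketch of that workflow, assuming a Vagrantfile already configured with a DigitalOcean API token and droplet size:

    vagrant plugin install vagrant-digitalocean
    vagrant up --provider=digital_ocean    # create and provision the droplet
    vagrant provision                      # re-run the provisioning scripts later
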
thinkdevcode, over 8 years ago
We use Rancher with Cattle and do CI/CD via our self-hosted GitLab CI. Pretty easy to set up and maintain. Would definitely recommend taking a look at Rancher if you haven't yet.

spudfkc, over 8 years ago
At my last job, we started off using Mesos and Marathon, but eventually ended up dropping that in favor of a homemade solution using SaltStack (the manager demanded we drop Mesos/Marathon and use Salt; it was pretty shitty).

At my current place, we are using TeamCity to run tests and build images, and Rancher for the orchestration part. I built a simple tool to handle auto-deployments to our different environments.

I cannot recommend Rancher enough. Especially for small teams, it's just a breeze to set up and use.

rickr, over 8 years ago
I've been working on creating a platform for a non-profit that gets veterans coding (http://operationcode.org/). We're a Slack-based community and have been rolling out some homegrown Slack bots, and we currently have a Rails app hosted on Heroku. Managing and keeping track of the different apps was getting unwieldy, so in an effort to consolidate our apps and reduce costs I evaluated a few different options. I ended up going with Rancher, and after working with it a bit I'm pretty happy.

I have GitHub hooked up to Travis. When a new PR (or commit) is pushed, Travis shoves the app into its container and runs the test suite inside the container.

If that passes AND the branch is master, we push the image to Docker Hub. As of now we manually update the app inside of Rancher, but I think automating that will be a simple API call. Once we get more stable I'll be investigating that.

I still haven't quite figured out secret management, but outside of that and a tiny learning curve it's been pretty smooth sailing.

An example Travis config: https://github.com/OperationCode/operationcode_bot/blob/master/.travis.yml

prgk, over 8 years ago
Since most of the solutions mentioned here are container-based, I will offer something different.

We started using Juju[1]. Basically, Juju handles bootstrapping/creating the instances you need in the public clouds, and you can use Juju charms to specify how to deploy your services. So our deployment looks like this:

    juju bootstrap google   # Get instance in GCE
    juju deploy my-app      # My app is deployed to GCE

You can actually try this with already publicly available apps. For example, you can deploy Wikimedia[2] by just doing:

    juju deploy wiki-simple

This will install Wikimedia and MySQL and create the relationship needed between Wikimedia and the database.

In our case, we have production and development environments. Both are actually running in clouds in different regions:

    juju bootstrap google/us-east1-a production
    juju deploy my-app
    juju bootstrap google/europe-west1-c development
    juju deploy my-app

In addition to running in a different region, the development environment tracks any changes to the development branch in our GitHub repo.

We don't use any containers. Juju allows us to deploy our services to any cloud (AWS, GCE, Azure, MAAS, ...), including locally using LXD.

[1] https://www.ubuntu.com/cloud/juju

[2] https://jujucharms.com/wiki-simple

throwawaytoday1, over 8 years ago
We use Dokku (https://github.com/dokku/dokku) in production, using its tag:deploy feature to manage all of our containers (apps, in Dokku terms). We've automated it fully enough that we no longer interact directly with the individual instances. Pushes to master kick off builds that create Docker Hub images, then a deployment is triggered on the production machines.

nawitus, over 8 years ago
We use Kontena at the moment.

https://kontena.io/

errordeveloper, over 8 years ago
At Weaveworks, we have built a tool called Flux [1]. It is able to relate manifests in a git repo to images in a container registry. It has a CLI client (for use in CI scripts or from a developer's workstation); it also has an API server and an in-cluster component, as well as a GUI (part of Weave Cloud [2]).

Flux is OSS [3], and we use it to deploy our commercial product, Weave Cloud, itself, which runs on Kubernetes.

1: https://www.weave.works/continuous-delivery-weave-flux

2: https://cloud.weave.works

3: https://github.com/weaveworks/flux

rdli, over 8 years ago
We have a relatively simple cloud app: a couple of (micro)services, but we also use Postgres and Elasticsearch. We started using Docker + Spinnaker + k8s, but then we ran into the problem of setting up the app for local dev (where we wanted to use local PG) and prod (where we wanted to use RDS).

<plug>We've been working a bit on an open-source tool, pib, that supports setting up multiple environments, because we ran into this problem (behind the scenes it uses Terraform, k8s, and minikube). Would love to hear if anyone here has seen anything similar or has thoughts! https://github.com/datawire/pib</plug>

morgante, over 8 years ago
I've spent a fair amount of time evaluating different solutions through my startup[0] and have found Kubernetes, by far, to come with the least pain. It's not hard to get started with, but it also works well as you grow and mature. It makes most of the decisions right from the start, and kubectl gives you most of the functionality you need to manage deployments easily.

Also, while I have a vested interest in saying this, you don't always want to solve this yourself. Look at hosted solutions like GCP and CircleCI to make things even more painless.

[0] http://getgandalf.com/

jkemp, over 8 years ago
We selected Kubernetes on AWS, but there are a lot of details in going from source code all the way through to automated k8s deployments. We are currently using our own framework (https://github.com/closeio/devops/tree/master/scripts/k8s-cicd), but I'm keeping an eye on helm charts to see if it makes sense to incorporate them at some point. Pykube (https://github.com/kelproject/pykube) has made it easy to automate the k8s deployment details. We needed a process that would take Python code from our GitHub repos, build and test on CircleCI, and then deploy to our k8s clusters.

A single commit to our master branch on GitHub can result in multiple service accounts, config maps, services, deployments, etc. being created or updated. Making all of that work is complicated enough, but then we also need to deal with things like canary deployments and letting us build and deploy to k8s from our local workstations. And then there are details like automatically deleting old images from ECR so your CI/CD process doesn't fill it up without you knowing. Integrating CI/CD processes with Kubernetes is kind of new, so a lot of different projects and services are starting to address this area.

luckystartup, over 8 years ago
I've worked with a lot of tools. I've decided that I like things that are simple and don't cost much money to get started. For new projects I always start with Heroku, or Parse (on a free Back4App plan now).

I love Ansible. Chef is alright. I've been using AWS OpsWorks recently, and it's not bad. Elastic Beanstalk is OK, too.

I've spun up some Kubernetes clusters, and it's nice, although I have no need for it yet. I remember the database situation was difficult when I was trying it last year: something about persistent storage being difficult, so you had to run Postgres on a separate server.

I still like Capistrano. You can automate it with any CI pipeline. For one client, I used the "elbas" [1] gem for autoscaling on AWS; it automatically created new AMIs after deployment. Not super elegant, but it worked fine.

I don't see much of a middle ground between Heroku and Kubernetes. Just start with the one free dyno. Maybe ramp it up to 3 or 4 with hirefire.io. Once you're spending a few hundred per month on Heroku, that's probably the time to spin up a small Kubernetes cluster and deploy stuff in containers.

[1] https://github.com/lserman/capistrano-elbas

marcc, over 8 years ago
This is a great question and something we've been trying to figure out ourselves. Historically, we were using Ansible to deploy Docker containers to EC2 instances, but we have moved some services over to Kubernetes, Swarm, and Lambda/Serverless. All of these create the same deployment challenges; the current products out there don't fit perfectly. The more we want to deploy at a higher level than "just Docker", the less Ansible provides today. But we wanted to stick to the core concepts of automation, continuous delivery (at least to staging), and ChatOps-style management of production.

Our current approach uses an Operable (https://operable.io) Cog we wrote, which takes the Kubernetes YAML and applies it to a running cluster. It's not perfect, but I'm pretty happy with the direction it's going. We built this Cog in a public repo (https://github.com/retracedhq/k8s-cog), so you are welcome to use any of it if it's useful. Then we have our CI service send a message (using SQS) after a build is done to deploy to staging.

kt9, over 8 years ago
For Kubernetes, check out https://www.distelli.com

It's a SaaS (and enterprise) platform for automated pipelines and deployments to Kubernetes clusters anywhere.

Previous discussion: https://news.ycombinator.com/item?id=13160218

Disclaimer: I'm the founder at Distelli.

peu4000, over 8 years ago
I work at an established company, and most of our apps are still deployed with RPM and Puppet.

For our dockerized services we use Nomad internally, and for a different product we've built in AWS we're using Elastic Beanstalk, with all of the resources defined in Terraform.

We use Jenkins to manage the CI/CD for each method.

bert2002, over 8 years ago
https://mesosphere.com/

hiphipjorge, over 8 years ago
We currently use Docker for all our services in AWS, and we deploy them with Ansible scripts. Services with a single container are fairly straightforward, but for services with multiple containers running, we use the DR CoN pattern, which works fairly well. Our Ansible scripts handle everything from deploying the container, to deploying Registrator, to updating the nginx templates, so it's fairly automated.

For CI, we use our own product (Runnable [0]), which lets us test our branches with their own full-stack environments, which is great for solid integration tests. We often use it for e2e too. We're planning on adding more CD features in the near future, though.

[0] http://runnable.com

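The Ansible-driven part might look roughly like the tasks below. A minimal sketch using Ansible's docker_container and template modules; the names, image, port, and template paths are hypothetical, and "reload nginx" assumes a handler defined elsewhere:

    - name: deploy the service container
      docker_container:
        name: api
        image: "myorg/api:{{ version }}"
        state: started
        restart_policy: always
        published_ports:
          - "8080:8080"

    - name: update the nginx template
      template:
        src: nginx.conf.j2
        dest: /etc/nginx/conf.d/api.conf
      notify: reload nginx
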
olalonde, over 8 years ago
We use Deis (https://deis.com/workflow/), which is a sort of Heroku on top of Kubernetes. For CI, we use CircleCI and automatically deploy when tests pass on the master branch.

sandGorgon, over 8 years ago
I have been working with the Kubernetes teams on Slack. Kubernetes is definitely building a lot of the right things from the ground up, but it's like HBase vs Cassandra: the former needs a full-time dedicated team to get stuff working.

Docker Swarm (especially 1.13: https://www.infoq.com/news/2017/01/docker-1.13) is like Cassandra for me. Yes, it has a few shortcomings, but it lets you stand up a fairly reasonable cluster very quickly, using a stupid compose.yml file.

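For a sense of how small that compose file can be, a minimal sketch (v3 syntax, Docker 1.13+; service and image names are hypothetical), deployed with docker stack deploy -c compose.yml mystack:

    version: "3"
    services:
      web:
        image: myorg/web:latest
        ports:
          - "80:8080"
        deploy:
          replicas: 3
          update_config:
            parallelism: 1    # rolling update, one task at a time
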
backmail, over 8 years ago
I use Cloud 66 for my side project, https://backmail.io. All the components are dockerized and deployed/managed through a Cloud 66 stack. For smaller projects/teams, Cloud 66 provides an easy way to get everything working, with single-click SSL, easy scaling, and both vertical and horizontal scaling using either cloud VMs or your own dedicated machines. It also supports a CI pipeline to build Docker images, though I use my own Jenkins setup for that.

EngineerBetter, over 8 years ago
We work with folks (very large banks, automotive companies, governments, manufacturers, retailers) who use Cloud Foundry, often combined with Concourse, to deploy both apps and the platform itself.

It's surprising how many people want to build a homebrew Kubes PaaS. When I first started working in development, every company was building its own CMS, until each invariably realised that it was hard and that they were better off using a commercial or open-source solution. It seems container-based platforms are history repeating itself.

suhith, over 8 years ago
I've been using Docker, and I love it. I hope to weigh the pros and cons of Swarm and Kubernetes and try those out too, but for most of my applications networked Docker containers are sufficient.

jordz, over 8 years ago
We're heavily invested in Azure and their ARM system (Azure Resource Manager). Our entire infrastructure is code, as ARM templates, which we deploy to dev / test / production, so there are no discrepancies between environments. Our entire application is then deployed on top. Everything is done through VSTS (Visual Studio Team Services). We're very happy with it; it's very flexible, and we have a very stable platform because of it.

old-gregg, over 8 years ago
We do this for a living: http://gravitational.com/managed-kubernetes/

This is Kubernetes, plus monitoring of your choice, running on your infrastructure, remotely managed by our team. The side benefit is that the same setup works on different infrastructure options, so you can deploy and run the same stack on AWS and also on-premise/bare metal.

energybar, over 8 years ago
Has anyone tried Docker Swarm or Docker Datacenter? We've been looking at it, but we're on the fence vs. Kubernetes...

ryanbertrand, over 8 years ago
I have been using Convox to deploy our Docker containers. It has been great for the past year and is improving daily.

usgroup, over 8 years ago
I really liked fleetd, so it's sad that it's been wrapped up. It felt Unixy and was small enough to understand. Now I'm looking toward serverless and total abstraction of the infrastructure. I see the space in between, filled by Mesos, Kubernetes, and others, as a bit ephemeral.

falcolas, over 8 years ago
A custom wrapper around Amazon ECS. We need more fine-grained control over the instances, to support encryption, secret injection, log aggregation, and so forth, than other frameworks provide.

> How do you minimize costs with your solution?

Autoscaling groups triggered off of "cluster capacity".

deepnotderp, over 8 years ago
Docker and nvidia-docker, since it allows PCIe passthrough for novideo GPUs.

hosh, over 8 years ago
I'm working for a startup right now. We're using Kubernetes via GKE on Google Cloud.

Back in 2015, I implemented Kubernetes by hand on AWS. I'm not going to do something like that again. GKE is fairly painless, and it has most of the sensible defaults that I want. Networking just works: pods can talk to each other, as well as to any VM instances, from any availability zone and region. Integrating with GCP service accounts just works. Spinning up experimental clusters is easy, as is horizontally scaling the clusters. One gotcha is that Google has not made K8s 1.5 generally available in all regions or availability zones. Otherwise, upgrades are pretty easy.

I have deployed with Docker Compose (not doing that again; it is easier to use shell scripts). I have deployed with the AWS ECS service (not doing that again; it does not have the concept of pods, which severely constrains how you deploy). I used to deploy with Chef. I've heard of Chef's Habitat, but have not played with it.

For the 2015 project, I wrote Matsuri as a framework to manage the different Kubernetes templates. It's useful if you know Ruby. It uses idiomatic Ruby to generate and manage K8s specifications and run kubectl commands. I wanted a single tool that could work with all the different environments (production, staging, etc.) as well as manage the dev environment. For example, if I want to diff my version-controlled spec on dev with what the Kubernetes master currently has, I would use `bin/dev diff pod myapp`. If I want to diff the deployment resource by the same name, I would use `bin/production diff deployment myapp`. I can write hooks specific to the app. For example, `bin/production console mongodb` uses hooks to query Kubernetes to find a pod to attach to, determine the current MongoDB master, and invoke the command to go directly into the MongoDB shell. But I could have invoked `bin/staging console mongodb` or `bin/dev console mongodb`. I could do this because I have been developing software for a long time and I have enough ops experience to put it all together. YMMV.

We're using Go.cd for the CD. I could have used Jenkins, but decided to give Go.cd a try. Go.cd has some advantages (such as much better topologies and tracking of value streams), though there are also things it does not do as well as Jenkins (Go.cd's auth mechanisms blow, and I had to write my own custom proxy to get GitHub hooks working more securely and reliably). Setting up GCP service accounts so that Go.cd agents can deploy was a lot easier than I thought, once I read through the GCP docs. (Much easier than AWS.)

Docker containers are still difficult to make well. You want to vet things before using them. Handling this stuff is still going to be a full-time job for someone, both in terms of designing the infrastructure and the development tools. A lot of issues come up because dev might throw things over the wall that impact the overall reliability and performance of the system.

phillmv, over 8 years ago
A related question: how often are the people here scaling their applications up and down?

Do you have large workload spikes or traffic spikes?

jacques_chester, over 8 years ago
At Pivotal we use BOSH[0] almost exclusively for deploying distributed systems. The motivating use case was Cloud Foundry[1], but it can be used for pretty much anything. Our founding role in both of these is why BOSH is our first choice for such occasions.

It has a plugin model (CPIs) for hosting substrates, so right now it can deploy and upgrade systems on AWS, GCP, Azure, vSphere, and OpenStack, and there are others I forget right now.

It has proved itself in large production systems for years. Every week or two we entirely upgrade our public Cloud Foundry, PWS, and nobody ever notices.

OK, that's a lie. You get an email from CloudOps: "We're going to deploy v251". Then a few hours later: "v251 is deployed". Or occasionally: "Canaries failed, v251 was rolled back".

There's nice integration with Concourse[2,3]. You simply "put" your deployment and it just gets deployed for you. Our CloudOps team does this now, which makes their lives that much easier.

Versioning is trivial, especially if you're working in a commit-deploy model via Concourse.

The downside is that BOSH is BOSH. We're doing lots of work to make it friendlier and more approachable, but right now it's powerful and very opinionated. It does not have a smooth onramp, because the basis of its power and reliability is that it insists on certain minimum conditions first.

It's really meant for operators, not developers, but at Pivotal the main consumers by volume are developers, usually deploying Cloud Foundry and Concourse; my current assignment is actually going to be shipped purely as a BOSH release.

Disclosure: I work for Pivotal on Cloud Foundry.

[0] http://bosh.io/

[1] https://docs.cloudfoundry.org/deploying/common/deploy.html

[2] http://concourse.ci/

[3] https://github.com/concourse/bosh-deployment-resource

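The "put your deployment" step mentioned above might look roughly like the pipeline fragment below. A sketch only: the key names are assumptions about the bosh-deployment-resource's schema and should be checked against its README; the director URL, credentials, and names are hypothetical.

    resources:
      - name: staging-deployment
        type: bosh-deployment
        source:
          target: https://bosh.example.com:25555
          username: admin
          password: "..."
          deployment: my-app

    jobs:
      - name: deploy
        plan:
          - get: my-release
            trigger: true
          - put: staging-deployment   # deploys on every new version of my-release
            params:
              manifest: my-release/manifest.yml
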
AznHisoka, over 8 years ago
I just do a "cap production deploy" and it does everything for me (I use Bluepill + God for running background processes too).

I don't need Docker, and I think it's too complex. I deploy to over 50 servers, so don't tell me it's because I run a simple setup :P

zaargy, over 8 years ago
If you're on AWS, then you should be using ECS, first of all.

mohanmcgeek, over 8 years ago
Openshift is a wrapper on top of k8s.

You should just use helm.

sslalready, over 8 years ago
In a team where Node and Golang were the languages of choice, we used GitHub private repos for code, TeamCity as the driver for CI/CD, and Salt to deploy the Docker images to our different environments running on AWS EC2 instances. I must say I really liked TeamCity: its different integrations with GitHub, its build processes (Node/NPM, frontend tooling, ...), and how variables could be shared down to projects and releases.

To deploy code with Salt, we had an SSH account on the Salt server configured with a bunch of deploy keys. Each of those had a forced command that would read $SSH_ORIGINAL_COMMAND and forward this information to an agent (running as root) that would execute Salt with the correct arguments, based on the information in $SSH_ORIGINAL_COMMAND. This let us use a build step in TeamCity that basically did ssh deploy@mgmt-gateway [env] [project] [version]. Deployments were logged to New Relic and Slack.

In a different team, fond of PHP, we use a private GitLab CE for code management, GitLab CI Multi-runner as the build agent for CI/CD, and Ansible for configuration management and code deploys to the different environments running on AWS EC2. As in the previous team, we have configured our .gitlab-ci.yml to pass some arguments in $SSH_ORIGINAL_COMMAND over SSH to a management node that in turn talks to Ansible (see the sketch below for the forced-command pattern).

Something I like about having a private GitLab CE instance is that development doesn't stop because your public Git host is DDoSed or has other problems (like the one recently discussed here on HN).

Test and staging servers are shut down/destroyed off-hours and restarted/recreated by cron jobs that execute Ansible plays, which identify eligible EC2 instances via EC2 tags. Production environments with multiple servers are similarly scaled down during off-hours. By simply modifying/removing the "shutdown" tag on the AWS resources, teams can exclude their test/staging environments from the scheduled shutdowns, which is useful for upcoming releases. ;)

In the Node/Golang shop I loved how simple the Docker images were and how good it felt to deploy to isolated containers. Unfortunately, I don't see how that's possible (in a clean way, preferably without using two images) when both an Nginx process (for static file serving, e.g. frontend resources) and a PHP-FPM process need access to the same code release.

(If you have experience with Nginx/PHP-FPM apps and Docker, feel free to enlighten me!)

Things I'm not entirely fond of about GitLab CI:

- Each branch in each repo must have a .gitlab-ci.yml that is up to date (an administrative challenge!)

- It's entirely driven from a git push (though the web GUI provides buttons on existing builds to retry or manually execute steps, e.g. to deploy code)

GitLab has no support for a centrally managed .gitlab-ci.yml file at a project-group and/or project level. There's no way to define variables at a project-group and/or project level. There's no way to schedule jobs so that you can execute daily/weekly tests, or to manage jobs (in a user-friendly way via the web GUI) that perform cron-like tasks, which would let you avoid putting these tasks on the servers themselves in /etc/cron.d (a problem when you restore backups / bake AMIs / do auto-scaling).

I'd love to look more into K8s and Google's cloud offerings, especially since I believe this might be the future, and because I believe Google is light-years ahead of the competition when it comes to security and protecting the privacy of its customers. Unfortunately, I'm afraid it's not viable given my team's current investment in Nginx/PHP-FPM apps and various AWS services.

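The forced-command pattern used in both setups above can be sketched roughly as follows; the dispatcher path and key comment are hypothetical:

    # In ~deploy/.ssh/authorized_keys on the management node, each CI deploy key
    # is pinned to a dispatcher that reads $SSH_ORIGINAL_COMMAND:
    command="/usr/local/bin/deploy-dispatch",no-pty,no-port-forwarding ssh-rsa AAAA... ci-key

    # The CI build step then just runs:
    ssh deploy@mgmt-gateway staging myproject 1.4.2
    # and deploy-dispatch validates "<env> <project> <version>" before handing it
    # to the privileged agent that invokes Salt or Ansible with those arguments.
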
BuuQu9hu, over 8 years ago
Matador Cloud (https://matador.cloud/) uses NixOps to manage NixOS machines.