
Ask HN: How to do rolling deployments without Kubernetes?

18 points by jsdevtom almost 3 years ago
Every month, there seems to be a new post stating ~"don't use Kubernetes yet", but what should we use instead if we cannot have downtime?

12 comments

capableweb almost 3 years ago
Instead of talking about specific technologies/software/services, I'll give a quick rundown of how you can achieve this in theory, and hopefully it can apply to whatever you're currently using.

The requirements:

- Your application should be able to run on arbitrary ports, preferably controlled with env vars or similar
- You need to have something in front of your application, like nginx or Apache
- Whatever web server you have should be able to "hang"/pause/suspend requests while you switch application versions
- Each version you create needs to be aware of what the previous version did. Breaking changes need to happen across multiple versions: soft-deprecate something in one release, then actually "break" it in a later release
- You need some sort of healthchecking that can tell you whether the new version is OK

The implementation (a sketch in code follows below):

- You have Version 1 of your backend running on port X
- You want to deploy the new version, so you deploy it to your server, but the web server in front still serves requests from Version 1
- Run healthchecks against Version 2
- Once they pass, tell the web server to "pause" in-flight requests
- Switch the web server configuration to use the Version 2 application instead (this can be combined with the previous step; `nginx reload` would combine these, for example)
- Stop Version 1
- Repeat for each new version

And now you've deployed a new version of your application without any failing requests.

(Sidenote: if you have a low amount of traffic, you can replace hanging/pausing requests with a graceful shutdown of your web server, meaning it waits until no requests are pending.)
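A minimal sketch of those implementation steps in Python, assuming nginx in front of the app, a /health endpoint, and a systemd unit per version; the paths, ports, and unit names are all illustrative assumptions, not a drop-in script:

    import subprocess
    import time
    import urllib.request

    UPSTREAM_CONF = "/etc/nginx/conf.d/app_upstream.conf"  # hypothetical nginx include file

    def healthy(port: int, tries: int = 10) -> bool:
        """Poll the new version's health endpoint until it passes or we give up."""
        for _ in range(tries):
            try:
                with urllib.request.urlopen(f"http://127.0.0.1:{port}/health", timeout=2) as r:
                    if r.status == 200:
                        return True
            except OSError:
                pass
            time.sleep(1)
        return False

    def deploy(new_port: int, old_unit: str, new_unit: str) -> None:
        # Version 2 starts on its own port while Version 1 keeps serving traffic.
        subprocess.run(["systemctl", "start", new_unit], check=True)
        if not healthy(new_port):
            subprocess.run(["systemctl", "stop", new_unit], check=True)
            raise RuntimeError("new version failed healthchecks; old version untouched")
        # Point nginx at the new port. `nginx -s reload` lets old workers finish
        # their in-flight requests while new workers pick up the new upstream.
        with open(UPSTREAM_CONF, "w") as f:
            f.write(f"upstream app {{ server 127.0.0.1:{new_port}; }}\n")
        subprocess.run(["nginx", "-s", "reload"], check=True)
        subprocess.run(["systemctl", "stop", old_unit], check=True)

    deploy(new_port=8002, old_unit="app-v1", new_unit="app-v2")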
EnKopVand almost 3 years ago
I don’t know much about Kubernetes (I suspect I soon will, since our current process annoys me) and I’m not a DevOps engineer (but we don’t have any, and every consultant we’ve tried is basically just a little more confident with the official documentation than we are).

Anyway, how we’ve done it is by setting up a pipeline that builds a Docker container from a repository and, if it passes the build phase, deploys it to a deployment slot on a cloud application/function/whatever you call them. Then, depending on whether or not the deployment passes the criteria set up for it, it either automatically swaps deployment slots with the production slot or waits for manual confirmation to do so. On anything that is allowed a reload, we don’t swap deployment slots but instead redeploy, typically with minimal downtime.

The reason it’s annoying is that it takes a lot of time to set up for each pipeline, and when you need to swap deployment slots with no downtime you also have to handle things like build processes taking global variables, or clients needing to be told to reload parts of them.

I’m not sure Kubernetes is the answer for us, but we’re certainly going to look for a way to make the whole process smarter, as it sometimes takes more time to set up the pipeline and deployment environment than to build the service that needs to be deployed.
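For the "health gate, then swap" step specifically, a rough sketch assuming an Azure Web App with a staging slot and the `az` CLI on PATH; the resource names and the /health endpoint are placeholders:

    import subprocess
    import urllib.request

    def staging_healthy(url: str) -> bool:
        """Return True if the staging slot answers its healthcheck."""
        try:
            with urllib.request.urlopen(url, timeout=5) as r:
                return r.status == 200
        except OSError:
            return False

    if staging_healthy("https://myapp-staging.azurewebsites.net/health"):
        # Swapping slots re-points production traffic at the already-warm
        # staging instance, which is what makes the cutover near zero-downtime.
        subprocess.run(
            ["az", "webapp", "deployment", "slot", "swap",
             "--resource-group", "my-rg", "--name", "myapp", "--slot", "staging"],
            check=True,
        )
    else:
        raise SystemExit("staging failed healthcheck; leaving production untouched")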
comprev almost 3 years ago
Zero-downtime deployments existed for years (perhaps decades?) before k8s was released to the public.

Controlling the upstream traffic via haproxy/nginx allowed Ops to roll out blue-green, canary and waterfall/rolling deployment methods, albeit with more human interaction.

DigitalOcean has this [0] article from 2014 on load balancing.

[0] https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts
jka almost 3 years ago
This article provides one possible approach (and includes links to an example implementation): https://blog.jakubholy.net/2013/09/05/blue-green-deployment-without-breaking-sessions-with-haproxy-and-jetty/
atmosx almost 3 years ago
Assuming we're talking about over-HTTP traffic, you can use a load balancer in front of your Kubernetes cluster.

We did this at work and it works fine (blog post will be out soon). It allows us to decommission a cluster by removing it from the LB. There's nothing new about this technique.

We're using DOKS + Cloudflare Traffic, but you can use any LB service (no affiliation; as with all products, there are pros and cons).

Once the setup is ready, operations are easy (see the sketch below):

a) Remove the cluster from the LB

b) Perform cluster operations (ingress upgrade, k8s upgrade, possibly disruptive daemonset operations)

c) Add the cluster back to the LB

Another pro is that when a region has cloud-provider-level issues (happened with FRA1 a few days ago) we can remove the cluster from the LB and stop worrying about it until the issue is fixed. LBs have health checks and such to automate the addition/removal of clusters.
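A sketch of automating steps (a) through (c). The load-balancer API below is entirely hypothetical; with Cloudflare Load Balancing you would instead toggle the cluster's origin pool through their API, so substitute your provider's real calls:

    import json
    import time
    import urllib.request

    LB_API = "https://lb.example.internal/v1/pools/web/origins"  # hypothetical endpoint

    def set_cluster_enabled(cluster: str, enabled: bool) -> None:
        """Enable or disable one cluster's origin in the (hypothetical) LB pool."""
        body = json.dumps({"origin": cluster, "enabled": enabled}).encode()
        req = urllib.request.Request(LB_API, data=body, method="PATCH",
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=10)

    def run_cluster_maintenance() -> None:
        """Placeholder for step (b): ingress upgrade, k8s upgrade, daemonset work."""
        time.sleep(1)

    set_cluster_enabled("doks-fra1", False)  # (a) drain traffic away from the cluster
    run_cluster_maintenance()                # (b) do the disruptive work
    set_cluster_enabled("doks-fra1", True)   # (c) add the cluster back to the LB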
futhey almost 3 years ago
Dokku does this.

You deploy either from the command line or via something like a GitHub Action (or CI).

Containers 1 and 2 briefly run concurrently, and networking is updated to point to the new container once it finishes its build phase and passes any pre-defined tests (or after 10 seconds of running without crashing).
init-as almost 3 years ago
ECS does this automatically: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-ecs.html
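The rolling behaviour is configured on the ECS service itself; a minimal boto3 sketch, where the cluster, service, and task-definition names are placeholders and AWS credentials are assumed to be configured:

    import boto3

    ecs = boto3.client("ecs")
    ecs.update_service(
        cluster="my-cluster",
        service="my-service",
        taskDefinition="my-task:42",  # the new revision to roll out
        deploymentConfiguration={
            # Keep 100% of desired tasks healthy while allowing up to 200%
            # during the roll, so new tasks start before old ones stop.
            "minimumHealthyPercent": 100,
            "maximumPercent": 200,
        },
    )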
toast0 almost 3 years ago
If you want no-downtime server updates, you have choices:

a) Remove server(s) from load balancing, finish requests/connections in progress, restart with new software, add back to load balancing

b) Hot loading

c) Start a new server instance and pass it the listen socket (or, if your OS isn't great, drop SYNs for a bit while you close the listen socket on the old server and open a new listen socket for the new server)

I like hotloading, but it's not appropriate for all updates, so you need a way to handle restart-based updates as well.
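For option (c), one common way to hand over the port without dropping SYNs is SO_REUSEPORT (the literal version passes the listen socket's fd over a Unix socket via SCM_RIGHTS). A toy Python sketch, assuming Linux:

    import socket

    def make_listener(port: int) -> socket.socket:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Old and new server processes both set SO_REUSEPORT and bind the same
        # port; the kernel balances new connections between them until the old
        # process stops accepting, drains in-flight connections, and exits.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
        s.bind(("0.0.0.0", port))
        s.listen(128)
        return s

    srv = make_listener(8080)
    conn, addr = srv.accept()  # serve as usual; the old process exits once drained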
smackeyacky almost 3 years ago
Use container registries and automatic pull deployment. AWS Fargate and Azure App Services both allow this method of working.

I.e. a GitHub Action to do your build and test. Don't do the docker push to the repository unless it passes the tests. Always do your docker push from the correct branch. If you need approval for release, build that into the pipeline.

Azure Pipelines is pretty much the same deal.

You don't need Kubernetes for a few services; it only gets useful when you have a lot.
mhoad almost 3 years ago
I use Cloud Run, which has all of this built into it by default.

https://cloud.google.com/blog/products/serverless/cloud-run-now-supports-gradual-rollouts-and-rollbacks
technological almost 3 years ago
HashiCorp Nomad. It is not as complex as Kubernetes. Been using it to manage 1000s of servers.
aristofun almost 3 years ago
Docker Swarm