
Ask HN: It's 2016, how do you do your automated zero downtime DB migrations?

10 points by raimille1 over 9 years ago

3 comments

umut over 9 years ago
Sometimes zero downtime migrations are not worth the effort. In most of the cases, you end up thinking of a 3-state picture (BEFORE / DURING / AFTER) and the delta between DURING and AFTER needs some manual work. As long as you don't drop data or end up with irreversible changes, you manage the process and complete the migration just fine.
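The BEFORE / DURING / AFTER shape described above is often called an expand/contract migration. A minimal sketch of that idea, using SQLite as a stand-in for the production database (table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

# BEFORE -> DURING: expand. Add the new column as nullable so old code,
# which knows nothing about it, keeps working.
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")

# DURING: backfill existing rows while new code dual-writes both shapes.
conn.execute("UPDATE users SET email = name || '@example.com' WHERE email IS NULL")

# DURING -> AFTER: contract. Only once every reader and writer uses the
# new column do you enforce it or drop the old one -- that last delta is
# the manual work mentioned above. Nothing here drops data or is
# irreversible, so rolling back stays easy.
rows = conn.execute("SELECT name, email FROM users ORDER BY id").fetchall()
print(rows)
```

The key property is that every intermediate state works with both the old and the new application code, which is what makes the DURING phase safe.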
orionblastar over 9 years ago
When I did DB migrations, I first tested my code on a test server that had a copy of the production database, to see how well it would go and whether there were any problems. I would test all code on a test server before putting it on a production server.

Migrations were usually scheduled at midnight and finished before anyone came into work. Before the migration, a backup of all data was done. It is the only safe way to do things.
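The backup-before-migrating step can be scripted. A hedged sketch (the host and database names are invented for illustration, and it assumes mysqldump is on the PATH) that assembles the dump command before a migration runs:

```python
import datetime

def build_backup_command(host: str, database: str) -> list:
    """Assemble a mysqldump invocation that snapshots `database` before
    a migration runs. --single-transaction gives a consistent InnoDB
    dump without locking writers."""
    stamp = datetime.date.today().isoformat()
    dump_file = "{}-{}.sql".format(database, stamp)
    return [
        "mysqldump",
        "--single-transaction",   # consistent snapshot for InnoDB tables
        "--host", host,
        database,
        "--result-file={}".format(dump_file),
    ]

cmd = build_backup_command("db-test.internal", "appdb")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # run against the test copy first
```

Building the command as a list (rather than a shell string) keeps the invocation safe to pass straight to subprocess without quoting issues.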
bobf over 9 years ago
Some aspects of zero downtime DB migrations require your changes and deployment protocols to follow certain rules, so I'm not sure a generic automated solution is possible. I was writing and speaking a fair amount about this back in 2012, and the concepts are still basically the same today -- I'll outline the core ideas here and provide a few links with more configuration examples, etc. at the end.

The basic idea behind zero downtime DB migrations is to use a Blue-Green deployment model. Blue-Green deployment uses two identical production environments, called "Blue" and "Green", with a router or proxy set up in front of them. Only one of the environments serves traffic, while the other is idle.

For our example, let's say Blue is initially live. When you deploy new application code, it is deployed to Green and tested. Once you're happy with it, you switch the router or proxy to send requests to Green. Blue becomes idle, and then you update Blue.

With MySQL as the database, I like to call this setup "Active-Passive", using a Master-Master infrastructure. Normally, Master-Master implies an "Active-Active" setup, with each instance able to accept reads and writes. (People who have used Active-Active Master-Master MySQL recognize the big downside: conflicting writes break replication.)

Using an Active-Passive approach for zero downtime schema changes lets you avoid that problem of conflicting writes breaking replication. Basically, think of Active-Passive as the ideas of Blue-Green deployment extended to your database.

The basic process is:

- Run STOP SLAVE on both the Active and Passive servers.
- Run the SQL for the schema change on the Passive server.
- Run START SLAVE on the Active server.
- Wait for replication lag on the Active server to become small enough (ideally about a second). You can check replication lag with SHOW SLAVE STATUS "Seconds_Behind_Master", although that isn't 100% reliable and you are better off with something like Percona Toolkit's pt-heartbeat.
- Run LOCK TABLES on the Active server for the final replication catch-up.
- Ensure replication lag is zero on the Active server.
- Modify your proxy configuration to swap the Active/Passive designations.
- Unlock the new Passive server.
- Run START SLAVE on the new Active server.

Now, there are some limitations. The new schema must be backwards compatible with the previous schema:

- Add new columns with triggers rather than modifying in place.
- New columns cannot be required immediately, or old writes will not replicate appropriately.
- You can't use server-generated data functions (UUID, NOW, RAND, etc.).

And it can't conflict with pending writes:

- No auto-increment INSERT unless the application doesn't insert to that table.
- No DROP COLUMN or DELETE of rows if they are used in the previous schema version.

Resources:

My article in SysAdvent on zero downtime MySQL schema changes: http://sysadvent.blogspot.com/2012/12/day-3-zero-downtime-mysql-schema-changes.html

Slides from a presentation I gave at a DevOps conference in 2012: http://www.completefusion.com/zero-downtime-deployments-with-mysql/

Pivotal Labs on Blue-Green deployment: https://docs.pivotal.io/pivotalcf/devguide/deploy-apps/blue-green.html

(As a side note, if you haven't heard of it before, SysAdvent is awesome: ~25 sysadmin/devops posts published annually in December.)
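The switchover steps above can be sketched as a dry-run state machine. The classes and attribute names here are invented stand-ins for the real MySQL admin commands (STOP SLAVE, LOCK TABLES, the proxy flip), just to make the ordering of steps concrete:

```python
class Server:
    """Toy stand-in for one MySQL master in a master-master pair."""
    def __init__(self, name):
        self.name = name
        self.replicating = True   # replication (SLAVE) thread running
        self.locked = False
        self.schema_version = 1
        self.lag_seconds = 0.0

def switch_active_passive(active, passive, new_version):
    # 1. STOP SLAVE on both the Active and Passive servers.
    active.replicating = passive.replicating = False
    # 2. Run the schema change on the Passive server only.
    passive.schema_version = new_version
    # 3. START SLAVE on the Active server; wait for lag to shrink
    #    (in real life, poll pt-heartbeat until it reports caught up).
    active.replicating = True
    active.lag_seconds = 0.0
    # 4. LOCK TABLES on the Active server for the final catch-up.
    active.locked = True
    assert active.lag_seconds == 0, "must be fully caught up before switching"
    # 5. Repoint the proxy: the Active/Passive roles swap.
    new_active, new_passive = passive, active
    # 6. Unlock the new Passive; START SLAVE on the new Active.
    new_passive.locked = False
    new_active.replicating = True
    # The new Passive still runs the old schema; repeat the DDL there next.
    return new_active, new_passive

blue, green = Server("blue"), Server("green")
new_active, new_passive = switch_active_passive(blue, green, new_version=2)
print(new_active.name, new_active.schema_version)  # green 2
```

Note that after the flip the old Active still carries the old schema, which is why the process is then repeated against it -- exactly the Blue-Green rhythm described above.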