
Managing database schema changes without downtime

267 points by jbaviat, about 7 years ago

13 comments

drinchev, about 7 years ago

Having no downtime when doing migrations is not an easy topic.

> These defer drops will happen at least 30 minutes after the particular migration referenced ran (in the next migration cycle), giving us peace of mind that the new application code is in place.

Pretty much that's what I've seen before. You do a series of deployments + migration scripts. Usually:

- Migrate with backward-compatible modifications of the schema
- Release application logic that takes advantage of both the migrated and to-be-deprecated tables
- Possibly an extra migration to sync the new / old tables
- Release application logic that stops using the to-be-deprecated tables
- Migrate with destructive modifications to remove the old table

Usually that's a huge, complicated process which might be replaced with one migration and a few minutes of downtime. Sadly some companies can't afford it, so they do such changes with weeks of planning.

I usually vote for having some planned non-working-hours downtime.
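The expand-contract sequence in the comment above can be sketched end to end. This is a minimal illustration using SQLite as a stand-in database; the table and column names (renaming `fullname` to `display_name`) are invented for the example.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, fullname TEXT)")
db.execute("INSERT INTO users (fullname) VALUES ('Ada'), ('Grace')")

# Phase 1: backward-compatible schema change -- old code keeps working.
db.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# Phase 2: new application code writes both columns (dual writes).
db.execute("INSERT INTO users (fullname, display_name) VALUES ('Linus', 'Linus')")

# Phase 3: sync migration copies remaining old values into the new column.
db.execute("UPDATE users SET display_name = fullname WHERE display_name IS NULL")

# Phase 4: application code now reads/writes only display_name.
# Phase 5: destructive migration removes the old column (done here via a
# table rebuild, which also works on engines without DROP COLUMN support).
db.executescript("""
    CREATE TABLE users_new (id INTEGER PRIMARY KEY, display_name TEXT);
    INSERT INTO users_new SELECT id, display_name FROM users;
    DROP TABLE users;
    ALTER TABLE users_new RENAME TO users;
""")

rows = db.execute("SELECT display_name FROM users ORDER BY id").fetchall()
print(rows)  # [('Ada',), ('Grace',), ('Linus',)]
```

Note that every phase leaves the database in a state both the old and new application versions can tolerate, which is the whole point of the extra steps.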
YorickPeterse, about 7 years ago

At GitLab we do the following:

We have two migration directories: db/migrate and db/post_migrate. If a migration adds something (e.g. a table or a column) or migrates data that existing code can deal with (e.g. populating a new table), then it goes in db/migrate. If a migration removes or updates something that first requires a code deploy, it goes in db/post_migrate. We combine this with a variety of helpers in various places to allow for zero-downtime upgrades (if you're using PostgreSQL). For example, to remove a column we take these steps:

1. Deploy a new version of the code that ignores the column we will drop (this is a matter of adding `ignore_column :column_name` in the right class).
2. Run a post-deployment migration that removes said column.
3. In the next release we can remove the `ignore_column` line (we require that users upgrade at most one minor version for online upgrades).

The migrations in db/post_migrate are executed by default, but you can opt out by setting an environment variable. In the case of GitLab.com this results in the following deployment procedure:

1. Deploy code with this variable set (so we don't run the post-deployment migrations).
2. Re-run `rake db:migrate` on a particular host without setting this environment variable.

For big data migrations we use Sidekiq to run them in the background. This removes the need for a deployment procedure taking hours, though it comes with some additional complexity.

More information about this can be found at the following places:

1. https://docs.gitlab.com/ee/update/README.html#upgrading-without-downtime
2. https://docs.gitlab.com/ee/development/what_requires_downtime.html#doc-nav
3. https://docs.gitlab.com/ee/development/background_migrations.html
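The opt-out mechanism described above can be sketched as a simple gate in the migration runner. This is a hypothetical illustration, not GitLab's actual implementation; the directory lists and the `SKIP_POST_DEPLOYMENT_MIGRATIONS` variable name are assumptions for the example.

```python
import os

def migrations_to_run(migrate, post_migrate, env=os.environ):
    """Return pending migrations; skip post-deploy ones when the flag is set."""
    pending = list(migrate)
    if env.get("SKIP_POST_DEPLOYMENT_MIGRATIONS") != "true":
        pending += list(post_migrate)
    return pending

# Deploy step 1: run only the regular (backward-compatible) migrations.
step1 = migrations_to_run(["add_new_column"], ["drop_old_column"],
                          env={"SKIP_POST_DEPLOYMENT_MIGRATIONS": "true"})

# Deploy step 2: re-run without the flag so post-deploy migrations execute too.
step2 = migrations_to_run(["add_new_column"], ["drop_old_column"], env={})

print(step1)  # ['add_new_column']
print(step2)  # ['add_new_column', 'drop_old_column']
```

The key property is that destructive migrations never run until a code version that no longer depends on the dropped objects is live.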
michaeldejong, about 7 years ago

Researcher/author of a tool [0, 1] also attempting to tackle this problem here.

Unfortunately zero-downtime schema changes are even more complex than suggested here. Although the expand-contract method described in the post is a good approach to tackling this problem, the mere act of altering a database table that is in active use is a dangerous one. I've found that some trivial operations, such as adding a new column to an existing table, can block database clients from reading from that table through full table locks for the duration of the schema operation [2].

In many cases it's safer to create a new table, copy the data over from the old table to the new table, and switch clients over. However, this introduces a whole new set of problems: keeping data in sync between the tables, "fixing" foreign key constraints, etc.

If there are others researching/building tooling for this problem, I'd love to hear from you.

[0] http://github.com/quantumdb/quantumdb
[1] https://speakerdeck.com/michaeldejong/icse-17-zero-downtime-sql-database-schema-evolution-for-continuous-deployment-1
[2] http://blog.minicom.nl/blog/2015/04/03/revisiting-profiling-ddl-statements-mysqls-return/
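The "new table + copy + switch" approach mentioned above can be sketched in miniature: a trigger mirrors writes arriving on the old table into the new one while the backfill runs, so readers can switch over without losing rows. SQLite is used as a toy stand-in here, and the schema (converting integer cents to a REAL amount) is invented for the example.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount_cents INTEGER)")
db.execute("INSERT INTO orders (amount_cents) VALUES (100), (250)")

# New schema: amount stored as REAL dollars instead of integer cents.
db.execute("CREATE TABLE orders_v2 (id INTEGER PRIMARY KEY, amount REAL)")

# Keep the tables in sync: mirror every insert on the old table.
db.execute("""
    CREATE TRIGGER orders_sync AFTER INSERT ON orders BEGIN
        INSERT INTO orders_v2 (id, amount)
        VALUES (NEW.id, NEW.amount_cents / 100.0);
    END
""")

# Backfill existing rows; then a write arrives mid-migration via the old table.
db.execute("INSERT INTO orders_v2 SELECT id, amount_cents / 100.0 FROM orders")
db.execute("INSERT INTO orders (amount_cents) VALUES (999)")  # mirrored by trigger

rows = db.execute("SELECT amount FROM orders_v2 ORDER BY id").fetchall()
print(rows)  # [(1.0,), (2.5,), (9.99,)]
```

In a real system the hard parts are exactly what the comment lists: two-way sync for updates and deletes, foreign keys pointing at the old table, and atomically switching clients.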
smoyer, about 7 years ago

We're using FlywayDB [0] and many of the ideas in Martin Fowler's _Evolutionary DB_ article [1]. When database changes are breaking, we use a pre-deploy and post-deploy "pair" to create an intermediate DB state that both application versions will run against.

[0] https://flywaydb.org/documentation/migrations
[1] https://www.martinfowler.com/articles/evodb.html
danbruc, about 7 years ago

There is also anchor modeling [1], with quite a few publications [2] and even an online modeling tool [3] to play around with. It is essentially a method that yields a highly normalized bitemporal database model, so that every prior database state, schema and data, is a subset of the current database state.

I have personally not been able to use it yet, but I really like the idea behind it. It is probably not the ideal choice in general, due to its append-only nature ever increasing the consumed storage space, and due to its high degree of normalization, which may or may not have a negative impact on performance when implemented on top of a relational database, depending on the nature of your queries.

But if you do not have to deal with excessive amounts of data, if you expect a lot of schema evolution, or if you need traceability of all changes for auditing purposes, then anchor modeling might be worth a shot.

[1] https://en.wikipedia.org/wiki/Anchor_modeling
[2] http://www.anchormodeling.com/?page_id=360
[3] http://www.anchormodeling.com/modeler/latest/
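The append-only property described above can be shown in a toy form: attribute values are never updated in place, each change appends a new timestamped row, so any earlier state remains queryable. This is only a sketch of the idea, not real anchor modeling; the table and column names are invented.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE actor_name (
    actor_id INTEGER, name TEXT, changed_at INTEGER)""")

# Two "updates" to actor 1's name are two appended rows -- never an UPDATE.
db.executemany("INSERT INTO actor_name VALUES (?, ?, ?)",
               [(1, "Alice", 100), (1, "Alice B.", 200)])

def name_as_of(ts):
    # Latest value whose change timestamp is at or before ts.
    return db.execute("""SELECT name FROM actor_name
                         WHERE actor_id = 1 AND changed_at <= ?
                         ORDER BY changed_at DESC LIMIT 1""", (ts,)).fetchone()[0]

print(name_as_of(150))  # Alice
print(name_as_of(250))  # Alice B.
```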
pjungwir, about 7 years ago

This looks like a helpful approach to a tricky topic! I'm curious how the post-deploy migrations work when you run all your migrations at once, from the beginning, e.g. in Circle CI or when onboarding a new developer.

Also, do you do anything to protect yourself against migrations written at time t_0 and then run much later at time t_n? I've seen a lot of problems there when migrations use application code (e.g. ActiveRecord models) and then that code changes. (My solution is to never call model code from migrations, but there are other ways.) This isn't really specific to your article, I guess, but does your approach make managing that harder? Easier?

Does having that details table help at all when you have migrations on a long-lived branch that is merged after other migrations have been added? Or is the solution there just to rebase the branch and test before merging it in?

EDIT: I think this blog post by the Google SRE folks is great reading for people thinking about migrations and deployment:

https://cloudplatform.googleblog.com/2017/03/reliable-releases-and-rollbacks-CRE-life-lessons.html

I've never been comfortable with rolling back migrations in production, but their plan changed my mind that it can be done safely. Is your approach compatible with theirs?
jerkstate, about 7 years ago

This is cool, but I've started reducing my need for migrations by serializing most of the structured pieces of the record into a text string (JSON, for example), and only having separate columns for things that need to be joined (usually an id column) or selected on/indexed. I find this approach prevents me from needing to do migrations nearly as often. If you find that you really do need to index on a new field, adding a column is the easiest kind of migration to do.

Of course, sometimes you do still need to migrate, which is where strategies like this come in.
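The schema-light pattern described above looks roughly like this in practice: one JSON text column for structured fields, real columns only for what must be joined or indexed. A minimal sketch with invented names:

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, payload TEXT)")

# Adding a new field ("plan") needs no schema migration at all -- it's just JSON.
db.execute("INSERT INTO events (user_id, payload) VALUES (?, ?)",
           (42, json.dumps({"action": "signup", "plan": "pro"})))

user_id, payload = db.execute(
    "SELECT user_id, payload FROM events WHERE user_id = 42").fetchone()
print(json.loads(payload)["plan"])  # pro
```

The trade-off is that fields inside the blob cannot be efficiently filtered or indexed, which is exactly when the comment's "promote it to a real column" migration becomes necessary.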
theptip, about 7 years ago

> Migrate database to new schema... spin up new instance

Sadly this is more of a pain in Django, since it does not write default values to the database. To be able to safely run the migration first, one must manually rewrite the SQL for the migration. [1]

[1]: http://pankrat.github.io/2015/django-migrations-without-downtimes/
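The point above is about database-level defaults: if the new column carries its default in the schema itself, old application code that knows nothing about the column can keep inserting rows between the migration and the code deploy. A sketch of that property (SQLite here for brevity; the linked post concerns Django on PostgreSQL, where the default has to be written into the SQL by hand):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, email TEXT)")

# Migration runs first, with the default living in the database schema.
db.execute(
    "ALTER TABLE accounts ADD COLUMN status TEXT NOT NULL DEFAULT 'active'")

# Old code, unaware of the new column, keeps inserting without it -- safely.
db.execute("INSERT INTO accounts (email) VALUES ('a@example.com')")

row = db.execute("SELECT email, status FROM accounts").fetchone()
print(row)  # ('a@example.com', 'active')
```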
stuff4ben, about 7 years ago

Sigh, wishing there was some magic sauce for this. But currently I will have to sit through a 14 HOUR downtime while my team upgrades our SonarQube DB to 6.7.
zie, about 7 years ago

Just put your database schema in a VCS and get on with life: http://www.liquibase.org/
PunchTornado, about 7 years ago

Is it really zero downtime as the title suggests, or minimal downtime?
xstartup, about 7 years ago

I wonder how this is done in DynamoDB or Mongo.
lndolent, about 7 years ago

I'd love to see an article about letting your users fucking deal with it, because no one really needs this much uptime anymore.