I am convinced that data migration is one of the hardest problems in data management and systems engineering.<p>There are basically no solutions today that satisfy fundamental requirements such as minimizing downtime and guaranteeing correctness.<p>It is _such_ a huge problem that many inexperienced developers see kicking the problem down the line with NoSQL document storage as a viable alternative (it isn't; you'll either be migrating all data forever and special-casing every old version of your documents, or writing even more convoluted migration logic).<p>It's also clear that even the most modern ORMs and query builders were not designed with the issues that arise in migrating data in mind.<p>It would be refreshing to see more research devoted to this problem. Unfortunately, migrations end up being so different from each other, with such heterogeneous requirements, that we'll probably be working on this for a very long time.
This is an amazing writeup. I'm currently solving the "migrations" problem for a side project of my own, and have basically resigned myself in the short term to being OK with short downtime for the sake of making migrations somewhat trivial.<p>And honestly? I hate this answer. As a solo dev it's pragmatic, but the solutions described in this article are _SO NICE_ that I'd love to leverage them.<p>If there's any way that those deprecate_column and rename functionalities could make their way into OSS/upstream support, I'd have a field day. (Those who know more about PG than I do and may be able to suggest another workaround, feel free; I'm very much learning this space as I go.)<p>If nothing else, thanks to the Benchling team for taking the time to write such a clear yet technical exposé. This really hit the sweet spot of "explanations without unnecessary verbosity, technical without being impenetrable, and giving sufficient explanation of motivations and tradeoffs/pitfalls", and it will give me a north star for where I aim my own DB work.
In our startup we moved away from Alembic to using plain SQL files for migrations, which (in our experience) is more robust and allows more control over the actual migration process. We wrote a simple migration manager class that loads a YAML config and a series of SQL files from a directory. Each migration is defined as two files of the form "[number]_[up/down]_[description].sql" and tied to a DB version; the YAML config specifies the name of a version table that contains the current version in the database. The manager then reads the current version from the table, compares it to the requested one, and executes the necessary SQL files.<p>Alembic is great for many simple use cases, but we found that for a production system it often isn't easy to maintain compatibility between two different DB systems like Postgres and SQLite anyway, as that would mean either adding a lot of feature switches and custom logic to our code or not using most of the interesting native Postgres features. Therefore Alembic offered very little benefit over a plain SQL file in terms of functionality, and it also made it harder to generate correct migrations in some cases, as the auto-generation process does not work very reliably in our experience and some things are buggy/inconsistent, e.g. the creation and deletion of enum types. In addition, we found that it's much easier to write complex migration logic (e.g. create a new column and populate it with data from a complex SELECT statement) directly in SQL. The last point is that we can of course execute these migrations using any programming language / tool we like (for example we also wrote a small Go library to handle the migrations), which is a nice bonus.<p>That said, we also heavily use SQLAlchemy in our Python backend code and like it a lot.
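The manager itself is only a few dozen lines. A stripped-down sketch of the idea (not our exact code; the config keys, version-table name, and the psycopg2/PyYAML choice are just for illustration):<p><pre><code> import re
 from pathlib import Path

 import psycopg2  # illustrative; any driver/language works
 import yaml

 FNAME = re.compile(r"^(\d+)_(up|down)_.+\.sql$")

 class MigrationManager:
     def __init__(self, config_path):
         cfg = yaml.safe_load(Path(config_path).read_text())
         self.table = cfg["version_table"]      # e.g. "schema_version"
         self.dir = Path(cfg["migrations_dir"])

     def _scripts(self, direction):
         # {version: sql} for every "[number]_[up/down]_[description].sql" file
         out = {}
         for p in self.dir.iterdir():
             m = FNAME.match(p.name)
             if m and m.group(2) == direction:
                 out[int(m.group(1))] = p.read_text()
         return out

     def migrate(self, conn, target):
         with conn.cursor() as cur:
             cur.execute(f"SELECT version FROM {self.table}")
             current = cur.fetchone()[0]
             if target > current:   # apply "up" scripts in ascending order
                 steps = [(v, sql) for v, sql in sorted(self._scripts("up").items())
                          if current < v <= target]
             else:                  # apply "down" scripts in descending order
                 steps = [(v - 1, sql) for v, sql in sorted(self._scripts("down").items(), reverse=True)
                          if target < v <= current]
             for version, sql in steps:
                 cur.execute(sql)
                 cur.execute(f"UPDATE {self.table} SET version = %s", (version,))
         conn.commit()
 </code></pre>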
This covers a lot of ground that I've recently had to learn the hard way.<p>One item I've been considering: under Downtime, one reason given for flaky migrations is "long running transactions".<p>I've seen this too, and wonder if the correct fix is actually to forbid long-running transactions. Typically, if the naive long-running transaction does something like:<p><pre><code> with transaction.atomic():
for user in User.objects.all():
user.do_expensive_thing_to_related_objects()
</code></pre>
You can often recast that migration to something more like<p><pre><code> for user in User.objects.all():
     with transaction.atomic():
         # Re-read and lock just this row (i.e. SELECT ... FOR UPDATE)
         user = User.objects.select_for_update().get(id=user.id)
         user.do_expensive_thing_to_related_objects()
 </code></pre>
This example is somewhat trivial, but in most cases I've seen, you can fetch your objects outside of the transaction, compute your expensive thing, and then lock the row for the individual item you're working on (with a sanity check that your calculation inputs haven't changed, e.g. check that the last_modified timestamp is the same, or better, that the values you're using are the same).<p>I've considered simply configuring the DB connection with a very short statement timeout (something like 5 seconds) to prevent anyone from writing a query that performs badly enough to interfere with other tables' locks.<p>Anyone tried and failed/succeeded in making this approach work?<p>The other subject that's woefully underdeveloped is writing tests for migrations; ideally I want to (in staging) migrate the DB forwards, run all the e2es and smoke tests with the pre-migration application code, migrate back (to test the down-migration), run the e2es again, and then really migrate forwards again. That would cover the "subtly broken deleted field" migration problem.<p>But how do we test that our migrations behave correctly in the face of long-running transactions? I.e. what's the failing test case for that bug?
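For the forward/back/forward loop, I'm picturing something roughly like this (Django; the app label, migration numbers, and test hook are all placeholders, and it assumes every migration defines a reverse):<p><pre><code> from django.core.management import call_command

 def check_migration_round_trip():
     call_command("migrate", "accounts", "0042")  # forward
     run_e2e_and_smoke_tests()                    # pre-migration app code, new schema
     call_command("migrate", "accounts", "0041")  # exercise the down-migration
     run_e2e_and_smoke_tests()
     call_command("migrate", "accounts", "0042")  # really migrate forward

 def run_e2e_and_smoke_tests():
     ...  # placeholder: kick off the e2e/smoke suites against staging
 </code></pre>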
Great post. I agree that we don't need automated post-deploy migrations. We just need automated pre-deploy migrations. Post-deploy migrations, for example to delete an unused column, can be implemented as pre-deploy migrations in a subsequent commit.
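In Django terms, for example, deleting the now-unused column is just an ordinary migration that ships one release after the code stops reading it (app and field names here are purely illustrative):<p><pre><code> from django.db import migrations

 class Migration(migrations.Migration):
     dependencies = [("accounts", "0042_stop_reading_legacy_flag")]
     operations = [
         migrations.RemoveField(model_name="user", name="legacy_flag"),
     ]
 </code></pre>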
You can also automatically generate some migrations with a proof search in linear logic, the way the beam[1] project does.<p>[1] <a href="https://tathougies.github.io/beam/schema-guide/migrations/#automatic-migration-generation" rel="nofollow">https://tathougies.github.io/beam/schema-guide/migrations/#a...</a>
In PostgreSQL, if you are using prepared statements and doing a 'SELECT *', then dropping or adding a column will make the prepared statement start failing. This is especially bad when you are doing transactions, because the failed statement will taint your transaction and you will have to restart it from the beginning. 'SELECT *' is effectively incompatible with prepared statements in PostgreSQL, which might also explain why SQLAlchemy explicitly names columns in SELECT statements. Rails does 'SELECT *', so prepared statements are the first thing I turn off in Rails/PostgreSQL projects. [Maybe they have a fix now?]
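It's easy to reproduce by hand (hypothetical table and DSN; psycopg2 shown here, but Rails' automatic prepared statements hit the same server-side error):<p><pre><code> import psycopg2

 conn = psycopg2.connect("dbname=mydb")  # placeholder connection string
 cur = conn.cursor()
 cur.execute("PREPARE get_users AS SELECT * FROM users")
 cur.execute("EXECUTE get_users")   # fine
 cur.execute("ALTER TABLE users ADD COLUMN nickname text")
 cur.execute("EXECUTE get_users")   # fails: "cached plan must not change result type"
                                    # ...and the whole transaction is now aborted
 </code></pre>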
How did it come to be that some portion of the industry uses the term "migration" to describe changes/updates to a database?<p>As far as I can tell, it's a really poor fit. It creates the expectation of moving an existing schema (and maybe data) from one host to another, or one environment to another. What's usually happening instead is essentially a schema diff / mutation.
I'm totally in love with Django's way of migrating databases.<p>It's not for every project, certainly, and you sometimes need to work around limitations of the ORM. And of course some people don't like ORMs in the first place.
Anyone else hold back on releasing side projects because having to do data migrations with stored user data prevents you from aggressively refactoring your code? Is there a good compromise for this?