Background:

- About 9 months ago we burned 2 separate days handling migrations on Supabase. A Supabase CLI and/or local Docker version change made it impossible for my teammate to bring up the db on their local machine. Migrations couldn't be applied, and a fresh pull didn't work either. I thought this was a fluke.

- Working with a new team/stack (Flask, Alembic, Postgres), I've twice in the last month had to change the `alembic_version` row in the db by hand just to get the migrations to apply. I think we got into that situation because we rewrote history: a new migration was pointed at a previous revision (HEAD^) instead of the latest one (HEAD). I'm not sure what our motivation was. (A rough sketch of what I mean follows after these questions.)

- What am I / are we doing wrong?
- What's the right/robust way to do this?
- How are teams of size >2 handling this?
- Isn't this scary?
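For context, here is roughly what I think happened, as a hypothetical Alembic migration file. The revision IDs, file name, and table/index names are all made up; the point is only how Alembic chains migrations through `down_revision`:

```python
# versions/0003_add_email_index.py -- hypothetical migration file
from alembic import op

# Alembic chains migrations through down_revision. A new migration
# should point at the current head:
revision = "c3f9a1"
down_revision = "b2e8d0"   # the latest revision (HEAD)

# Ours instead pointed at the head's parent, something like:
#   down_revision = "a1d7c9"   # HEAD^, which already had a child
# That produces two heads, Alembic refuses to upgrade until the
# branches are merged, and we ended up editing alembic_version by hand.

def upgrade():
    op.create_index("ix_users_email", "users", ["email"])

def downgrade():
    op.drop_index("ix_users_email", table_name="users")
```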
> How are teams of size >2 handling this?

Directory with .sql files starting with a number. Each file contains a set of changes to the db. The db has a table holding the number that was applied last. To migrate your db, you check whether you have a file with a number higher than the one in the db, and if so you apply that file. That's it.

Sounds like you are working in a way that is not intended by your tool/framework.
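A minimal sketch of that approach in Python (the directory layout, table name, and use of sqlite as a stand-in driver are assumptions on my part, not from the comment above):

```python
import pathlib
import sqlite3  # stand-in driver; the same logic applies to Postgres etc.

MIGRATIONS_DIR = pathlib.Path("migrations")  # 001_users.sql, 002_add_email.sql, ...

def migrate(conn: sqlite3.Connection) -> None:
    # Bookkeeping table holding the number of the last applied migration.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER NOT NULL)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0

    # Apply, in order, every file numbered higher than what the db has seen.
    for path in sorted(MIGRATIONS_DIR.glob("*.sql")):
        number = int(path.name.split("_", 1)[0])
        if number <= current:
            continue
        conn.executescript(path.read_text())
        conn.execute("INSERT INTO schema_version (version) VALUES (?)", (number,))
        conn.commit()
```

The zero-padded numeric prefix, sorted lexicographically, is what keeps the ordering deterministic.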
I use dbmate, and for a deployment I package the migration files into a Docker container which runs and then applies the changes.

Firstly, I use a devcontainer for development so I know my versions line up.
dbmate uses .sql files, which also makes things a lot easier.

You can see my setup here: https://github.com/bionic-gpt/bionic-gpt

Have a look at the CONTRIBUTING.md to get an idea of the dev workflow.
> because we rewrote the history

Don't rewrite shared history.

As for how migrations are run: at my last place, each team/service had their own data store. Numbered sql files and forward-compatible changes only. Sometimes this meant application code would have to write to two locations and, in a later update, change the read location (a sketch of that dual-write pattern follows below).

Current gig, everything is managed by the Django ORM. Great when you have a single db; it sucks to scale out to sharded tables and sharded db instances, and it's a pain for migrating to new data stores.
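A rough sketch of that dual-write step, assuming a psycopg-style connection whose execute() returns a cursor; the table and column names are made up:

```python
# Release N: write to both the old and the new location, but keep reading
# from the old one. A later release switches the read (and eventually
# drops the old writes) once the new table is backfilled.
def save_display_name(db, user_id: int, display_name: str) -> None:
    db.execute(
        "UPDATE users SET display_name = %s WHERE id = %s",  # old location
        (display_name, user_id),
    )
    db.execute(
        """
        INSERT INTO user_profiles (user_id, display_name)     -- new location
        VALUES (%s, %s)
        ON CONFLICT (user_id) DO UPDATE SET display_name = EXCLUDED.display_name
        """,
        (display_name, user_id),
    )

def load_display_name(db, user_id: int) -> str:
    # Still the old location; the switch to user_profiles happens in a
    # later, separately deployed update.
    return db.execute(
        "SELECT display_name FROM users WHERE id = %s", (user_id,)
    ).fetchone()[0]
```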
I never experienced anything like this in Rails or Phoenix (Ecto) land.

The only time I had problems with database migrations was in JavaScript with Prisma. But that was because we had a local dev dataset, and the CEO had gone into past migration files and edited them manually. Prisma exploded every time and it was a pain to fix.

It sounds like your pains are also coming from people manually editing past migration files and committing them. Don't do that!