
Ask HN: How does your development team handle database migrations?

269 points | by abetlen | about 6 years ago

What's your team's workflow for delivering features that require database migrations? How do you keep migrations from slowing down development as your team grows?

55 comments

liyanchang · about 6 years ago
I've been really happy with how my current company[0] has been doing migrations, and I've seen a couple of others do it, but it seems like it should be more widespread.

Database Schema as Code

Instead of writing up and down migrations, you define what the end state should look like, and the computer figures out how to get there. This is how the industry started managing server configuration (Puppet) and infrastructure (Terraform).

We use protocol buffers, so it was pretty straightforward to have a definition of what our tables should look like. We have a script that figures out the delta between two states (either proto files or a DB) and can calculate the schema-migration SQL (e.g. CREATE TABLE, etc.).

From there, we run it through a safety check. Any unsafe migration (either for data loss or performance issues, e.g. DROP TABLE) requires an extra approval file.

There's no real difference between an up migration and a down migration (except that one tends to result in an unsafe migration). It's calculable at CI time, so we can give devs a chance to look at what it's going to do and approve any unsafe migrations. API compatibility checks enforce that you need to deprecate before you can drop.

DML, that is, data changes, are handled via a standard checked-in SQL file, and CI will run it before the code deploy and after the schema migration.

Alembic is the one other place I've seen this concept (a couple of others mentioned it), so it's not new, but I'm surprised I haven't seen it in more places.

[0] Shameless plug: we're hiring if you're interested in changing how healthcare is paid for, delivered, and experienced. https://www.devoted.com/about/tech-jobs/
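The delta-computation step described here can be sketched in miniature with SQLite. This is only a sketch of the idea, not the commenter's actual tool: it diffs table names only (real tools also diff columns, indexes, and constraints), and the `DESIRED` schema is a hypothetical example.

```python
import sqlite3

# Desired end state: table name -> CREATE TABLE statement (hypothetical schema).
DESIRED = {
    "users": "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)",
    "orders": "CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)",
}

def schema_delta(conn):
    """Compare the live database against DESIRED and return the DDL needed
    to converge, separating destructive statements for extra approval."""
    live = {row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")}
    safe = [DESIRED[t] for t in DESIRED if t not in live]
    unsafe = [f"DROP TABLE {t}" for t in live if t not in DESIRED]
    return safe, unsafe

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE legacy (id INTEGER)")  # table no longer in DESIRED
safe, unsafe = schema_delta(conn)
# safe   -> the two missing CREATE TABLE statements
# unsafe -> ["DROP TABLE legacy"], which would require an approval file
```

A safety gate then becomes a one-line check: refuse to run if `unsafe` is non-empty and no approval is present.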
smoyer · about 6 years ago
I've read and reread a great article titled "Evolutionary Database Design" on Martin Fowler's website [0]. It describes database migrations as a process. We've found that for complex changes, we often need a pre-migration and a post-migration (before and after the code change, respectively).

We commit the migrations alongside the application code, and in our case we use FlywayDB [1]. The only downside is that this tool doesn't perform rollback operations automatically. You can always do them by writing another migration that goes forward to the past.

Another popular DB migration tool is Liquibase [2]. I don't have much experience with it, as it doesn't fit our build pipeline as well, but it does support and encourage defining a rollback for each migration.

[0] https://www.martinfowler.com/articles/evodb.html

[1] https://flywaydb.org/

[2] http://www.liquibase.org/

EDIT: HN is the new StackOverflow? I think this is a really important question for development teams, and yet I could see it being closed under the "Questions asking for tool or library recommendations are off-topic for Stack Overflow..." rule. Sad!
1337shadow · about 6 years ago
It doesn't matter what tool you use, as long as automated migrations are part of the automated deployment process.

A lot of implementations look like this: create a migrations directory; add an initial migration script to it; make a migrate command that executes before the service starts but after the backup.

The migrate command recipe: create a migrations table in the DB if it doesn't exist, otherwise fetch the list of migrations that have already been applied to this database; then apply the initial migration script if its name is not found in that migrations table, and insert its name into the table so that it will not be executed again in this database by the migrate command.
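That recipe is small enough to sketch in full. The following is a minimal, hypothetical implementation using SQLite; a production version would add transactions per script, locking, and error reporting:

```python
import sqlite3
from pathlib import Path

def migrate(conn, migrations_dir):
    """Apply every .sql script in migrations_dir (in name order) that is
    not yet recorded in the migrations table, then record it."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS migrations (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM migrations")}
    for script in sorted(Path(migrations_dir).glob("*.sql")):
        if script.name in applied:
            continue  # already run against this database
        conn.executescript(script.read_text())
        conn.execute("INSERT INTO migrations (name) VALUES (?)",
                     (script.name,))
    conn.commit()
```

Because applied scripts are recorded by name, running `migrate` a second time is a no-op, which is what makes it safe to call on every deployment.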
theclaw · about 6 years ago
We use dbup[0]. Its philosophy[1] is that you should treat your DB changes like code changes in source control, and only perform up migrations. I agree with this. We previously spent a lot of time and effort writing down migrations that were never, ever used. If you need a down migration, take a backup before running your up migrations.

We're a C# shop. Our DB migration scripts are simply named with a datestamp and a change description. The scripts are added to a console exe project as embedded resources, and everything is committed to source control. These exe projects are very thin and contain a single line of C# code that passes the command-line arguments directly into an in-house DB deployment library, installed via NuGet. This library handles command-line parsing and executes dbup in various ways, using the scripts embedded in the calling assembly.

The result is a simple executable file that, depending on its command line, can upgrade the DB directly, preview which scripts would run against a particular DB instance, test the DB deployment against a clean SQL Server LocalDB instance, or generate SQL scripts for DBAs.

One nice feature is that the exe can also return the connection string of the SQL Server LocalDB test instance to external code as a test fixture. We can use this to unit test our repository types directly against a real, clean database. When the test fixture is disposed by xUnit, the temporary DB is cleaned up and removed.

The console projects are built and tested as part of our CI builds, then pushed to Octopus Deploy[2] as packages. The Octopus deployment process simply runs the executable and passes it the connection string of the DB to update.

[0] https://dbup.github.io/

[1] https://dbup.readthedocs.io/en/latest/philosophy-behind-dbup/

[2] https://octopus.com/
barrkel · about 6 years ago
We use Rails migrations, which means incremental deltas rather than automatic derivation of the result. Usually, our more interesting migrations include data transformation, not merely adding/removing tables, columns, and indexes. For example, perhaps a field needs to be denormalized onto one of its associations to speed up queries, or a 1:1 relationship now needs to become 1:n. I think it's harder to build declarative tooling around this.

We have a tenanted architecture, so we need to run the same migrations on lots of different databases of wildly different sizes and data complexity. We test migrations by restoring a snapshot of a sample of customers, running the migration through, and running some smoke tests before and after that can be compared.

More recently, migrations are becoming a release bottleneck. That is, they take too long to run within a downtime window (fintech, so Sunday afternoon is when releases go out). We're looking at building tooling around Percona OSC, and using triggers to maintain data invariants for data migrations until code is able to catch up.

Migrations aren't what really slows us down, though. Instead, it's data volume, and the difficulty of writing efficient queries while also providing a flexible UI experience.
jolmg · about 6 years ago
It would be cool to have git for databases.

"Oh no! Our migration deleted columns without re-representing that data in the new manner, and our users have already made changes to the database, so we can't simply restore from a backup!"

Quick!

    dbgit checkout -b fixed-migration before-original-migration
    # *run fixed migration*
    dbgit rebase --onto fixed-migration after-original-migration master

Day saved!

If only it were so simple.
gigatexal · about 6 years ago
By hand and with careful consideration. No, wait: we use Alembic. It's actually pretty good. I like the notion of not doing any data-destroying migrations. For example, if you are adding a column that replaces a different one, keep them both. Then, at a later time, when it can be proven that no code paths touch the old column, drop the deprecated column. It's safer that way. But I've not seen this done in practice; it's just something I've been thinking about.
GlennS · about 6 years ago
I'm skeptical of tools that claim to handle schema migrations automatically or declaratively. I've had them fail at critical moments in the past, and I don't believe they can handle the most important cases.

I think numbered sequential migration steps are still the correct way to go. They should be written at the time the feature/change is developed, ideally by the person doing that development.

Whether they are in SQL or some sort of ORM thing probably doesn't matter that much.

I don't know how to handle branching/merging issues with this approach beyond "pay attention, be careful, talk to your colleagues about database changes you're making".

At my first company we stored the whole schema instead of the incremental steps to get there. We used SQL Delta to synchronize the real database to what we wanted.

SQL Delta is a good tool, but one time we ran into a case it couldn't handle during a big deployment at a large hospital and had to abort and roll everything back. Very embarrassing. We switched to incremental scripts after that.

SQL Delta is still a good tool if you have lots of views, user-defined functions, or stored procedures. As long as they aren't needed for the migration itself, you can just synchronize all of those at the end.

It's also a great tool for inspecting whether the result of your migration is what you expected.

I do like the idea of having easy access to the full schema text as well as the migrations (someone mentioned that Rails does this). Ideally I think this should be generated from the migrations and not checked into version control.
sakopov · about 6 years ago
At my .NET shop we use FluentMigrator [1], which lets you just run up/down migrations. Our deployments are completely automated, and, in all honesty, while this has been a great and painless way to manage migrations, there really is no silver bullet. Minor releases are dead simple. Larger releases require quite a bit of planning to make sure things are deployed in the correct order and SQL updates are backwards-compatible (at least during the deployment window), so that we don't incur downtime or completely break things. The only time we ever had to roll back was when one of these large releases deployed a non-backwards-compatible SQL script, which caused a lot of bad data during a 5-minute deployment window. So this is usually a good motivating factor to deploy slim releases as frequently as possible. And if you do that, you'll appreciate it when your DB migrations become boring, mundane things that just happen on auto-pilot.

[1] https://github.com/fluentmigrator/fluentmigrator
hestefisk · about 6 years ago
I hand-code my migrations for Postgres. When starting a project I create a base.sql file. Then for the first database change I create db/000001-<name of change>.sql, db/000002, [...] and so forth. When I want to create a new database I just run psql < base.sql followed by psql < db/*.sql. Very simple but extremely effective. I don't think any tool can handle migrations better than hand-coded SQL. Sometimes I use PL/pgSQL to handle procedural data migrations that cannot be solved with plain SQL.
alanfranz · about 6 years ago
What's your precise problem?

Migrate in a backwards-compatible manner, so that version N of the app works with the N+1 schema (e.g. add a column, but don't destroy existing ones; use triggers to keep data aligned). When all nodes for an app are at N+1, you can ship a new version with destructive changes (ones that would break N but not N+1). There's a Fowler article about this.
darkr · about 6 years ago
We wrote a tool that pulls in several open-source tools, the most important of which is sqitch[1]: a beautifully simple Perl-based tool for managing migrations that supports rollbacks and verifications, all written in SQL.

It's all hooked up into CI, as well as being available from the CLI for developers, so they can open a PR and get feedback, as well as see what locks are required for a given migration.

1: https://sqitch.org/
rocgf · about 6 years ago
For Java projects, the most common one tends to be Flyway, in my experience. There's also Liquibase, which I've heard of but never used.

Flyway is okay in my experience. I can't complain about it, but I can't praise it either; it just does what you'd expect.
frankwiles · about 6 years ago
Django migrations. It's a truly great tool.
archarios · about 6 years ago
At my previous job we either didn't migrate and wrote application-level transforms that would update records as they were encountered by users (MongoDB), or we had a custom-built migration system that ran JavaScript snippets on our shards. The migration system was miserable to work with, and it was hard to debug the code on stage in a way that would let us anticipate whatever might be on prod...
config_yml · about 6 years ago
rails db:migrate

It's always one of the first things I miss when I have to work on a non-Rails codebase.
louthy · about 6 years ago
We use the Redgate SQL Compare tools [1] to compare our new schema to our old one and auto-apply the diffs to the production DB (this is done automatically by our deployment process).

To reduce the chance of error, we don't destroy columns or tables.

Our application then has an update step that runs on startup for any data migrations (or new data additions), and then updates a version number stored in the DB. Data migrations are super rare.

This all means we can migrate from any past version to the latest one, because we have all previous schemas stored in git and n update functions in our app that can walk the versions.

It's worked reliably for over a decade and is pretty much entirely pain-free.

[1] https://www.red-gate.com/products/sql-development/sql-compare/
Myrmornis · about 6 years ago
Django. Django generates the schema migrations automatically from the changed table schema definitions that the developer used when testing their branch, and those rarely cause problems. Data migrations need to be tested against staging DBs with realistic data. But neither is really a major pain point: individual developers create and commit the migration files while preparing their branches for review.
tln · about 6 years ago
Migrate during continuous deployment.

We use a staging environment, so CD-deployed migrations are always run at least once before running on production.

Migrations have to be written so that the currently deployed code will work after the migration. E.g., to rename a column: add + copy, deploy, then drop the old column in a second migration.
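The rename-in-two-migrations pattern can be sketched end to end with SQLite (table and column names are hypothetical). SQLite before 3.35 lacks DROP COLUMN, so the second migration is shown as a table rebuild, which is the portable form of the same step:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO accounts (id, email) VALUES (1, 'a@example.com')")

# Migration 1 (before the deploy): add the new column and copy the data.
# Currently deployed code keeps reading/writing `email` and still works.
conn.execute("ALTER TABLE accounts ADD COLUMN contact_email TEXT")
conn.execute("UPDATE accounts SET contact_email = email")

# ...deploy the code that uses `contact_email`...

# Migration 2 (after the deploy): drop the old column by rebuilding.
conn.executescript("""
    CREATE TABLE accounts_new (id INTEGER PRIMARY KEY, contact_email TEXT);
    INSERT INTO accounts_new SELECT id, contact_email FROM accounts;
    DROP TABLE accounts;
    ALTER TABLE accounts_new RENAME TO accounts;
""")

cols = [r[1] for r in conn.execute("PRAGMA table_info(accounts)")]
# cols == ['id', 'contact_email']
```

At every point, whichever code version is live sees a schema it can work with; that is what makes the migration safe to run mid-deployment.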
sylvain_ · about 6 years ago
There is a great write-up from Stripe explaining their process for database migrations:

- Dual writing to the existing and new tables to keep them in sync.
- Changing all read paths in the codebase to read from the new table.
- Changing all write paths in the codebase to only write to the new table.
- Removing old data that relies on the outdated data model.

https://stripe.com/fr/blog/online-migrations
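The dual-write phase of that process can be sketched with SQLite (the tables, the plan/tier mapping, and the function are hypothetical stand-ins; Stripe's post describes the approach at production scale):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subs_old (id INTEGER PRIMARY KEY, plan TEXT)")
conn.execute(
    "CREATE TABLE subs_new (id INTEGER PRIMARY KEY, plan TEXT, tier INTEGER)")

def create_subscription(conn, sub_id, plan):
    """Phase-1 write path: dual-write to both tables so they stay in sync
    while the read paths are migrated over to subs_new."""
    with conn:  # one transaction, so the two tables cannot diverge
        conn.execute("INSERT INTO subs_old (id, plan) VALUES (?, ?)",
                     (sub_id, plan))
        conn.execute(
            "INSERT INTO subs_new (id, plan, tier) VALUES (?, ?, ?)",
            (sub_id, plan, 1 if plan == 'basic' else 2))

create_subscription(conn, 1, 'basic')
old = conn.execute("SELECT plan FROM subs_old WHERE id = 1").fetchone()
new = conn.execute("SELECT plan, tier FROM subs_new WHERE id = 1").fetchone()
# old == ('basic',), new == ('basic', 1)
```

Once all reads come from `subs_new`, the dual write is removed and `subs_old` can be retired.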
jakswa · about 6 years ago
It depends on the database, in my opinion. Each one has quirks to be mindful of, depending on the amount of data you're migrating/touching.

For example, if you accidentally put a 'default' on a column when you add it in Postgres, it will lock the entire table while it rewrites every row, inserting that default value.

Another common Postgres blunder is creating indexes on big tables without using 'concurrently'. This also locks the table for writes, and you'll have a bad time.
sebringj · about 6 years ago
The one pain is having to manually decide on a cut-off point for migration scripts, meaning when to start fresh from a single schema and restart the migrations. This is basically free food for some incubator at HN, so if they are going to make some breakthrough product, make it so I never have to worry about that, and it will be worth me posting this.
Foober223 · about 6 years ago
On my team we use numbered SQL scripts. We restore a production DB on our local boxes; this is the starting point. The numbered scripts are executed against it, so the actual deployment is being tested out locally all the time. This is dead reliable, and handles tricky data transformations that would fail under a declarative/calculated approach.
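A data transformation of the kind described here, one that a declarative schema differ cannot derive because the populate rule lives outside the schema, might look like this (hypothetical example in SQLite; in this workflow it would be one of the numbered SQL scripts):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, full_name TEXT)")
conn.executemany("INSERT INTO people (id, full_name) VALUES (?, ?)",
                 [(1, "Ada Lovelace"), (2, "Alan Turing")])

# A schema differ could add these columns, but it cannot know how to
# populate them; that rule has to live in the migration itself.
conn.execute("ALTER TABLE people ADD COLUMN first_name TEXT")
conn.execute("ALTER TABLE people ADD COLUMN last_name TEXT")
conn.execute("""
    UPDATE people SET
        first_name = substr(full_name, 1, instr(full_name, ' ') - 1),
        last_name  = substr(full_name, instr(full_name, ' ') + 1)
""")

rows = conn.execute(
    "SELECT first_name, last_name FROM people ORDER BY id").fetchall()
# rows == [('Ada', 'Lovelace'), ('Alan', 'Turing')]
```

Running such a script against a restored production copy, as the commenter describes, is what catches the rows the transformation rule did not anticipate.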
bahador · about 6 years ago
I've used Flyway[0] at my current and previous shops. It seems to work well.

[0] https://flywaydb.org/
GordonS · about 6 years ago
We use DbUp, which is basically a library you use to wrap SQL scripts in a console app.

This gives you full control over your migrations while still being incredibly simple: it's just SQL!

For views, sprocs, and functions, we don't use sequential migration scripts, instead opting for idempotent creation. This also makes it easy to see the evolution of views etc. when looking in source control.

In the past I've used Entity Framework and Entity Framework Core migrations, and hated them both. I'll never be a fan of codegen, but aside from that, they sometimes generated wrong code that had to be manually adjusted, and you also quickly end up with hundreds of scripts.

I like DbUp very much.
snicker7 · about 6 years ago
My team works with five database systems containing more or less the same data, because we "switch" to a new DB every year without investing enough in migrating off the old one.
was_boring · about 6 years ago
I don't think there is a way to change the database without slowing down development. After all, the data is the most valuable thing you have.

Beyond that, I tend to follow the same general principles RainforestQA presented in 2014: https://www.rainforestqa.com/blog/2014-06-27-zero-downtime-database-migrations/
stock_toaster · about 6 years ago
We use migrate[1].

[1]: https://github.com/golang-migrate/migrate
tluyben2 · about 6 years ago
We have been doing migrations with our internal ORM for many years now; first it was written in Perl, then ported to Java, later to PHP, and after that to C# (the past ~10 years). It does up/down migrations with safeguards in place. The underlying algorithm/heuristics have been working fine for around 20 years on around 1000 projects (mostly client consultancy) in that time. Lessons learned (opinionated/IMHO, no attempt at sweeping wisdom):

a) Database changes should be in code, so there is no discrepancy between code and DB.

b) Your tooling (in this case our ORM) should figure out how to get from the current state to the new state; it should not be a manual job (because you will make mistakes).

c) Migrations, including rollback, should be fully automated (we only had issues when people tried to do clever things manually).

d) Have common-sense safeguards in place; do not try to be too clever to save a few MBs of disk space. For instance, deploy will fail if you attempt to remove a column; you can only do that manually.
stephenr · about 6 years ago
We used to use dbschema (https://pypi.org/project/dbschema/) to apply migrations generated by SchemaSync (http://mmatuson.github.io/SchemaSync/), using a helper script to simplify capturing a migration from a known base of previous migrations.

After submitting a number of bug-fix and feature patches to the upstream projects, I ended up writing an (IMO better) tool to apply the migration scripts (https://bitbucket.org/koalephant/mallard). We currently still rely on SchemaSync for the generation part, but the new tool has been more reliable, and it's ultimately a tool for developers who review the generated SQL anyway; the tool that applies them needs to run automated on remote environments.
Guillaume86 · about 6 years ago
.NET/MSSQL shop here.

Our source-controlled files contain the database definition:

- a dacpac file generated by SSDT for the schema
- SQL scripts for seed data (which changes a lot in our case), generated by a custom CLI tool (uses bcp.exe)

We have a "dev" database that is basically the "master". We have a CLI tool to generate the database definition files from the dev database. When someone makes/needs changes in the database, they make them in the dev DB and then call the CLI tool to update the source files.

When publishing a version, the dacpac and data script are included. Migrations are created on demand (no need to go through each version; skipping versions is common). Our migration tool creates reference databases of the source and target versions, then generates a schema diff script with SSDT and a data diff script with ApexSQL Data Compare. We can review/edit the migration scripts before applying them.

It works well enough that we run automated migrations at night (with a backup before, and a restore after in case of issues).
sqldba · about 6 years ago
Scripts.

And I hate to tell you, but despite what you'll read on HN, every single company I've seen and every single vendor I've dealt with is using scripts.

Just simple scripts, in source control, amended by hand, one hopes applied as part of a CI system, often without any kind of version numbering (so once it's in the database you don't know where it came from), and with no rollback.

Declarative migrations do exist. If it's in .NET then it's invisible to me, but I imagine it's pretty rare. Using SSDT (Microsoft) for tooling is possible, but it has so many edge cases and is so poorly documented that coming up with an architecture requires the kind of expert of whom there may literally be only a dozen in the world, so it isn't done outside of demos that fail and get thrown out the second you try to implement them.
bungie4 · about 6 years ago
I recently had to update three DB machines with the updated schemas from one and the data from a third, then copy the finished version over to the first and finally the third.

It turns out Visual Studio has a 'diff' generator for both schema and data. Holy hell, that worked a treat.
namelosw · about 6 years ago
We use FluentMigrator for .NET, but basically the workflow is similar to Rails migrations. It works fine and has not slowed us down.

What does slow us down, and is related to this topic, is cross-database migration. We have a bunch of microservices in different codebases. Sometimes we need to adjust bounded contexts, which moves responsibilities from service to service, and we've found it much harder to migrate between the different databases behind those services. It's either hard to keep track of in the same repo and/or too slow to migrate a large volume of data across hosts. In the end we wrote some optimized SQL to do it. Maybe a monorepo could help, but this doesn't happen often, and we haven't started trying it yet.
tshanmu · about 6 years ago
We have had good success with Liquibase [0] and Flyway [1].

[0] http://www.liquibase.org/

[1] https://flywaydb.org/
systematical · about 6 years ago
We use https://github.com/cakephp/phinx, which works pretty well. It just runs on deployment. Thankfully I've never had to try the rollback feature.
sheeshkebab · about 6 years ago
- Flyway for relational databases, and
- a Flyway-like, Go-based homegrown process for Cassandra (which includes support for executing Go scripts for data migrations, etc.)

The above work pretty well, for many, many microservices and continuous deployments all the way to prod.
tylerFowler · about 6 years ago
I feel like Terraform would be a fantastic tool for this, but I've never found the time to look into what it would take. Less so for complex data changes, maybe (or maybe not), but certainly for things like basic schema changes.
sebazzz · about 6 years ago
We use Redgate SQL Source Control. However, we also deploy our applications using MSDeploy, and SQL Source Control has no good solution for that, so we stitch the migration scripts together manually. Also, no matter how well you control it, your database gets out of sync with the migrations after a while. We also have an integration test that restores a base database and applies the migrations of the current branch on top of it.

We're going to research DACPACs for deploying databases. We hope it will be better.
aerojoe23 · about 6 years ago
.NET has Entity Framework, an ORM that has migrations. It isn't problem-free, but we found workable solutions to our issues.

By default, EF will throw an error if the model (in code) and the database are out of sync. There is a setting that tells EF not to care about the version, but then you have to take responsibility for making sure the old code will run against the new DB version.

Most of our changes are adding a new column or table, so the old code doesn't rely on it and everything goes fine anyway.
speedplane · about 6 years ago
Is this question asking how to move from one schema to another, or from one database backend to another? Most answers here focus on schema changes, but I'm not sure that was the question.

Schema changes are relatively straightforward; there are plenty of tools that can help. Changing the database backend is a different story altogether. There are so many unknown unknowns when changing databases that it's generally best to avoid it if you can.
lmiller1990 · about 6 years ago
I work at a Perl shop and we use Sqitch [1]. It helps manage your migrations; you write three raw SQL scripts per change: deploy, verify, and revert. The tool works with any DB or backend, since it only handles the schema.

It does NOT integrate with your server framework, so you need to figure out an ORM solution independently of Sqitch.

[1] https://sqitch.org/
ajcodez · about 6 years ago
I wrote my own schema-as-code library and then switched back to normal migrations with no down method. It takes some care not to delete anything until it's definitely never needed again.

https://www.ajostrow.me/articles/how-to-run-migrations
quickthrower2 · about 6 years ago
At a previous shop we used Entity Framework migrations in .NET, so migrations were applied by Octopus Deploy.

At this place we do it manually, but because of how the app is architected, migrations are rare. When we do migrate, we try to make the code compatible with both the before and after schemas.
based2 · about 6 years ago
https://github.com/goldmansachs/obevo

https://www.schemacrawler.com/
wesleytodd · about 6 years ago
If anyone is looking for a JavaScript-specific solution, I maintain a module originally written (and abandoned) by TJ.

$ npm i migrate

https://www.npmjs.com/package/migrate
wenbin · about 6 years ago
Django and other modern web development frameworks handle DB migrations pretty well.

$ python manage.py migrate
paulddraper · about 6 years ago
We use dbmate [1].

SQL migration management with in-database version tracking.

Super simple, but it works for us.

[1] https://github.com/amacneil/dbmate
pier25 · about 6 years ago
We use https://github.com/rubenv/sql-migrate for Go.

You basically write SQL queries for up and down.
ilaksh · about 6 years ago
For my current project I have not needed to migrate data.

I have a system called timequerylog and a tool, tql-cli, where data is just appended to JSONL files organized by key, date, and time.
gregmac · about 6 years ago
Ooh, this is a favorite topic of mine, and I'm of the opinion that there's no clear best solution (possibly a good market opportunity here), only a series of trade-offs.

----

In one project, we use DBGhost (for MS SQL Server). It's like Redgate SQL Compare, but in deployable, self-contained executable form. It does a complete schema sync, so internally we run it on every build, and externally it can upgrade from every previous release, without the pesky "in between" evolution you tend to get with ordered migration scripts. It's run as part of our upgrade process for every app version, and our deploy package is built from the 'initial database create' SQL script in source control.

To make a schema change such as adding a new column, you modify the relevant CREATE TABLE script, commit it to source control, and that's it.

We also use hand-built pre- and post-deploy scripts for anything tricky that can't be automated (renaming a column, copying/converting data from one column to another, etc.). Importantly, these scripts are idempotent (e.g.: if (old_column_exists) { start transaction; create new column; copy+transform data; drop old_column; commit transaction; }). We generally avoid major changes like this as much as we can, but it's possible when necessary. We also have DBGhost configured not to drop anything (to avoid mistakes, or dropping customer customizations that they make even though our support contract explicitly says not to), and instead write those by hand.

This process has been through many dozens of customer-facing releases, including for on-premise customers, over several years, and 'just works'. DBGhost is a clunky app, but our interaction with it is now entirely scripted (via either build or deployment scripts), and the compare-engine part of it is actually very good.

----

In another pretty simple app (which runs on our cloud infrastructure only, with 'production' deployments mirrored in a couple of different regions, and usually a single 'dev' deployment, though sometimes there are other temporary ones), we opted for just an idempotent, hand-written script, executed as part of deployment. It has 'create table if not exists' statements for setting up from scratch, and it also has the transforms for changes made over time ('create index if not exists', 'drop column if exists', etc.). We periodically remove the transform statements to clean it up once all deployments have been updated past that point.

Even though it's manual, it's actually quite easy to maintain, so long as you're careful about keeping it idempotent. The nice part is that it typically gets deployed to the dev infrastructure multiple times during the course of working on something, so if there's a problem it becomes obvious very quickly when the deployment fails.

----

There's also another app that uses the more traditional ordered migration files, which I liked at first, but over time I find it annoying. Deploying from scratch installs a really old initial database schema, then proceeds to apply dozens of modifications, many of which overwrite previous modifications.

----

I've also worked on an ordered-migrations app where there was a separate 'create' step for initial creation (instead of running through each migration). The first time I deployed from scratch, I found a bunch of problems where the create script wasn't 100% synchronized with the migrations, but was synchronized enough that it wasn't possible to 'fix' by running each migration (one of the early ones failed). The only fix was to go through by hand and manually run the relevant migration files. I'm sure there are better practices to help prevent this (e.g. some compare step at build time), but this happened years ago and it still sticks with me as a terrible way to do migrations.
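The idempotent-script style described in the second section can be sketched with SQLite (a hypothetical widgets table; a server database would use its own 'if exists' guards or catalog queries). The key property is that the same script converges a brand-new database and an already-migrated one to the identical schema:

```python
import sqlite3

def column_exists(conn, table, column):
    """Check the table's live columns via PRAGMA table_info."""
    return column in [r[1] for r in conn.execute(
        f"PRAGMA table_info({table})")]

def migrate(conn):
    """Idempotent migration: safe to run on every deploy, whether the
    target database is brand new or already partially migrated."""
    conn.execute("""CREATE TABLE IF NOT EXISTS widgets (
        id INTEGER PRIMARY KEY, label TEXT)""")
    conn.execute(
        "CREATE INDEX IF NOT EXISTS idx_widgets_label ON widgets(label)")
    # SQLite has no ADD COLUMN IF NOT EXISTS, so guard it by hand.
    if not column_exists(conn, "widgets", "color"):
        conn.execute("ALTER TABLE widgets ADD COLUMN color TEXT")
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # second run is a no-op
```

As the commenter notes, the guards are the whole game: every statement must either be conditional or be harmless to repeat.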
thrower123 · about 6 years ago
SQL scripts. It is painful.
ben_jones · about 6 years ago
Poorly.
mbostleman · about 6 years ago
SSDT / DACPAC
wutman · about 6 years ago
This doesn't fit everywhere, but... 1) Never modify existing tables (okay, almost never). 2) Add new tables at will. 3) Pay a consultant to write PL/SQL that glues it all together, and blame them for any and all issues.