I'm appalled at the way some people here receive an honest postmortem of a human fuck-up. The top 3 comments, as I write this, can be summarized as "no, it's your fault and you're stupid for making the mistake".<p>This is not good! We don't want to scare people into writing fewer of these. We want to encourage people to write more of them. An MBA-style "due to a human error, we lost a day of your data, we're tremendously sorry, we're doing everything in our power yadayada" isn't going to help anybody.<p>Yes, there are all kinds of things they could have done to prevent this from happening. Yes, some of the things they did (or didn't do) were clearly mistakes that a seasoned DBA or sysadmin would not make. Possibly they aren't seasoned DBAs or sysadmins. Or they are, but they still made a mistake.<p>This stuff happens. It sucks, but it still does. Get over yourselves and wish these people some luck.
> Computers are just too complex and there are days when the complexity gremlins win.<p>I'm sorry for your data loss, but this is a false and dangerous conclusion to make. You can avoid this problem.
There are good suggestions in this thread, but I suggest you use Postgres's permission system to REVOKE the ability to DROP anything on production from everyone except a very special user that can only be logged in by a human, never a script.<p>And NEVER run your scripts or application servers as a superuser. This is a dangerous antipattern embraced by many an ORM and library. Grant CREATE and DROP to dedicated non-superuser roles instead.
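<p>In Postgres the ability to DROP a table or database follows ownership (or superuser status) rather than a grantable privilege, so the separation looks roughly like this. A minimal sketch with made-up role, database and password names: the application role can read and write data but owns nothing, and only a human-only role owns the schema:<p><pre><code> psql -d app <<'SQL'
-- application role: data access only, no superuser, owns no objects
CREATE ROLE app_rw LOGIN PASSWORD 'change-me' NOSUPERUSER NOCREATEDB NOCREATEROLE;
GRANT CONNECT ON DATABASE app TO app_rw;
GRANT USAGE ON SCHEMA public TO app_rw;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_rw;

-- break-glass role for humans: owns the schema (and should own the tables),
-- so only it can DROP them
CREATE ROLE dba_break_glass LOGIN PASSWORD 'change-me' NOSUPERUSER;
ALTER SCHEMA public OWNER TO dba_break_glass;
SQL
</code></pre>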
You have to put a lot of thought into protecting and backing up production databases, and backups are not good enough without regular testing of recovery.<p>I have been running Postgres in production supporting $millions in business for years. Here's how it's set up. These days I use RDS in AWS, but the same is doable anywhere.<p>First, the primary server is configured to send write ahead logs (WAL) to a secondary server. What this means is that before a transaction completes on the master, the slave has written it too. This is a hot spare in case something happens to the master.<p>Secondly, WAL logs will happily contain a DROP DATABASE in them, they're just the transaction log, and don't prevent bad mistakes, so I also send the WAL logs to backup storage via WAL-E. In the tale of horror in the linked article, I'd be able to recover the DB by restoring from the last backup, and applying the WAL delta. If the WAL contains a "drop database", then some manual intervention is required to only play them back up to the statement before that drop.<p>Third is a question of access control for developers. Absolutely nobody should have write credentials for a prod DB except for the prod services. If a developer needs to work with data to develop something, I have all these wonderful DB backups lying around, so I bring up a new DB from the backups, giving the developer a sandbox to play in, and also testing my recovery procedure, double-win. Now, there are emergencies where this rule is broken, but it's an anomalous situation handled on a case by case basis, and I only let people who know what they're doing touch that live prod DB.
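<p>To make the recovery part concrete, here's a rough sketch of the WAL-E side (paths, the timestamp and the credential setup via envdir are placeholders; RDS gives you the same point-in-time restore as a managed feature):<p><pre><code> # postgresql.conf on the primary: ship every WAL segment off-box
#   wal_level = replica
#   archive_mode = on
#   archive_command = 'wal-e wal-push %p'

# periodic base backup
wal-e backup-push /var/lib/postgresql/12/main

# disaster recovery: fetch the last base backup...
wal-e backup-fetch /var/lib/postgresql/12/main LATEST
# ...then replay WAL only up to just before the bad statement:
#   restore_command = 'wal-e wal-fetch "%f" "%p"'
#   recovery_target_time = '2020-09-06 22:40:00'
</code></pre>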
> after a couple of glasses of red wine, we deleted the production database by accident<p>> It’s tempting to blame the disaster on the couple of glasses of red wine. However, the function that wiped the database was written whilst sober.<p>It was _written_ then, but you're still admitting to the world that your employees do work on production systems after they've been drinking. Since they were working so late, one might think this was emergency work, but it says "doing some late evening coding". I think this really highlights the need to separate work time from leisure time.
I had a narrow escape once doing something fancy with migrations.<p>We had several MySQL string columns stored as LONGTEXT in our database, but they should have been varchar(255) or so. So I was assigned to convert these columns to their appropriate size.<p>Being the good developer I was, I decided to download a snapshot of the prod database locally and check the maximum string length we had for each column via a script. The script then generated a migration query that would alter column types to match their maximum used length, with varchar(255) as the minimum.<p>I tested that migration and everything looked good, it passed code review and was run on prod. Soon after, we started getting complaints from users that their old email texts had been truncated. I then realized the stupidity of the whole thing: the local dump of the production database always wiped many columns clean for privacy, like the email body column. So the script thought it had a max length of 0 and decided to convert the column to varchar(255).<p>I realize the whole thing may look incredibly stupid; that's only because the naming of the DB columns was in a foreign European language, so I didn't even know the semantics of each column.<p>Thankfully my seniors managed to restore that column and took the responsibility themselves, since they had passed the review.<p>We still did fix those unusually large columns, but this time with simple, duplicated ALTER queries for each of those columns instead of a fancy script.<p>I think a valuable lesson was learned that day: don't rely on hacky scripts just to reduce some duplicate code.<p>I now prefer clarity and explicitness when writing such scripts instead of trying to be too clever and automating everything.
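<p>For what it's worth, the boring explicit version is hard to get wrong. A rough sketch (table, column and host names are made up), with the length check run against the real production data rather than a scrubbed dump:<p><pre><code> # check real maximum lengths on a production replica (read-only), not on a scrubbed dump
mysql --host=prod-replica my_db -e "SELECT MAX(CHAR_LENGTH(city)) FROM users;"

# then one explicit, reviewed ALTER per column, no generated DDL
mysql my_db <<'SQL'
ALTER TABLE users MODIFY city    VARCHAR(255);
ALTER TABLE users MODIFY country VARCHAR(255);
-- genuinely long columns (e.g. email bodies) are left alone
SQL
</code></pre>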
Just my 2 cents. I run a small software business that involves a few moderately-sized databases.
The day I moved from a fully managed hosting to a Linux VPS, I have crontabbed a script like this to run several times a day:<p><pre><code> for db in `mysql [...] | grep [...]`
do
mysqldump [...] > $db.sql
done
git commit -a -m "Automatic backup"
git push [backup server #1]
git push [backup server #2]
git push [backup server #3]
git gc
</code></pre>
The remote git repos are configured with <i>denyNonFastForwards</i> and <i>denyDeletes</i>, so regardless of what happens to the server, I have a full history of what happened to the databases, and can reliably go back in time.<p>I also have a single-entry-point script that turns a blank Linux VM into a production/staging server. If your business is more than a hobby project and you're not doing something similar, you are sitting on a ticking time bomb.
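<p>For reference, those two settings are plain git config on each backup remote (the repo path here is made up):<p><pre><code> # on each backup remote: make the history append-only
git -C /srv/backups/db.git config receive.denyNonFastForwards true
git -C /srv/backups/db.git config receive.denyDeletes true
</code></pre>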
Happens to all of us. Once I needed logs from the server. The log file was a few gigs and still in use, so I carefully duplicated it, grepped just the lines I needed into another file and downloaded the smaller file.<p>During this operation, the server ran out of memory—presumably because of all the files I'd created—and before I knew it I'd managed to crash 3 services and corrupt the database—which was also on this host—on my first day. All while everyone else in the company was asleep :)<p>Over the next few hours, I brought the site back online by piecing commands together from the `.bash_history` file.
This happened to me (someone in my team) a while ago, but with Mongo. The production database was ssh-tunneled to the default port on the guy's computer, and he ran tests that cleaned the database first.<p>Now... our scenario was such that we could NOT lose those 7 hours, because each customer record lost meant a $5000 USD penalty.<p>What saved us is that I knew about the oplog (the equivalent of MySQL's binlog), so after restoring the backup I isolated the last N hours of lost writes from the oplog and replayed them on the database.<p>Lesson learned and a lucky save.
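<p>For anyone in the same spot, the replay looks roughly like this with MongoDB's standard tooling (the timestamp and paths are invented; restoring the base backup itself is not shown):<p><pre><code> # dump the oplog from the replica set before it rolls over
mongodump -d local -c oplog.rs -o /tmp/oplogdump

# mongorestore replays an oplog.bson found at the top of the dump dir,
# stopping just before the given timestamp (the destructive op)
mkdir -p /tmp/oplog_replay
cp /tmp/oplogdump/local/oplog.rs.bson /tmp/oplog_replay/oplog.bson
mongorestore --oplogReplay --oplogLimit 1599430500:1 /tmp/oplog_replay
</code></pre>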
>Note that host is hardcoded to localhost. This means it should never connect to any machine other than the developer machine. We’re too tired to figure it out right now. The gremlins won this time.<p>Obviously, somehow the script ran on the database host.<p>Some practices I've followed in the past to keep this kind of thing from happening:<p>* A script that deletes all the data can never be deployed to production.<p>* Scripts that alter the DB rename tables/columns rather than dropping them (you write a matching rollback script; rough sketch below), for at least one schema upgrade cycle. You can always restore from backups, but this can make rollbacks quick when you spot a problem at deployment time.<p>* The number of people with access to the database in prod is severely restricted. I suppose this is obvious, so I'm curious how the particular chain of events in TFA happened.
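<p>The rename-instead-of-drop pattern from the second point, as a rough sketch (connection string, table and column names are made up):<p><pre><code> # forward migration: rename rather than drop, keep the data around for one release cycle
psql "$PROD_URL" -c 'ALTER TABLE orders RENAME COLUMN status TO status_deprecated;'

# matching rollback, written at the same time
psql "$PROD_URL" -c 'ALTER TABLE orders RENAME COLUMN status_deprecated TO status;'

# only after the next successful release: actually drop it
psql "$PROD_URL" -c 'ALTER TABLE orders DROP COLUMN status_deprecated;'
</code></pre>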
Quote: "Note that host is hardcoded to localhost. This means it should never connect to any machine other than the developer machine. Also: of course we use different passwords and users for development and production. We’re too tired to figure it out right now.<p>The gremlins won this time."<p>No they didn't. Instead one of your gremlins ran this function directly on the production machine. This isn't rocket science, just the common sense conclusion. Now it would be a good time to check those auditing logs / access logs you're suppose to have them enabled on said production machine.
This is bad operations.<p>That it happened means there were many things wrong with the architecture, and summing up the problem as “these things happen” is irresponsible. Most importantly, your response to a critical failure needs to be in the mindset of figuring out how you would have prevented the error without knowing it was going to happen, and doing so in several redundant ways.<p>Fixing the specific bug does almost nothing for your future reliability.
> Computers are just too complex and there are days when the complexity gremlins win.<p>Wow. But then again it's not like programmers handle dangerous infrastructure like trucks, military rockets or nuclear power plants. Those are reserved for adults
Are you sure it was the production database that was affected?<p>If you are not sure how a hard coded script that was targeting localhost affected a production database, how do you know you were even viewing the production database as the one dropped?<p>Maybe you were simply connected to the wrong database server?<p>I’ve done that many times - where I had an initial “oh no” moment and then realized I was just looking at the wrong thing, and everything was ok.<p>I’ve also accidentally deployed a client website with the wrong connection string and it was quite confusing.<p>In an even more extreme case: I had been deploying a serverless stack to the entirely wrong aws account - I thought I was using an aws named profile and I was actually using the default (which changed when I got a new desktop system). I.e. aws cli uses the --profile flag, but serverless cli uses the --aws-profile flag. (Thankfully this all happened during development.)<p>I now have deleted default profiles from my aws config.
The lack of seriousness/professionalism in the postmortem seemed odd to me too. So, okay, what is this site?<p>> KeepTheScore is an online software for scorekeeping. Create your own scoreboard for up to 150 players and start tracking points. It's mostly free and requires no user account.<p>And also:<p>> Sat Sep 5, 2020, Running Keepthescore.co costs around 171 USD each month, whilst the revenue is close to zero (we do make a little money by building custom scoreboards now and then). This is an unsustainable situation which needs to be fixed – we hope this is understandable! To put it another way: Keepthescore.co needs to start making money to continue to exist.<p><a href="https://keepthescore.co/blog/posts/monetizing-keepthescore/" rel="nofollow">https://keepthescore.co/blog/posts/monetizing-keepthescore/</a><p>So okay, it's basically a hobby site, for a service whose users probably won't mind losing 7 hours of data much, and that has few if any paying customers.<p>That context makes it make a bit more sense.
This post is embarrassing. "Yeah, we were drinking and accidentally nuked the prod DB. Not sure why. Shit happens!" Who would read this and think they should trust this company? Any number of protections could have been taken to prevent this, and production access in any state other than fully alert and attentive shouldn't happen unless it is absolutely necessary for emergency reasons.
I love this post. This sort of thing happens to everyone, most people just are not willing to be so open about it.<p>I was once sshed into the production server, cleaning up some old files that got created by an errant script, one of which was a file named '~'. So, to clean it up, I type `rm -rf ~`.
Ah man, these things happen. One of our developers - very new to elastic - was asked to modify some indexes. Folks were a bit too busy to help or heading out on holiday. One stack overflow answer later... delete and recreate it... and she was off to the races. When the test was tried, it looked like things still worked. A quick script did the same to stage and prod, in both data centers. Turns out that is not a great way to go about it. It deleted the documents. We got lucky, as we still had not killed off the system we were migrating off of and it only took three days of turn and burn to get the data back on the system.<p>So many lessons learned that day. I trust her with the master keys at this point, as nobody is more careful with production than her now. :)
RDS is very much worth paying for, for this type of issue (in many cases, obviously $60 to multiple thousands a month isn’t great for everything).<p>Otherwise having a binlog-based backup (or WAL, I guess, but I don’t know PG that well) is critical.<p>The key point is that they provide point-in-time recovery possibilities (and even the ability to rewrite history).
I had a client who had prod database access due to it being hosted internally. They called up saying "their system is no longer working".<p>After about an hour of investigation, I find one of the primary database tables is empty - completely blank.<p>I then spend the next hour looking through code to see if there's any chance of a bug that would wipe their data and couldn't find anything that would do that.<p>I then had to make "the phone call" to the client saying that their primary data table had been wiped and I didn't know what we did wrong.<p>Their response: "Oh I wrote a query and accidentally did that, but thought I stopped it".
At my job, the company computers are configured to send “localhost” to the local company DNS servers, which happily reply with the IP address of the last machine that got a DHCP lease with the hostname “localhost”. Which happens often. Needless to say, our IT dept isn’t the best.
If you are using postgres, configure it to keep the WAL logs for at least 24 hours.<p>They could have used point-in-time recovery to not lose any data from this at all.
Things like these happen, and we should be compassionate towards those involved.<p>Often small changes to the structure drastically reduce the probability of stuff like this happening.<p>E.g. we use Docker to set up test and dev databases and seed them from (processed) dumps. When we need to clean our database, we simply tear down the Docker container. I.e. we never need to implement destructive database cleanup at all, which eliminates a structure that could potentially fail.<p>Having policies about not accessing the production database directly (and allowing the extra time to build tooling around that policy), good preview/staging environments, etc. are all failure-eliminating structure.
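<p>A throwaway test database along those lines can be as small as this (image tag, port and password are arbitrary):<p><pre><code> # disposable test database: nothing to clean up, just remove the container
docker run --rm -d --name test-db -p 5433:5432 -e POSTGRES_PASSWORD=test postgres:13
# wait for the container to accept connections, then seed from a processed dump
psql "postgresql://postgres:test@localhost:5433/postgres" -f processed_dump.sql
# run the tests ... then throw the whole thing away
docker rm -f test-db
</code></pre>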
I wouldn’t want to be on the wrong side of a lawsuit, defending drunk employees working on production data. What outrageous recklessness. And how imprudent to admit this to the public. Some things are best kept to yourself. No one needs to know that.
We have something similar with AWS Cognito. If a user signs up but doesn't go through with the verification process, there's no setting to say "remove them after X days". So we have to run a batch job.<p>If I screw up one parameter, instead of deleting only unconfirmed users, I could delete all users. I have two redundant checks, first when the query is run to get the unconfirmed users, and then again checking the user's confirmed status before deleting them. And then I check one more time further down in the code for good measure. Not because I think the result will be different, but just in case one of the lines of code is altered somehow.<p>I put BIG LOUD comments everywhere of course. But it still terrifies me.
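<p>The shape of that double-check in AWS CLI terms (the pool id is a placeholder; the "older than X days" check and pagination are omitted). This is a sketch, not the actual job:<p><pre><code> POOL_ID="eu-west-1_EXAMPLE"

# first check: only ever ask Cognito for UNCONFIRMED users
aws cognito-idp list-users --user-pool-id "$POOL_ID" \
    --filter 'cognito:user_status = "UNCONFIRMED"' \
    --query 'Users[].Username' --output text | tr '\t' '\n' |
while read -r username; do
  # second check: re-read the user's status right before deleting
  status=$(aws cognito-idp admin-get-user --user-pool-id "$POOL_ID" \
      --username "$username" --query UserStatus --output text)
  if [ "$status" = "UNCONFIRMED" ]; then
    aws cognito-idp admin-delete-user --user-pool-id "$POOL_ID" --username "$username"
  fi
done
</code></pre>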
Recreating and seeding the test database is totally OK in the RoR world.<p>I think the main reason for this accident is a lack of separation between development and operations.
localhost is an abstraction; it's a network that isn't routable outside your machine... except it's not. It's nothing more than normal TCP traffic, plus a convention telling the OS and other programs that whatever is on that local network should not be routed outside the local computer.<p>There's absolutely nothing stopping anything with access to localhost from routing it anywhere that process wants. It doesn't even take a malicious actor; all kinds of legit programs expose localhost. It's really not something you should rely on for anything except as a signal to other well-behaved programs that you are using the network stack as a machine-local IPC bus.
A very similar thing happened to LivingSocial during their brightest years, but the replication and the backups had failed too.
The oldest valid backup was about a week old.
It took the whole company offline for 2 days. It took a team of 10 people and some extra consultants to come up with a half-baked version of the latest database based on ElastiCache instances, Redis caches and other "not meant to be a backup" services.
It was insane walking into an office that had hundreds of employees and seeing them all gone while we rebuilt this cobbled-together DB.<p>At one point someone called it "good enough", and they basically had to take the customer's word for it if they said they had purchased something and it wasn't there.<p>It was a mess.<p>It was on all the major news outlets, and it was really bad press. In the end, they actually had a massive bump in their sales afterwards.
Everyone went to check out their own purchases and ended up buying something else, and the news was like free ads.<p><a href="https://www.washingtonpost.com/business/capitalbusiness/the-download-livingsocial-goes-down-for-nearly-48-hours-after-critical-database-error/2013/11/15/2012bc4e-4c9b-11e3-9890-a1e0997fb0c0_story.html" rel="nofollow">https://www.washingtonpost.com/business/capitalbusiness/the-...</a>
Hmm, there seem to be some holes in their system. A database might go down for any reason.<p>I also have daily backups, but in addition I write logs of all database actions to disk (locally, and regularly copied off the production server) for the purpose of checking through them if something goes wrong, or having the option to replay them in case something like this happens. So you have your database backups as "save points" and the logs to replay all the "actions" for that day.
The wonderful thing about computers is that they do exactly what they are told to do.<p>The worst thing about computers? They do exactly what they are told to do.
This is my greatest fear when it comes to terraform:<p>> terraform destroy<p>(And either a confirmation or a flag) and everything is deleted.<p>I know you can add some locks but still :/
> Computers are just too complex and there are days when the complexity gremlins win.<p>> However, we will figure out what went wrong and ensure that that particular error doesn’t happen again.<p>How can you say statement 2 just after statement 1? Isn't statement 1 just plain acceptance of defeat?<p>And looking at all the replies here, is this a feel-good thread for the mistakes you made?
I am sorry this happened.<p>> local_db = PostgresqlDatabase(database=database, user=user, password=password, host='localhost', port=port)<p>I am guessing it's this part. Even though the host is hardcoded as "localhost", when you do SSH port-forwarding, localhost might actually be the real production machine.
e.g.<p><pre><code> sudo ssh user@myserverip -L 3333:localhost:3306
</code></pre>
If you keep configuration in the environment (/etc/default/app-name) rather than in the application package, it's nearly impossible to make this mistake (especially with proper firewall rules). You can even package your config as a deb and keep it, encrypted, in version control.
I once replaced a bunch of customer photos with a picture of Spock as part of a test in my first week on the job. The DB admin had just overwritten a Salesforce dev DB from production, and a previous developer had hardcoded the IP address of production in the code of a script somewhere.
> Note that host is hardcoded to localhost. This means it should never connect to any machine other than the developer machine.<p>Just to help with the postmortem:<p>1) “localhost” is just a loopback to whatever machine you’re on<p>2) the user and pw are pulled from config<p>So someone was running this from the production server or had the production DB mapped to localhost and ran it with a production config for some reason (working with prod data maybe). The hard coding to localhost will only ensure that it works for the machine it’s called on - in this case the prod server.<p>There's a wide spread of things you might do to avoid this in the future; the main recommendations I’d have are:<p>1) only put production artifacts on prod<p>2) limit developer access to prod data<p>Best of luck
While I'm very sympathetic to "we accidentally nuked our prod DB" because, let's admit it, we've all been there at some point, I'm also a bit baffled here, because I don't think that the problem lies with too much wine, Postgres permissions or scripts on localhost. The problem is that recreating a database by FIRST dropping all tables and THEN recreating them is like deliberately inviting those gremlins in.<p>But, as I said, that happens and blaming doesn't fix anything, so, for the future (rough sketch after the list):<p>1. make a temporary backup of your database
2. create tables with new data
3. drop old tables
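<p>A rough sketch of that order of operations (connection string and table names are placeholders):<p><pre><code> # 1. temporary backup first
pg_dump "$DB_URL" -Fc -f /tmp/pre_migration.dump

# 2. build and fill the new tables before touching the old ones
psql "$DB_URL" <<'SQL'
BEGIN;
CREATE TABLE players_new (LIKE players INCLUDING ALL);
INSERT INTO players_new SELECT * FROM players;
ALTER TABLE players RENAME TO players_old;
ALTER TABLE players_new RENAME TO players;
COMMIT;
SQL

# 3. drop the old tables only once everything checks out
psql "$DB_URL" -c 'DROP TABLE players_old;'
</code></pre>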
"at around 10:45pm CET, after a couple of glasses of red wine, we deleted the production database by accident".
That's not an accident, guys...<p>Stop drinking and deploying to production, especially late in the evening.
A question to the DBA experts from a developer: is there a way in MySQL and Postgres to configure a log specifically for destructive SQL queries so that it's easier to investigate a situation like this? I.e. to log most queries except for usual SELECT/INSERTs.<p>Also, @oppositelock pointed out that WAL would contain the destructive query too. How does one remove a single query from a WAL for replay or how does one correctly use WAL to recover after a 23-hour old backup was restored?<p>Finally, how does one work on the WAL level with managed DBs on AWS or DO if they block SSH access?
I totally sympathize with you and yours; I've made sphincter-clenching mistakes a handful of times during my 20 years of experience.<p>This is an object lesson that understanding human psychology is actually a huge part of good architecture. Automating everything you do in production with a script that is QA-tested prior to use is the best way to avoid catastrophic production outages.<p>It does take a bit longer to get from start to finish, and younger devs often try to ignore it, but it is worth putting a layer of oversight between your employees and your source of revenue.
Could someone explain more what caused the prod wipe? The snip here indicates it is using a 'dev' credential (it is a different pass than prod right?) - how does a db connection occur at all?
This line's the winner for me : "Thankfully nobody’s job is at risk due to this disaster. The founder is not going to fire the developer – because they are one and the same person."
I did the same thing once by accident and thankfully only lost 1 hour of data. The single lowest point of my career, and thinking about the details of that day makes my stomach sink even now as I type this.<p>I ran a rake db schema dump command of some kind, and instead of returning the schema of the database, it decided to just completely nuke my entire database. Instantly. It's very easy to fuck up, so cover your ass gents: back up often and run your restores periodically to make sure you can actually do them in case of an emergency.
I run a replicated ClickHouse setup; ClickHouse uses ZooKeeper to enable replication.
The ZooKeeper instance was not replicated; it was a single node.
The server on which ZooKeeper was running ran out of disk space and ClickHouse went into read-only mode.
Luckily, no data was lost while this happened, because we use RabbitMQ to store the messages before they get written to the DB, thanks to RabbitMQ's ACK mechanism.
In one of my first jobs I deleted the last 30 days of our production data.<p>Shit happens. You learn and try to never repeat it. And share with others so hopefully they learn too.<p>PS: Don't do knee-jerk, late-at-night quick patches. For example, don't stop a database that has run out of disk space; try to migrate the data in memory first... And also do proper backup monitoring, and test restores. Having 30 days of 0-byte backups is not that helpful. :)
This is what nightmares are made of.<p>> We’ve learned that having a function that deletes your database is too dangerous to have lying around.<p>Indeed, anything that might compromise the data, anything that involves deletion anyway, should require manual confirmation, whether you manage the database yourself or it's a managed service.<p>Sadly, I learned this the hard way too, but at least it was a single column with a non-critical date and not the entire database.
While several other users have posted takeaways for how to prevent this from happening, I'd be interested to know if anybody has an idea of how this happened, given the code that was posted.<p>Presumably, a managed DB service should essentially never be available on `localhost`. Additionally, it would be very weird for `config.DevelopmentConfig` to return the production database credentials.
I did something similar once - I had fiddled with my /etc/hosts and subsequently connected to the production database without realizing. I dropped a table but thankfully it wasn't much of a deal - the monitoring rang the bell and I recreated it a few seconds later. All that happened was that I had logged out several hundred users.
> Thankfully our database is a managed database from DigitalOcean, which means that DigitalOcean automatically do backups once a day.
Do cloud providers provide a smaller window for backups? Are there better ways to reduce the backup window for DBs? I'd love to understand any techniques folks use to minimize the backup window.
Someone might have copy & pasted it elsewhere and it propagated from there. This is why writing code in the open can also be dangerous. Whoever programs anything should be sensible enough to judge whether their own code could be dangerous in the wild. Once it's out there (or worse: on Stack Overflow) it can wreak havoc.
It takes a great deal of integrity to admit that you deleted a database because you were mucking around in your infra after red wine.<p>And it bodes well for your firm that that doesn't get you fired either.<p>These things happen to the best of us but having dealt with it responsibly and honestly as a team is something you can be proud of IMO.
I love the honesty, self-irony and transparency of the article. It's sad and annoying to see so many young, naive devs writing "oh, they are so bad, it will never happen to me".<p>Yes, people are not perfect and computer systems are complex. Admit it and don't be so overconfident.<p>"Errare humanum est" (to err is human), so prepare your backups.
This postmortem is incomplete: it fails to address the three main roots of the problem:<p>1. This business is too flippant with their writeable production access.<p>2. No user should have DROP DATABASE grants on production.<p>3. Clearly one of their employees was using a port forward to access production.
I still remember how, many years ago, someone on my team asked me one Friday afternoon: "there isn't something like 'undo' for 'drop table', right?".
He spent the weekend recreating the data.
That is why we can only access the prod DB from a jump box in our shop... And even then it's just certain people, with fewer privileges than a sysadmin account. No way you can do this accidentally from your laptop then...
The script also unnecessarily complicates things. If it just did the equivalent of rake db:drop, this incident wouldn't have happened, since Postgres won't allow a database with active connections to be dropped.
I don't understand how this happened if localhost is hard coded and the password is different. I don't think they fully understand why this happened, at least not well enough to prevent it from happening again.
All the people who say "that could never happen to me" have worked less than 5 years in the industry. It can happen to anyone, anytime.<p>Remember: you only fix the errors YOU can think of.
Once it happened to me, and now all my scripts in the last 10 years have<p><pre><code> if ($Env.Name -like '*prod*') { throw }
</code></pre>
and similar guards around all destructive stuff.
> Why? This is something we’re still trying to figure<p>Probably the admin set pg_hba.conf to trust localhost.
Solution: don't use the same DB name in prod, just to be sure.
A function called database_model_create() should not drop anything.<p>That way it would simply have failed to create the already-existing tables and raised an error.
The maturity of the article is laughable. I'm sure my age is the same as the people who wrote it, but this is unacceptable: dropping databases in prod is a serious issue, not a joke. I think the culture of the company is toxic and not professional at any level. #change-my-mind
Here is how we had our database deletion error, about 15 years ago. Our DBs had been on leased servers at a hosting company in New York City. They were getting out of the datacenter business so we had to move. We were moving to colocated servers at a Seattle datacenter.<p>This was the procedure:<p>1. Restore DB backups in Seattle.<p>2. Set up replication from NYC to Seattle.<p>3. Start changing things to read from Seattle, with writes still going to NYC.<p>4. After everything is reading from Seattle and has been doing so with no problems for a while, change the replication to be two-way between NYC and Seattle.<p>5. Start switching writes to Seattle.<p>6. After both reads and writes are all going to Seattle and it has been that way for a while with no problems, turn off replication.<p>7. Notify me that I can wipe the NYC servers, for which we had root access but not console access. I wasn't in the IT department and wasn't involved in the first 6 steps, but had the most Unix experience and was thought to be the best at doing a thorough server wipe.<p>My server wipe procedure was something like this.<p>8. "DELETE FROM table_name" for each DB table.<p>9. "DROP TABLE table_name" for each DB table.<p>10. Stop the DB server.<p>11. Overwrite all the DB data files with random data.<p>12. Delete all the DB data files.<p>13. Delete everything else of ours.<p>14. Uninstall all packages we installed after the base system install.<p>15. Delete every data file I could find that #14 left behind.<p>16. Write files of random data to fill up all the free space.<p>The problem was with step #6. They declared it done and turned it over to me for step #7 without actually having done the "turn off replication" part of step #6. Step #8 was replicated to Seattle.<p>It took them a while to figure out that data was being deleted and why that was happening.<p>We were split across three office buildings, and the one I was in had not yet had phones installed in all the offices, and mine was one of the ones without a phone. None of the people whose offices did have phones were in, so they lost a few more minutes before realizing that someone would have to run a couple blocks to my office to tell me to stop the wipe.<p>It took about 12 hours or so afterwards for them to restore Seattle from the latest backup, and then replay the logs from between the backup time and the start of the deletes.<p>After that they were overly cautious, taking a long time to let me resume the NYC wipe. They went right up to the point where I told them if we didn't start <i>now</i> we might not finish, and reminded them that those machines had sensitive customer personal information on them and were probably going to end up being auctioned off on eBay by the hosting company. They came to their senses and told me to go ahead.