I once started work somewhere that did software releases by RAID.<p>Their systems involved shipping a server (effectively an appliance) to the customer with all of the working components on it. However, there was no build or deployment process for these components, so the only way to create a new server was to take an existing one and make a copy.<p>This was done by opening up a working server running RAID 1, removing one of its disks, and installing that disk in a new server. The RAID would rebuild onto the new server's blank disk; then the original disk was swapped out for another blank one and the array rebuilt again... result: a copied server!
<i>Dud, Flood, & Bud.</i><p><i>Duds</i> are hardware that goes bad, like a disk drive, network adapter, NAS, or server. There are countless ways and combinations in which things can break in a moderate-sized IT shop. How much money / effort are you willing to spend to make sure your weekend isn't ruined by a failed drive?<p><i>Floods</i> are catastrophic events, not limited to acts of God. Your datacenter goes bankrupt and drops offline, not letting you access your servers. Fire sprinklers go off in your server room. Do you have a recent copy of your data somewhere else?<p><i>Bud</i> is an accident-prone user. He accidentally deleted some files... the accounting files... three weeks ago. Or he downloaded a virus which has slowly been corrupting files on the fileserver. Or Bud's a sysadmin who ran a script meant for the dev server against the production database. How can we get that data back in place quickly, before the yelling and firing begins?<p>There are more possible scenarios (hackers, thieves, auditors, the FBI), but if you're thinking about Dud, Flood, & Bud, you're in better shape than most people are.
We live in a sad world where most companies don't have a real disaster recovery plan. Many times in my career I've had customers ask me to save them because they "thought" they were backing up, but when they went to restore from the {tape|floppy|backup disk} media they found it to be corrupt.<p>Backup and disaster recovery strategies seem really easy until you think through all the failure modes and realize the old axiom "You don't know what you don't know" is there to make your life full of pain and suffering.<p>Years ago my customers would literally restore their entire environments onto new metal to verify they had a working disaster recovery plan. Today most clients think having a "cloud backup" is awesome... until they realize, in the moment of disaster, that they are missing little things like license keys for software, network settings, local admin passwords on Windows boxes, etc.
<i>The community has discussed the idea of adding a feature to specify a minimum streaming replication delay</i><p>This is a feature of Oracle: the redo logs are shipped to the standbys as normal, so you have an up-to-date copy of them on the standby, but they are only applied after an x-hour delay. You can roll the standby forward to any intervening point in time and open it read-only to copy data out.<p>There's less need for it these days with Flashback, of course, but it saved a lot of bacon.
Most companies I've worked for have had some kind of annual fire drill / alarm testing. They announce it the prior week, and then, say, Tuesday at 10am the alarm goes off, everyone files out of the building into the parking lot for 5 minutes, then back inside. In 15+ years (at several different companies), only once has there been an actual fire department call where the evacuation was "real" (even then, there was no actual fire).<p>In those same 15+ years, mostly working for startups, there have been numerous drive failures. Unfortunately, failing (a) to verify backups <i>before</i> there's a failure, and (b) to practice restoring from them, has often meant that a drive failure costs several days' worth of work. In one instance, the VCS admin corrupted the entire repo, there were no backups, that admin was shown the door, and we had to restart from "commit 0" with code pieced together from engineers' individual workstations. <i>That</i> was when I got religious about making & <i>testing</i> backups for my work and the systems I was responsible for...
You must test your backups. I used a commercial backup service that sent daily status emails. It seemed great for months until I realized it had a bug and there was nothing in the archive.
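A status e-mail only proves the job ran, not that anything restorable came out of it. As a minimal sketch of the kind of check that would have caught this (assuming the service drops a dated tarball somewhere readable; the path and naming scheme here are made up), one could open the archive and count what is actually in it:

```python
#!/usr/bin/env python3
"""Hypothetical nightly check: open the backup archive and confirm it isn't
empty, rather than trusting the vendor's status e-mail.
All paths and naming conventions are illustrative assumptions."""
import datetime
import sys
import tarfile

# Assumed layout: one gzipped tarball per day, e.g. /backups/daily-YYYY-MM-DD.tar.gz
archive = "/backups/daily-%s.tar.gz" % datetime.date.today().isoformat()

try:
    with tarfile.open(archive, "r:gz") as tar:
        members = tar.getmembers()  # forces a full read of the archive index
except (OSError, tarfile.TarError) as exc:
    sys.exit("backup check FAILED for %s: %s" % (archive, exc))

if not members:
    sys.exit("backup check FAILED: %s contains no files" % archive)

total_bytes = sum(m.size for m in members)
print("backup check OK: %s, %d files, %d bytes" % (archive, len(members), total_bytes))
```

Run from cron, a non-zero exit is harder to ignore than a cheerful status e-mail; a fuller test would go on to restore a few files and compare checksums against the originals.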
Cloud backup services have taken away any possible excuse for not remotely backing up any non-ginormous collection of data. It's push-button easy and a lot cheaper and easier than dealing with taking tape backups and moving them offsite.<p>Not to say that it's the best solution for everyone, but simply that it leaves people no excuse for doing <i>nothing</i>.
The underlying cognitive bias in "I don't need backups, I use raid1" seems to be the quite common one of "I don't do anything stupid, so I don't need anti-stupidity devices" (feel free to substitute "careless" or similar for "stupid"), maybe with a side-order of "if I set up systems that protect me from my stupidity then only stupid people will want to work with me". The fact is, most of us do many stupid things every day--some stupid at the time, some stupid in retrospect--and systems that don't let us recover from them are poor systems.
I worked with offsite tapes before hard drives became cheap enough to use for backup (of the appropriate amount of data, of course).<p>My current setup goes as follows:<p>Servers in colocation get backed up daily to a server in the office. That office server then gets backed up daily to an iosafe.com fire- and waterproof hard drive in the office, which, when I get a chance, will be bolted to the desk for further security. Bootable clones of that server are then made biweekly; one is kept in the office and one is taken offsite.<p>So the office server is the offsite backup for the colo servers, and the clone of that is the backup for the office.<p>The clones let you test the backup (hook one up and it boots, basically).<p>Added: Geographically, the office is about 3 miles from where the backup of the office is kept, but about 40 miles from where the colo servers are kept.
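For what it's worth, the colo-to-office and office-to-fireproof-drive hops in a chain like this are easy to script. Here is a minimal sketch assuming rsync over SSH, with entirely made-up hostnames, users, and mount points (the iosafe drive at /mnt/iosafe); the biweekly bootable clones stay a separate, manual step:

```python
#!/usr/bin/env python3
"""Sketch of the daily pull along the chain described above:
colo server -> office server -> fire/waterproof drive in the office.
Hostnames, users, and paths are illustrative assumptions only."""
import subprocess
import sys

JOBS = [
    # (source, destination): trailing slashes make rsync copy directory contents
    ("backup@colo.example.com:/srv/data/", "/srv/colo-mirror/"),
    ("/srv/colo-mirror/", "/mnt/iosafe/colo-mirror/"),
]

for src, dst in JOBS:
    # -a preserves ownership, permissions, and timestamps; --delete keeps the
    # mirror exact so stale files don't linger on the destination.
    result = subprocess.run(["rsync", "-a", "--delete", src, dst])
    if result.returncode != 0:
        sys.exit("backup step FAILED: %s -> %s (rsync exit code %d)"
                 % (src, dst, result.returncode))

print("daily backup chain completed")
```

Adding rsync's --link-dest against the previous day's copy would turn each mirror into a series of space-efficient snapshots, which also helps with the "Bud deleted it three weeks ago" case.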
Fun anecdote: years ago, I worked for a department that had its server on a RAID setup, and when I asked about backups they said, "Don't worry". One day, a drive failed. They replaced it and started restoring from the other drive - which failed mid-sync. The two drives were from the same production lot and died literally within 12 hours of each other.<p>So: back up your data.
This is one of the problems I have with SQL Azure. They have yet to implement a satisfactory backup option:
<a href="http://www.mygreatwindowsazureidea.com/forums/34685-sql-azure-feature-voting/suggestions/655599-enable-backups" rel="nofollow">http://www.mygreatwindowsazureidea.com/forums/34685-sql-azur...</a>
It's amazing to me that anyone is actually arguing that RAID negates the need for backup. That is just dumb.<p>If I ever heard an SA working for me advocate that position, I would probably get them off of my team ASAP.
Maybe I'm an idiot, but the vast majority of times I've needed to recover something from a backup are due to user error, not hardware failure. RAID sure doesn't help there.
For most of my stuff, Dropbox Pro (with the Packrat add-on for unlimited file history) + GitHub handle all my backup needs. Of course this wouldn't work for all scenarios, but I don't work with or have loads of huge files.