<i>Wear patterns on flash are an issue, although rotational drives fail too. There are several answers. When a flash drive fails, you can still read the data. With a clustered database and multiple copies of the data, you gain reliability: RAID at the server level. As drives fail, you replace them.</i><p>Unlike magnetic disks, SSDs tend to fail at a remarkably predictable rate. So predictable, in fact, that if you take two drives of the same model, put them into service at the same time, and subject them to the same usage patterns, they will probably fail at about the same time. That's a real problem if you're using SSDs in a RAID array, since RAID's added reliability rests on the assumption that two drives are very unlikely to fail at about the same time.<p>With SSDs, though, once one drive goes there's a decent (perhaps small, but far from negligible) chance that a second drive will go before you've had a chance to replace the first one. That's complicated enough, but the similarly likely scenario in which a second SSD fails shortly after you replace the first one is worse, because that failure can land during the rebuild, and a second failure mid-rebuild really will bring down the whole RAID array.<p>That said, if you're careful, that predictability can work in your favor. A good SSD keeps track of its own wear and reports it through its SMART attributes. So all you've got to do is monitor the drives (see the sketch below) and replace them before they get too close to their rated lifespan. Add that extra step and you're probably improving your RAID's reliability. But if you treat SSDs as nothing more than fast HDDs in your RAID, you're asking for trouble.
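<p>If you want to automate that monitoring step, here's a minimal sketch of what it might look like, assuming a Linux host with smartmontools 7.0 or newer (for smartctl's --json output). The script name, the 80% replacement threshold, and the SATA wear-attribute names are illustrative assumptions on my part: NVMe drives report a standardized "percentage used" figure, but SATA SSDs expose wear under vendor-specific attribute names, so check what your particular drives report (smartctl -A) before trusting the numbers.<p><pre>
#!/usr/bin/env python3
# Sketch: poll SSD wear via smartctl and flag drives nearing their rated life.
# Assumes smartmontools 7.0 or newer (for --json); the attribute names and the
# 80% threshold are illustrative and vary by vendor and replacement policy.
import json
import subprocess
import sys

WEAR_ALERT_PCT = 80  # replace drives well before 100% of rated endurance

def smartctl_json(device):
    """Run smartctl and return its JSON report for one device."""
    out = subprocess.run(
        ["smartctl", "-a", "--json", device],
        capture_output=True, text=True, check=False,  # nonzero exits can just mean warnings
    )
    return json.loads(out.stdout)

def wear_percent(report):
    """Best-effort wear estimate: NVMe 'percentage_used', else vendor SATA attributes."""
    nvme = report.get("nvme_smart_health_information_log")
    if nvme and "percentage_used" in nvme:
        return nvme["percentage_used"]
    # SATA SSDs expose wear under vendor-specific attribute names (assumption:
    # these cover common Samsung/Intel/Micron drives; check yours with smartctl -A).
    for attr in report.get("ata_smart_attributes", {}).get("table", []):
        if attr.get("name") in ("Wear_Leveling_Count", "Media_Wearout_Indicator",
                                "Percent_Lifetime_Remain"):
            # The normalized value typically counts down from 100 as the drive wears.
            return 100 - attr["value"]
    return None  # no recognizable wear attribute

if __name__ == "__main__":
    for dev in sys.argv[1:]:          # e.g. python3 ssd_wear.py /dev/sda /dev/nvme0
        pct = wear_percent(smartctl_json(dev))
        if pct is None:
            print(f"{dev}: no wear attribute found")
        elif pct >= WEAR_ALERT_PCT:
            print(f"{dev}: {pct}% of rated endurance used -- schedule replacement")
        else:
            print(f"{dev}: {pct}% of rated endurance used")
</pre><p>Run something like this from cron against every member of the array (e.g. python3 ssd_wear.py /dev/sda /dev/sdb), and stagger the replacements it triggers, so the rebuilt array no longer has every drive sitting at the same wear level.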