> I have been authoring the various Drive Stats reports for the past ten years and this will be my last one. I am retiring, or perhaps in Drive Stats vernacular, it would be “migrating.”<p>Thank you for all these reports over the years.
It's not a best practice, but for the last 10 years I've run my home server with a smaller, faster drive for the OS and a single larger disk for bulk storage, which I choose using Backblaze Drive Stats. None have failed yet (fingers crossed). I really trust their methodology and it's an extremely valuable resource for me as a consumer.<p>My most recent drive is a WDC WUH722222ALE6L4 22TB, and looking at its stats in this report (albeit only a few months of data), plus the overall trend for WDC, gives me peace of mind that it should be fine for the next few years until it's time for the cycle to repeat.
I've owned 17 Seagate ST12000NM001G (12TB SATA) drives over the last 24 months in a big raidz3 pool. My personal stats, grouping by the first 3-4 serial-number characters:
- 5/8 ZLW2s failed
- 1/4 ZL2s
- 1/2 ZS80
- 0/2 ZTN
- 0/1 ZLW0
All drives were refurbs. Two came from the Seagate eBay store, all others from ServerPartDeals. 7/15 of the drives I purchased from ServerPartDeals have failed, and at least four of those failures were within 6 weeks of installation.<p>I originally used the Backblaze stats when selecting the drive I'd build my storage pool around. Every time the updated stats pop up in my inbox, I check out the table and double-check that my drives are in fact the 001Gs... the drives that Backblaze reports as having a 0.99% AFR. I guess the lesson is that YMMV.
I used to think these were interesting and used them to inform my next HDD purchase. Then I realized I only used them to pick a recently reliable brand; we're down to three brands, and the stats cover mostly older models, so the main use is if you're buying a used drive from the same batch that Backblaze happens to have also used.<p>Buy two from different vendors and RAID them, or do regular off-site backups.
Hard to argue with those WDC/Toshiba numbers. Seagate's are just embarrassing in contrast.<p>(HGST drives -- now WDC -- were great, but those are legacy models. HGST has been part of WD for some time; the new models are WDC branded.)
When I started my current 24-bay NAS more than 10 years ago, I specifically looked at the Backblaze drive stats (which were a new thing at that time) to determine which drives to buy (I chose 4TB 7200rpm HGST drives).<p>My Louwrentius stats are: zero drive failures over 10+ years.<p>Meanwhile, the author of Backblaze Drive Stats (Andy Klein) mentions he is retiring. I wish him well, and thanks!<p>PS. The data on my 24-drive NAS would fit on two modern 32TB drives. Crazy.
Backblaze is one of the most respected services in the storage industry; they've kept gaining my respect even after I launched my own cloud storage solution.
After a couple of failed hard disks in my old NVR, I’ve come to realize heat is the biggest enemy of hard disks. The NVR had to provide power to the PoE cameras, ran video transcoding, and was constantly writing to the disk. It generated a lot of heat. The disks were probably warped by the heat and the disk heads crashed onto the surface, causing data loss.<p>For my new NVR, the PoE power supply is separated out to a powered switch, the newer CPU can do hardware video encoding, and I use an SSD for first-stage writes and hard disks as secondary backup. The heat has gone way down. So far things have run well. I know constant rewriting on an SSD is bad, but the MTBF of the SSD indicates it will be a number of years before it fails. It’s an acceptable risk.
Every year, this seems like great brand promotion for Backblaze among prospective technical customers, and a nice service to the field.<p>What are some other examples of this from other companies, besides open source code?
Great to see this every year.<p>Although a minor pet peeve (knowing this is free): I would have loved to see an 'in-use meter' in addition to just 'the drive was kept powered on'. AFR doesn't make sense for an HDD unless we know how long and how frequently the drives were being used (# of reads/writes or bytes/s).<p>If all of them had 99% usage through the entire year - then sure (really?).
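For what it's worth, the AFR in these reports is already normalized by powered-on drive-days, just not by workload. A rough sketch of Backblaze's stated formula (my own Python, not their code):

```python
def annualized_failure_rate(drive_failures: int, drive_days: int) -> float:
    """Backblaze-style AFR: failures per drive-year, expressed as a percentage.

    drive_days is the total number of days all drives of a model were powered
    on during the period, so drives added or removed mid-period only count
    for the days they actually ran.
    """
    drive_years = drive_days / 365
    return drive_failures / drive_years * 100

# Example: 30 failures across a fleet that accumulated 365,000 drive-days
# (the equivalent of 1,000 drives running the whole year) -> 3.0% AFR.
print(annualized_failure_rate(30, 365_000))
```

What it doesn't capture is exactly your point: reads/writes per drive, which the summary tables don't show.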
I had five Seagates fail in my Synology NAS in less than a year. Somebody suggested it was a "bad" firmware on that model, but I switched to WD and haven't had a single failure since.
It continues to surprise me that Backblaze still trades at a fraction of its peak COVID share price. A well-managed company with solid fundamentals, strong IP, and growth.
Google sells 2TB of space on Google Drive for $10/month. I'm looking to move my data elsewhere.<p>Can anyone recommend a European-based alternative with a roughly similar cost?
I bought a bunch of <i>28</i> TB Seagate Exos drives refurbished for not that much money. I still can't believe that 28TB drives are even possible.
Polite data viz recommendation: don't use black gridlines in your tables. Make them a light gray. The gridlines do provide information (the organization of the data), but the more important information is the values. I'd also right-align the drive-failure counts so you can scan/compare them consistently.
My home NAS drives are currently hitting the 5-year mark. So far I'm at no failures, but I'm considering whether it's time to upgrade/replace. What I have is 5 x 4TB pre-SMR WD Reds (which are now called the WD Red Pro line, I guess). Capacity-wise I've got them set up in RAID 6, which gives me 12TB of usable capacity, of which I currently use about 7.5TB.<p>I'm basically mulling between going as-is to SSDs in a similar 5x4TB configuration, or going for 20TB hard drives in a RAID 1 configuration plus a pair of 4TB SATA SSDs in a RAID 1 for use cases that need better-than-HDD performance.<p>These figures indicate Seagate is improving in reliability, which might be worth considering this time given WD's actions since my last purchase, but on the other hand I'd basically sworn off Seagate after a wave of drives in the mid-2010s with a near 100% failure rate within 5 years.
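For my own sanity, the back-of-the-envelope usable capacities I'm weighing, in raw TB and ignoring filesystem overhead (a rough sketch, not exact numbers):

```python
def raid6_usable(num_drives: int, drive_tb: float) -> float:
    # RAID 6 reserves two drives' worth of space for parity.
    return (num_drives - 2) * drive_tb

def raid1_usable(drive_tb: float) -> float:
    # A RAID 1 mirror of two drives exposes one drive's worth of space.
    return drive_tb

print(raid6_usable(5, 4))                  # 12 TB - current 5x4TB array (same layout whether HDD or SSD)
print(raid1_usable(20) + raid1_usable(4))  # 24 TB - 2x20TB HDD mirror plus 2x4TB SSD mirror
```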
True enterprise drives ftw - even Seagate usually makes some very reliable ones. They also tend to be a little faster. Some people have complained about noise, but I have never noticed.<p>They are noticeably heavier in hand (and supposedly most use dual bearings).<p>Combined with selecting based on Backblaze's statistics, I have had no HDD failures in years.
Based on the data, it seems they have about 4.4 exabytes of storage under management. Neat.<p><a href="https://docs.google.com/spreadsheets/d/1E4MS84SbSwWILVPAgeIipltAo6BM3yJeR1w6qRyZj7w/edit?gid=0#gid=0" rel="nofollow">https://docs.google.com/spreadsheets/d/1E4MS84SbSwWILVPAgeIi...</a>
I wish there was a way to underspin (lower the RPM of) some of these drives to reduce noise for non-datacenter use - the quest for the largest "quiet" drive is a hard one. It would be cool if these could downshift into a <5000RPM mode and run much quieter.
Related - about a year or so ago, I read about a firmware-related problem with some vendor's SSDs. It was triggered by an uptime counter reaching (overflowing?) some threshold, and the SSD just bricked itself. It’s interesting because you could carefully spread out disks from the same batch across many different servers, but if you deployed & started up all these new servers around the same time, the buggy disks in them later <i>all</i> failed around the same time too, when their time was up…
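Purely to illustrate the failure mode (a hypothetical sketch, not the actual vendor firmware): if the power-on-hours counter lives in a signed 16-bit field, every unit powered on the same day hits the wrap at the same moment, roughly 3.7 years in:

```python
import ctypes

def power_on_hours(hours: int) -> int:
    # Hypothetical bug: firmware stores power-on hours in a signed 16-bit field.
    return ctypes.c_int16(hours).value

print(power_on_hours(32_767))       # 32767  - last good value
print(power_on_hours(32_768))       # -32768 - wraps negative; buggy firmware then falls over
print(round(32_768 / 24 / 365, 2))  # ~3.74 years of uptime, so same-day deployments die together
```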
Considering the bathtub curve, does this table mark a drive as bad if it fails in the first (e.g.) week?<p><a href="https://en.wikipedia.org/wiki/Bathtub_curve" rel="nofollow">https://en.wikipedia.org/wiki/Bathtub_curve</a>
It’s a bit odd. HGST always fares very well in the Backblaze stats, but I have actually had issues over the years in my own setup (Synology frames). Seagate has usually fared better.<p>Might be the model of the drives. I use 4TB ones.