There is a lot of cargo cult around this.<p>Firstly, the claim that FreeBSD has wider testing is utter trash. In terms of TBs installed, ZFS on Linux >> FreeNAS/FreeBSD, and the amount of money behind ZoL is now surprisingly large.<p>Also, extensive memtesting of ECC RAM is pointless: ECC RAM has checksumming built into the chip, which will tell you if there is a memory error and correct it if possible.<p>As for most drives failing in the first hours, the evidence says otherwise: <a href="https://www.backblaze.com/blog/how-long-do-disk-drives-last/" rel="nofollow">https://www.backblaze.com/blog/how-long-do-disk-drives-last/</a>. Infant mortality is a thing, but it's a matter of months, not hours (internal QA catches most of the ones that die within a few hours).<p>The best way to combat simultaneous failure is to mix hard drive types; this makes it much less likely that a single fault class will be triggered on all disks.
I used to build my own stuff, but now I buy Synology. It's really convenient, has a great UI and support, a bunch of different packages you can install, etc.<p>Although I ran into a problem a couple of months after setting it up where the NAS became so slow it was unusable. I saw that my IOWait was at 100%, so I figured it was a disk, but nothing indicated a problem. Eventually I found a log showing that one of the drives had some weird error messages, so I bought a new drive from Amazon that arrived the next day, pulled the old drive, and replaced it; instantly everything was okay again.<p>I would have expected something in Synology's status programs to show a problem, but they were all green, so that was annoying.
And the word 'watt' is nowhere in the article. I'd be curious about the yearly power costs.<p>I've been eyeing the Lenovo ThinkServer TS140: <a href="http://amzn.com/B00FE2G79C" rel="nofollow">http://amzn.com/B00FE2G79C</a> with a Xeon E3-1225 v3.<p>Some comments state that it has an idle draw of less than 40 watts, which is hard to believe. My dual-core Intel Atom box idles at about 50 watts (of course, there is no difference between idle and full-load draw with the Atom, it's just super slow either way...)
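For a rough sense of scale (my numbers, not the article's — both the wattage figures and the $0.12/kWh electricity price are assumptions), a year of 24/7 runtime is easy to estimate:

```python
# Back-of-the-envelope yearly power cost for an always-on server.
# The idle wattages and the price per kWh are assumptions, not measurements.
def yearly_cost(idle_watts, usd_per_kwh=0.12):
    kwh_per_year = idle_watts * 24 * 365 / 1000  # watt-hours -> kWh over a year
    return kwh_per_year * usd_per_kwh

for watts in (40, 50):
    print(f"{watts} W idle ~= {yearly_cost(watts):.2f} USD/year")
```

So the difference between a 40 W and a 50 W idle is only around ten dollars a year at that rate; the bigger question is what the box draws under sustained load.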
There's a very active subreddit[0] that discusses a lot of stuff like this. Worth checking out if you've considered having a server in your home.<p>[0] <a href="https://reddit.com/r/homelab" rel="nofollow">https://reddit.com/r/homelab</a>
I'm regretting building my NAS. It's expensive, the storage is small (16TB of drives gave me ~7.9TB of usable space) and any question or doubt about your configuration prompts responses like: <a href="http://blog.ociru.net/2013/04/05/zfs-ssd-usage#comment-1722341810" rel="nofollow">http://blog.ociru.net/2013/04/05/zfs-ssd-usage#comment-17223...</a>
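The parent doesn't say which layout was used, but the 16TB-raw-to-~7.9TB-usable gap is what you'd expect from double redundancy; for example (the 4x4TB double-parity layout and ~1.5% metadata overhead here are my assumptions, not the parent's stated configuration):

```python
# Rough usable-capacity estimate for a small ZFS pool.
# The layout (4 x 4 TB with 2 drives' worth of redundancy) and the
# ~1.5% metadata/overhead figure are illustrative assumptions.
def usable_tb(n_drives, tb_per_drive, redundancy_drives, overhead=0.015):
    raw = (n_drives - redundancy_drives) * tb_per_drive
    return raw * (1 - overhead)

print(usable_tb(4, 4, 2))  # 4x4TB with double redundancy -> ~7.88 TB usable
```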
If you are into this type of thing, check out the storage forum on [H]ardForum:
<a href="http://hardforum.com/forumdisplay.php?f=29" rel="nofollow">http://hardforum.com/forumdisplay.php?f=29</a><p>Specifically, the showoff thread:
<a href="http://hardforum.com/showthread.php?t=1847026" rel="nofollow">http://hardforum.com/showthread.php?t=1847026</a>
This post covers everything except the cost.
Cost will vary from region to region and country to country, but it would have been nice to get a ballpark figure.
The author probably bought all of those hard drives at once, from the same vendor; they're very likely from the same batch.<p>What if something goes bad with a drive? Well, ZFS to the rescue. Maybe even two drives.<p>But what if the whole batch is bad?<p>I built my NAS with not quite as much storage but more drives, using drives from different vendors and different batches.
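To put rough numbers on the intuition (all figures below are hypothetical, just to illustrate the point about correlated failures):

```python
# Hypothetical illustration of why same-batch drives are risky.
# Assume a 3% independent annual failure rate per drive, versus a
# 10% chance that a single batch-wide defect exists. Both numbers
# are made up for illustration.
afr = 0.03
p_independent = afr ** 3   # three drives all failing independently in a year
p_batch_defect = 0.10      # one shared flaw that affects every drive at once

print(f"independent triple failure: {p_independent:.6f}")
print(f"shared batch defect:        {p_batch_defect}")
```

With independent failures, losing three drives in the same year is vanishingly unlikely; a shared batch defect collapses that independence entirely, which is exactly what RAIDZ's redundancy math doesn't account for.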
I personally think SilverStone's DS380 would make for a better case for something like this: 8x hot-swap 3.5in bays + 4x 2.5in bays in a Mini-ITX form factor.<p><a href="http://www.silverstonetek.com/product.php?pid=452" rel="nofollow">http://www.silverstonetek.com/product.php?pid=452</a><p>It's what I'm using right now for my server and I love it. I have it filled with 6 drives and haven't had any issues with heat so far. Can't say the same about the ASUS P9A-I motherboard I'm using with it, though...
$2,870.15 from Amazon right now, which isn't as bad as I expected. That is a great build you've put together. After losing a 3TB drive of thankfully replaceable data, I have been eyeing a similar but less intense setup.<p><a href="https://amzn.com/w/MHNNS9EDAORX" rel="nofollow">https://amzn.com/w/MHNNS9EDAORX</a><p>Side note: I would love to have something like a purchase list on Amazon, because a wish list isn't quite right for this — it doesn't include the quantity by default when clicking Add to Cart. I had thought about adding an Amazon Associates code, but I've never actually had that make any money.
Looking forward to FreeNAS 10 when it is available. Thinking about rebuilding my HP N54L MicroServer, currently running Windows Server 2012 R2 with a 'virtual' NAS (Ubuntu + ZFS) running under Hyper-V (yes, this is unnecessarily complicated).<p>It would be great if whatever virtualisation is built into FreeNAS supports the AMD Turion II the N54L uses, but support for AMD virtualisation sometimes seems a bit spotty (it's not supported in SmartOS, for example).
I wonder if the author has run into many problems with FreeNAS, or with its terrible community. I've seen posts on the forum where the problem was Samba not authenticating correctly, but the first response was "You don't have enough RAM".<p>At least they're patching today's SSH CVE, but it's not just a <i>pkg upgrade</i>; it's a tarball that upgrades the whole root drive.
When I built my home NAS, I started with FreeNAS, but it was a pain. The insistence on ECC RAM, the lack of USB support[*], a community that seemed focused exclusively on office solutions, and a hermetically sealed distribution were all killers. I switched to the Linux-based OpenMediaVault[1] and all my problems went away. It uses the same UI as FreeNAS, but defaults to EXT4, supports USB backups, and lets me do anything I want on it. It's great.<p>[*] Multiple independent requests about backing up the NAS to a USB enclosure were met with a refrain of "USB drives are crap, so you're stupid for using them. You should back up your NAS to another NAS, that you never move." Fuck you. I know the limits of my failure model.<p>[1] <a href="http://www.openmediavault.org/" rel="nofollow">http://www.openmediavault.org/</a>
I know that ZFS is cool and LVM isn't, but I literally just finished repairing my LVM-based home NAS, and it left me with a good feeling about LVM. Overall, a stack of md, LVM and XFS is a lot more complicated than ZFS, but each piece is more understandable in isolation.
What I would like is more discussion of choosing FreeBSD vs FreeNAS.<p>The author was inexperienced and so chose FreeNAS for "ease of use". But what, other than a GUI, does FreeNAS really provide? I've never read a detailed explanation. The forums on freenas.org don't seem to address this fundamental question. Everything seems to be predicated on the choice already having been made, nothing helps people make the choice in the first place.<p>Perhaps FreeNAS is more aggressive than FreeBSD about patching storage related bugs?<p>Can anyone point to a detailed discussion about choosing vanilla FreeBSD vs FreeNAS?
I'd rather have a rack-mount case than that one; in my opinion, it would make replacing faulty hard drives a bit easier.<p>Otherwise, it seems quite neat!
This is all great info. The only thing I shudder at is that it's one huge single point of failure. I've learned one thing building huge dumb storage devices: build two and mirror them. I've got 32TB of storage mirrored, so if one hits the fan I've got an exact copy.
What I hate about FreeNAS is that it is not permissive about what kind of disks you put in it. I bought a Drobo 5N just because I could slap in any drives I wanted, in any size or configuration, and it would just work.<p>When FreeNAS can handle that, automatically and on the fly, I'll switch.
Interesting article. A few more points which may be of interest:<p>- In addition to RAID, it's worth having automated off-site backup. The best solution I could find is duplicity, as it's encrypted and supports a bunch of backends.<p>- FreeBSD supports full disk encryption using geli. With some work it's possible to make it boot (only) from a USB key, giving some protection if the server is stolen. I believe newer versions of the Intel Atom support hardware AES acceleration, so this isn't a large overhead.<p>- If the memory requirements of ZFS are too large (which, to be honest, for a SOHO application they are!), you can use UFS together with FreeBSD software RAID1 (gmirror).
But why?<p>I'll assume the media stored on it, as mentioned in the article, is pirated. For the cost of that home server you could very likely legally watch everything and actually help finance the creation of new stuff. Even if that's not true, how many of the movies you watched do you watch more than once? And 24TB? How do you find the time to watch that much?
The OP bought 6x6TB drives. I truly hope they didn't configure it as a 36TB striped zpool; that should be RAIDZ2 at the very least. Heck, I have 4x2TB drives and I am running RAIDZ2. A 2TB drive took 24 hours to rebuild onto the replacement disk. It would probably be linear, so 72 hours to rebuild a 6TB drive, during which time the other drives are doing tons of reads.
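Extrapolating linearly from my own 2TB/24h observation (a simplification — real resilver time depends on how full and fragmented the pool is, not raw capacity):

```python
# Linear extrapolation of resilver time by drive size. This assumes
# rebuild time scales with capacity, which is a rough approximation:
# ZFS resilvers actually scale with used space and fragmentation.
def rebuild_hours(drive_tb, observed_tb=2, observed_hours=24):
    return drive_tb / observed_tb * observed_hours

print(rebuild_hours(6))  # ~72 hours for a 6 TB drive
```

Three days of heavy sequential reads on the surviving drives is exactly the window where a second failure hurts, which is why RAIDZ2 (two drives of parity) matters at these capacities.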