Before the thread becomes cluttered with people suggesting alternatives or questioning why you wouldn't just run <insert manufacturer, OS, etc>, the person who did this replied the following:

simbimbo says:
December 9, 2012 at 11:03 am

Thanks for the great write-up, Hack A Day. I would like to answer some of the questions posted. @Geebles: these machines all run SSDs, and I ordered them with AppleCare, so I hope never to have to change a drive ;-)

As for the reason I built this... well, I guess I just like a challenge ;-). But seriously, the company I work for needs large numbers of machines to build and test the software we make.

There were plenty of discussions of virtual environments and other "bare motherboard"/Google-datacenter-type solutions, but the fact is, the Apple EULA requires that Mac OS X run on Apple hardware, and since we are a software company we adhere to those rules without exception. These Macs all run OS X in a NetBooted environment. We require Mac OS X because the products we make support Windows, Linux, and Mac, so we have data centers with thousands of machines configured with all three OSes running constant build and test operations 24 hours a day, 365 days a year.

As for device failure, we treat these machines like pixels in a very large display: if a few fail, that's OK; the management software disables them until we can swap them out. This approach lets us continue our operations regardless of machine failures.

@bitbass: I tried the vertical approach, but manufacturing the plenum required to keep the air clean for the rear machines cost too much for this project. It's not off the table for the next rack, though.

@Kris Lee: When I open the door I can literally watch the machine temps go up, but I can keep it open for 15-20 minutes before the core temps reach 180°F.

@Adam: Ahhh... nope, you can't have my job ;-)
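The "pixels in a very large display" approach boils down to a periodic health check that pulls dead nodes out of the build pool. simbimbo doesn't say what the management software actually looks like, so the following is only a minimal sketch under assumed names: the hostname pattern, the pool file path, and the ping-based probe are all hypothetical.

```python
# Minimal sketch of the "treat failed machines like dead pixels" idea:
# periodically probe each Mini and drop unreachable ones from the active
# build pool until a human swaps them out. Hostnames, file paths and the
# probe command are made up for illustration.
import subprocess
import time

ALL_MINIS = [f"mini-{i:03d}.build.example.com" for i in range(160)]
ACTIVE_POOL_FILE = "/etc/buildfarm/active_hosts"  # hypothetical path

def is_healthy(host: str) -> bool:
    """A node counts as healthy if it answers a single ping quickly."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def refresh_pool() -> None:
    """Rewrite the active-host list with only the minis that respond."""
    healthy = [h for h in ALL_MINIS if is_healthy(h)]
    with open(ACTIVE_POOL_FILE, "w") as f:
        f.write("\n".join(healthy) + "\n")
    print(f"{len(healthy)}/{len(ALL_MINIS)} minis in the active pool")

if __name__ == "__main__":
    while True:
        refresh_pool()
        time.sleep(300)  # re-check every five minutes
```

In a setup like the one described, the build scheduler would only ever dispatch jobs to hosts listed in the active pool, so a dead Mini simply stops receiving work until it is replaced.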
We did something similar at Facebook for iOS and OS X automated testing, with a few of the machines doing iOS app builds.

Here is a post that Jay Parikh (VP of Infrastructure) made about it: http://tinyurl.com/cnvss4v

Our density isn't as high (we have 64 Minis) because of the cooling and cabling choices we made to meet our datacenter cooling standards.

@jurre - If you want to chat about our design, message me and I can put you in touch with our hardware designer.
It's very cool, but is anyone actually tied to OS X as a server platform? Couldn't they move to FreeBSD and save a ton of money in an application like this? I'm wondering whether there's a real business case for this, or whether it's just a fun hack.

edit: I guess related to this is the small market that seems to exist for colocated Mac Minis. Is there something about them that is better than renting commodity x64 hardware?
Mac Minis are horrible server hardware. We've had a couple running as servers. They fail randomly. Their hard drives fail. They don't rack mount easily. The only reason to have them is if you inherit some old ones, don't want to throw them away, and then don't mind replacing and throwing failed units away pretty often.
I'm actually surprised any DC would take that equipment. In my experience, at least, they are very fussy about what you put in the racks, power draw, etc.

Oh, and we get 640 cores in 20U (1U machines, each with eight quad-core Xeons, so 32 cores per box), and that still leaves enough room for a 32 TB SAN, FC switches, and a pair of redundant LAN switches.

Regarding splitting the power using the hack described: 160 melted Minis and a halon cloud coming up.

Looks pretty, though.
Considering you only really have to pay around the $60 mark for the OS now, I don't think it's much of a big deal. I use one of these at home as a mini fileserver/wiki; it draws sweet FA, makes little to no noise, and has an HDMI connector straight into my TV. I would happily deploy one for our company's marketing team or for small-scale offices.
I understand the idea of treating them like pixels, so if a fan dies or a NIC dies, no problem, just stop using that Mini. But what about memory corruption or other issues that are more difficult to detect? Server hardware normally has things like ECC memory to prevent these issues, but in this case a Mini with bad RAM could intermittently corrupt data for some time before it's noticed (if ever).
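ECC isn't an option on a Mini, but a farm like this could at least catch some silent corruption after the fact. Nothing in the post describes this; it's just one assumed mitigation: run the same deterministic job on two different Minis, compare checksums of the outputs, and pull mismatching nodes out for a memory test. A minimal Python sketch, where the `job` callable and the host list are hypothetical:

```python
# Sketch of a cheap cross-check for silent corruption on non-ECC nodes:
# run the same deterministic job on two different minis and compare a
# SHA-256 of the resulting artifact. run_job-style dispatch is left to
# the caller; here `job(host)` is assumed to return the artifact path.
import hashlib
import random

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def cross_check(job, hosts):
    """Run `job` on two randomly chosen hosts and compare artifact hashes.

    Returns the pair of suspect hosts if the outputs differ, else [].
    """
    a, b = random.sample(hosts, 2)
    artifact_a = job(a)
    artifact_b = job(b)
    if sha256_of(artifact_a) != sha256_of(artifact_b):
        # One of the two nodes (or the job itself) is suspect; pull both
        # out of the pool until they pass a memory test.
        return [a, b]
    return []
```

It only samples a fraction of jobs and assumes reproducible builds, but it turns "corrupt data for some time before it's noticed" into something that gets flagged eventually.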
Interestingly, it looks like the front fans blow _into_ the rack. This means that if the door isn't securely closed it'll blow open, since it's on hinges and has massive fans attached.

It would be better to put the fans on the back and pull air through the rack instead.

That said, DC floor space is cheap compared to power and cooling. I'm surprised they didn't lower the density so as not to have a massive fire risk.
Curious - what do we call a computer like this? It's obviously not going to make the TOP500, but is it a "supercomputer"? I thought perhaps "minisupercomputer" might be fitting, but according to Wikipedia that is a term for a class of computers that became obsolete in the early 90s.
I've been a proud owner of a Mini Server (slightly customised: upgraded the memory and replaced the primary disk with an SSD) for over a year. I use it as my main workstation and I love it; so small (and relatively cheap, including the upgrade), yet so powerful.
Definitely a fun challenge. If you're going to invest in the hardware and a custom build, forget the Y-cable and figure out a better solution. Rent a half rack next to it to hold the PDUs. +1 on the massive door fans.