In case it helps, the larger context of this story is that IBM has spent a couple billion dollars developing a new server CPU (POWER8) that is just about to come on the market: <a href="http://www.forbes.com/sites/alexkonrad/2014/04/23/ibm-debuts-new-power-servers-and-new-open-platform-partnership-with-google/" rel="nofollow">http://www.forbes.com/sites/alexkonrad/2014/04/23/ibm-debuts...</a><p>They've also formed a consortium to promote this processor, of which Google is a flagship member (<a href="http://openpowerfoundation.org/" rel="nofollow">http://openpowerfoundation.org/</a>). The expectation (or hope, or fear, depending on your point of view) is that Google may be designing their future server infrastructure around this chip. This motherboard is some of the first concrete evidence of that.<p>The chip is exciting to a lot of people not just because it offers competition to Intel, but because it's the first potentially strong competitor to x86/x64 to appear in the server market in quite a while. By the specs, it's really quite a powerhouse: <a href="http://www.extremetech.com/computing/181102-ibm-power8-openpower-x86-server-monopoly" rel="nofollow">http://www.extremetech.com/computing/181102-ibm-power8-openp...</a>
So presumably Google will manufacture their own POWER8 CPUs. But who makes them? TSMC? GloFo? Not IBM, since IBM will be exiting the fab business in the near future.<p>I am going to guess this dual-CPU variant is aimed at the Intel Xeon E5 v2 series. The 10-12 core versions cost anywhere between $1200 and $2600, although Google does get a huge discount for buying directly from Intel at their volume.<p>Assuming the cost to make each 12-core POWER8 is $200, that is a potential saving of $1000 per CPU, and $2000 per server.<p>The last estimates were around 1-1.5 million servers at Google in 2012 and 2M+ in 2013. Maybe they are approaching 3M in 2014/15, even if most of those use low-power CPUs for storage or other needs. One million CPUs made in-house could mean savings of up to a billion dollars.<p>Could this kick-start the server and enterprise industry buying POWER8 CPUs at a much cheaper price? Once there is enough momentum and software optimization (JVM), it could filter down to the web hosting industry as well.<p>In the best-case scenario, this means big trouble for Intel.
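Back-of-the-envelope, for what it's worth (every input below is a guess pulled from the numbers above, not anything Google or Intel has disclosed):

    # Rough savings estimate; all figures are assumptions, not disclosed numbers
    xeon_price = 1200          # low end of Xeon E5 v2 10-12 core list pricing, USD
    power8_cost = 200          # assumed in-house cost per 12-core POWER8, USD
    cpus_in_house = 1_000_000  # assumed number of CPUs Google builds itself

    saving_per_cpu = xeon_price - power8_cost   # $1000
    saving_per_server = 2 * saving_per_cpu      # $2000 for a dual-socket box
    total = cpus_in_house * saving_per_cpu
    print(f"~${total / 1e9:.1f}B total")        # ~$1.0B

Of course Google pays well under list price, so the real per-CPU gap would be smaller than this.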
I wonder if POWER8-based servers will be available for the mass market. I'm not sure whether Google is interested in commoditizing POWER8 servers or just participates in the OpenPOWER Foundation to ensure that POWER-based servers will suit their needs. The fact that Google is open about their new motherboard hints at the former, but it's not much to go on.<p>I also wonder how a non-Google-scale developer could even potentially get to use POWER-based servers. Will they be available from the regular dedicated-server hosting companies? What OS could they run? RHEL does support the POWER platform, but for a hefty price: <a href="https://www.redhat.com/apps/store/server/" rel="nofollow">https://www.redhat.com/apps/store/server/</a> CentOS doesn't, presumably because all the POWER hardware CentOS developers could get is either very expensive or esoteric. That likely means I don't have to consider using POWER-based servers for at least 3 years, right?
Can someone explain the benefits of POWER8 compared to Intel? I thought the low volume of POWER8 chips (compared to the exceedingly high-volume Intel and ARM chips) would mean that innovation in that area would be low as well.
Large photo of this motherboard: <a href="https://www.flickr.com/photos/ibmevents/14051347355/sizes/o/" rel="nofollow">https://www.flickr.com/photos/ibmevents/14051347355/sizes/o/</a><p>They've masked all the chips with something black. Are they hiding which chips they're using, or is it something for thermal dissipation?
Two things. First, slightly off topic: is there any way this could be a negotiating position with Intel on price?<p>Second: while many CPU cores (with enough IO) are great for large Borg MapReduce jobs, I am curious to see whether Google will develop or adopt better software technology for running general-purpose jobs more efficiently on many cores. Properly written Java and Haskell (which I think Google uses a bit in-house) help, but the area seems ripe for improvement.
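To illustrate the easy case, here's a toy fan-out sketch in Python (made-up workload; the hard part is precisely the general-purpose jobs that don't decompose this cleanly):

    from concurrent.futures import ProcessPoolExecutor
    import os

    def crunch(chunk):
        # Stand-in for a CPU-bound task
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        N = 10_000_000
        workers = os.cpu_count()  # POWER8: up to 12 cores x 8 SMT threads per socket
        # Strided chunks so every worker gets an equal share of the work
        chunks = [range(i, N, workers) for i in range(workers)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            total = sum(pool.map(crunch, chunks))
        print(total)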
Funny layout. I would like to know why the PCI slots are spread out like that.<p>I know Google doesn't have a standard rack setup, but still, it would make sense to have all the expansion ports at the end of the board... no?
250W TDP in a package that size... As the article correctly states, it's about how many FLOPS you can get inside a rackmount case, and that TDP alone means you won't be able to put many of these in a single case.<p>A dual-socket board: 500W on CPUs, 600W with everything else. The power supply would have to be something special, but the bigger challenge would be getting the energy (i.e. heat) back out of the box.<p>GPUs have similar TDPs and issues - that's why the HSFs on top of them are so massive (and hence GPUs have a bit of an advantage here: they have an entire PCIe board to fit their cooling hardware on).<p>Finally, 4.5GHz? What the hell? In one clock cycle, a beam of light wouldn't even get halfway across the board (EDIT: not chip). Branch/cache/TLB misses may literally kill any reasonable performance you might hope to get out of it. Intel gets around this with years of market-leading research into branch predictors, caching models, etc., and matching that will be no mean feat.<p>I know IBM isn't exactly new to this game, but AFAIK x86 has always been faster, clock for clock, than POWER.<p>That said, I hope my concerns are misplaced. I'm hoping Intel gets some competition in the server room; it will benefit everyone.
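For what it's worth, the light-travel claim checks out; a quick sanity check (assuming c of roughly 3e8 m/s and a board roughly 50 cm across):

    # Distance light travels in one 4.5 GHz clock cycle
    c = 3.0e8            # speed of light in a vacuum, m/s
    f = 4.5e9            # clock frequency, Hz
    d = c / f            # metres per cycle
    print(f"{d * 100:.1f} cm per cycle")  # ~6.7 cm

    # Signals in copper traces propagate at roughly 0.5-0.7c, so per cycle
    # they cover only ~3-5 cm: well under half of a ~50 cm board.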
Between people shifting from PCs to ARM-powered phones and major data-center users doing their best to cut costs, this is shaping up to be a tough decade for Intel.
So they're saying it's easier to use a brand-new, incompatible little-endian Linux personality, with the associated new toolchains, new ports of low-level stuff, etc., than the standard big-endian Linux PPC64 stack...<p>Sounds kind of surprising even if IBM did some of the bring-up work ahead of time, but maybe they've got little-endian assumptions baked into many internal protocols/apps.
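For anyone wondering what a baked-in endianness assumption looks like, here's a made-up illustration (not Google's actual code): any protocol that serializes integers in native byte order silently changes meaning between big-endian ppc64 and a little-endian host:

    import struct

    value = 0xDEADBEEF
    # '<' forces little-endian, '>' big-endian; '=' would use native order
    le = struct.pack('<I', value)   # b'\xef\xbe\xad\xde'
    be = struct.pack('>I', value)   # b'\xde\xad\xbe\xef'

    # A reader on the "wrong" host that assumes the other byte order
    # reinterprets the same four bytes as a different number:
    print(hex(struct.unpack('>I', le)[0]))  # 0xefbeadde -- garbage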
Would these be too pricey as hypervisors for cloud compute? They seem to me ideal for thread-intensive applications like databases and on-demand transcoding.<p>What are some use cases for a server like this at Google? I'd love to see these available in the IBM Cloud (SoftLayer), but I think they will be too pricey and reserved for the enterprise.
I think it's interesting that they didn't include the "traditional" mouse/keyboard/VGA ports. Not particularly surprising since this is a server motherboard, but still interesting. I think I see an HDMI connector in the lower right, next to a tall silver port (possibly a USB connector).