Here's a good article that summarizes it; the HP site does a horrid job of explaining what it actually is:

http://www.theregister.co.uk/2011/11/01/hp_redstone_calxeda_servers/
Most people are overlooking the change in thinking these servers represent.

Think of servers built from the ARM processors now in the pipeline for cellphones, which will come with 64+ cores. The biggest problem in a datacenter is not space or processing power, it's energy consumption and heat dissipation. Walking into a datacenter, you get the feeling the place is half empty, with room to fit 6x more servers. Today that can't be done because the building has no capacity for the extra air conditioning those servers would need.

Also, the way we process data has changed in recent years, for example with map-reduce (sketched below), which makes many cores far more useful than a single server with one massive 5GHz core. In fact, many servers today are IO bound, not CPU bound: there's excess CPU capacity.

Now think of a server with 64 ARM cores and an array of SSDs. It won't heat up the way mechanical disks and today's CPUs do, it has very small IO constraints thanks to SSD speeds, and it offers far more parallel processing power.
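To make the map-reduce point concrete, here's a minimal Python sketch of the pattern. It's entirely illustrative (nothing to do with Calxeda or HP software), and the 64-worker count just mirrors the hypothetical box above: the input splits into independent chunks, each core maps its own chunk, and the final merge is cheap.

    # Minimal sketch of the map-reduce pattern described above:
    # many slow cores each handle a chunk, then results are merged.
    from collections import Counter
    from multiprocessing import Pool

    def map_chunk(lines):
        """Map step: count words in one chunk of the input."""
        counts = Counter()
        for line in lines:
            counts.update(line.split())
        return counts

    def reduce_counts(partials):
        """Reduce step: merge the per-chunk counters."""
        total = Counter()
        for partial in partials:
            total.update(partial)
        return total

    if __name__ == "__main__":
        corpus = ["the quick brown fox", "the lazy dog", "the fox again"] * 1000
        n_workers = 64  # one worker per core on the hypothetical 64-core ARM box
        chunk = len(corpus) // n_workers or 1
        chunks = [corpus[i:i + chunk] for i in range(0, len(corpus), chunk)]
        with Pool(n_workers) as pool:
            partials = pool.map(map_chunk, chunks)
        print(reduce_counts(partials).most_common(3))

The point is that the map step scales out across however many cores you have, so lots of slow cores can beat one fast core on throughput, as long as the work divides cleanly.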
From http://www.theregister.co.uk/2011/11/01/hp_redstone_calxeda_servers/ :

*The sales pitch for the Redstone systems, says Santeler, is that a half rack of Redstone machines and their external switches implementing 1,600 server nodes has 41 cables, burns 9.9 kilowatts, and costs $1.2m.*

*A more traditional x86-based cluster doing the same amount of work would only require 400 two-socket Xeon servers, but it would take up 10 racks of space, have 1,600 cables, burn 91 kilowatts, and cost $3.3m.*

Hmm, let's see. That's about 7-8 grand per Xeon server, something like an HP ProLiant DL360R07 (2 x 6-core Xeons at 2.66GHz). That's 3 times as many cores as Redstone has nodes, each clocked at 2.66 times the frequency and doing more instructions per clock tick, too. And that's without hyperthreading.

Am I missing something big, or is the Redstone solution neither cost-effective nor energy-effective?
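For what it's worth, a quick back-of-the-envelope on those quoted figures. This is my own arithmetic: it assumes 12 cores per two-socket Xeon box, as above, and counts Redstone per node because the quote gives node counts, not core counts.

    # Back-of-the-envelope check of the quoted figures.
    redstone = {"units": 1600, "watts": 9_900, "cost": 1_200_000}      # server nodes
    xeon     = {"units": 400 * 12, "watts": 91_000, "cost": 3_300_000}  # cores

    for name, s in (("Redstone (per node)", redstone), ("Xeon (per core)", xeon)):
        print(f"{name}: {s['watts'] / s['units']:.1f} W, ${s['cost'] / s['units']:.0f}")

    # Redstone (per node): 6.2 W, $750
    # Xeon (per core):     19.0 W, $688

So the per-unit prices are comparable and Redstone's power advantage is roughly 3x per unit; the open question is how much work one ARM node actually does versus one Xeon core.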
I find it strange that they're using Cortex-A9 CPUs. I would have expected anyone going for the server market with ARM cores to use the Cortex-A15, which has 40-bit physical addressing via LPAE.
I think this is a highly significant move by ARM. It's amazing when you speak to datacentre people and they tell you how much of your server charges go on electricity and cooling. My recent example: £200 extra per year for an additional Opteron 6128, and £400 extra per year for that processor's increased power usage (rough sanity check below)!

There is an obvious gap in the market for low-power, low-heat, high-memory-throughput server processors. I'd just like to see a reference Linux distro which supports 16 ARM cores, as well as a reference server card...

The specs:

http://www.calxeda.com/products/energycore/ecx1000/techspecs

only refer to 32-bit memory addressing as well (i.e. <4GB of memory). It seems the wait will be for the ARMv8 64-bit processors to be integrated.

Interesting times!
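That £400/year figure looks plausible, for what it's worth. Here's a rough sanity check in Python; every input is my own assumption, not a number from this thread:

    # Rough check of the £400/year power figure (all inputs assumed):
    watts = 115           # assumed typical draw for the extra Opteron
    pue = 2.0             # assumed overhead multiplier for cooling etc.
    price_per_kwh = 0.20  # assumed £/kWh datacentre rate
    hours = 24 * 365

    annual_cost = watts / 1000 * pue * hours * price_per_kwh
    print(f"~£{annual_cost:.0f}/year")  # ~£403/year

With a cheaper electricity rate or less cooling overhead it comes out lower, but the order of magnitude holds, which is exactly why low-power server parts are interesting.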
I wonder if the Redstone part of the name came from a secret Minecraft fan at HP's headquarters.

I hope it succeeds, just to give Intel a run for their money. I really think that ARM is the future of computing (including the desktop).
Just yesterday I was dreaming of small server racks composed of Raspberry Pis and BeagleBoards. Wish I had a few million lying around… or a cheap dedicated link at home.