Performance per watt depends on the computer <i>and the workload.</i> It's not apparent from the SPECint benchmarks they show in the article, but chips like Atom are better than beefy server chips for some server workloads, and worse for others, when measuring performance per watt. Here's a paper from Berkeley's RAD lab which tries two Atom processors and a Xeon on several different server workloads, and compares their performance per watt:<p><a href="http://www.sigops.org/sosp/sosp09/papers/hotpower_10_chun.pdf" rel="nofollow">http://www.sigops.org/sosp/sosp09/papers/hotpower_10_chun.pd...</a><p>The tl;dr version is that what you really want are <i>hybrid</i> datacenters, where you can assign various workloads to different types of machines, and use each machine type for what it's best suited to do.
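The metric in that paper is just throughput divided by power draw, and the crossover is easy to see with a toy calculation. The numbers below are made up for illustration, not taken from the paper:

```python
# Toy performance-per-watt comparison. All watt and requests/sec figures
# here are hypothetical, chosen only to illustrate the crossover effect.
machines = {
    "Atom": {"watts": 30},
    "Xeon": {"watts": 200},
}

# Hypothetical requests/sec for each chip on two different workloads.
throughput = {
    "web_serving": {"Atom": 900, "Xeon": 4000},
    "db_heavy": {"Atom": 100, "Xeon": 2500},
}

for workload, results in throughput.items():
    for name, rps in results.items():
        ppw = rps / machines[name]["watts"]
        print(f"{workload:12s} {name}: {ppw:5.1f} req/s per watt")
```

With these (invented) numbers the Atom wins on the web-serving workload (30 vs. 20 req/s per watt) and loses badly on the database workload (3.3 vs. 12.5), which is exactly why a hybrid datacenter beats an all-of-one-kind fleet.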
I worked with one of these prototype boxes. I was using it for something a bit outside their common use case: clustered ETL processing of log data. I was quite happy with the performance. In the workloads I had that needed lots of threads, I was able to use the box to spin up a <i>lot</i> of nodes and crunch through several hundred GB of log data very quickly. The machines were easy to work with since they felt like normal Linux nodes, and the interconnect fabric made inter-process communication very snappy.
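That kind of log crunching is embarrassingly parallel, which is why a box full of weak cores works: split the files across many workers, then merge the partial results. A rough sketch of the pattern (the space-separated log format and status-code field are hypothetical, not what the parent actually processed):

```python
# Sketch of embarrassingly parallel log ETL: each worker counts status
# codes in its own files, then the partial counts are merged.
# Assumes a made-up log format with the status code as the last field.
from collections import Counter
from multiprocessing import Pool


def count_statuses(path):
    """Count occurrences of the last whitespace-separated field per line."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields:
                counts[fields[-1]] += 1
    return counts


def crunch(paths, workers=8):
    """Fan the files out across worker processes and merge the counts."""
    with Pool(workers) as pool:
        partials = pool.map(count_statuses, paths)
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total
```

On a SeaMicro-style box you'd run one worker per node instead of per local process, but the map-then-merge shape is the same.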
James Hamilton, VP at Amazon and author of the popular blog "Perspectives," has some interesting insights into, and great things to say about, the work SeaMicro and other new startups are doing to revolutionize the server industry.<p><a href="http://perspectives.mvdirona.com/2010/06/14/SeaMicroReleasesInnovativeIntelAtomServer.aspx" rel="nofollow">http://perspectives.mvdirona.com/2010/06/14/SeaMicroReleases...</a>
"People who are really serious about software should make their own hardware." - Alan Kay<p>This news is yet another data point that developers will need to hack concurrency sooner rather than later, as a core skill in one's professional repertoire. Off to learn Stackless PyPy, Clojure, Scala, etc...
Their tech overview PDF has less layman fluff than the article: <a href="http://dev.seamicro.com/sites/default/files/SeaMicroTechOverview.pdf" rel="nofollow">http://dev.seamicro.com/sites/default/files/SeaMicroTechOver...</a>
Interesting technology, but of no use to me as someone who does colocation and web servers for my clients, especially since they almost all use traditional RDBMSes like MySQL and Postgres.<p>The Atom is too underpowered and too RAM-limited for individual systems - in most cases you would do better with a 2x quad-core setup and 32-64GB RAM combined with OpenVZ or Solaris Zones. Lack of ECC = automatic disqualification for me as well.<p>For a company that is doing a lot of web serving a la Facebook or eBay I can definitely see the appeal. In such larger cases, power usage dwarfs many other considerations.
Interesting... I've never been in a really big datacenter, so I'd like to see some (hopefully non-biased) reviews from somebody who actually works in those places.<p>Would this really work well for the intended market? There are lots of startups over here that plan on massively serving webpages - would something like this (only cheaper :) ) make you reconsider using whatever cloud services you're currently using?<p>Articles I found on Google:<p><a href="http://gigaom.com/2010/06/13/seamicros-low-power-server-finally-launches/" rel="nofollow">http://gigaom.com/2010/06/13/seamicros-low-power-server-fina...</a><p>and the Wall Street Journal's take:<p><a href="http://blogs.wsj.com/digits/2010/06/14/seamicro-tries-to-rethink-the-internet-server/" rel="nofollow">http://blogs.wsj.com/digits/2010/06/14/seamicro-tries-to-ret...</a>
I do see a potential problem here. In the pictures, they show a bunch of Atom CPUs soldered directly to the board. That means dire things for service: if a single CPU has a flaw, you need to replace an entire board of CPUs.<p>Compare this to a standard blade setup, where you could just swap out a CPU, or an IBM System Z, where you could hot-swap one, and serviceability here doesn't look so great.
The way I read this, they are achieving savings by virtually muxing (or de-muxing, depending on viewpoint) much of everything that's not the CPU. Is this optimized to make supporting virtual servers with relatively low throughput more efficient?
This seems quite similar to the FAWN project at CMU. <a href="http://www.cs.cmu.edu/~fawnproj/" rel="nofollow">http://www.cs.cmu.edu/~fawnproj/</a> The idea is similar: if IO is the bottleneck, instead of scaling up IO, scale down the CPU power.
It would have been better to see a comparison with SGI (ex-Rackable) CloudRack systems, which take a bit of an in-between approach, using Xeons, but at least nominally seem to pack more cores into the same enclosure size. One of their power tricks, in addition to pulling DC converters back further from the computers, is to allow things to run hot, resulting in savings on cooling costs.
Great news! Hardware innovation typically means new software opportunities. If this turns out to be a generally accepted workhorse server design and not just a hotrod box, I wonder who will be the first in here to develop a profitable software product for it.<p>Dell products have features and services that make them enterprise-friendly; they are more than just hardware to the customer. So trying to compete with Dell head-on might not be the company's best strategy at first. Perhaps following the strategy EMC used with CLARiiON, selling through Dell, would be more of a money maker for the company.
<a href="http://en.community.dell.com/dell-blogs/b/direct2dell/archive/2009/05/19/dell-launches-quot-fortuna-quot-via-nano-based-server-for-hyperscale-customers.aspx" rel="nofollow">http://en.community.dell.com/dell-blogs/b/direct2dell/archiv...</a><p>Similar concept from Dell, from over a year ago. Although Fortuna seems a bit more conventional, I'm not sure that's a bad thing.
Anyone care to suggest some ideas on a few things:<p>a) who would likely buy these (corporates, SMEs, startups)?<p>b) it seems that they have increased the risk of single point of failure (e.g. 1 PSU taking down 128 nodes) what's the mitigation strategy?<p>c) what would an architecture on a box like this look like? Should I just be thinking of it as a cheap set of VPS nodes?<p>d) People keep mentioning the kind of processing these chips are good for and not so good for. Can someone be explicit about good real world uses and bad ones?
I would think that ARM chips use less power than Atom, but compatibility with existing software is a big selling point (as always). Still, this would create pressure toward ARM servers, and therefore ARM-compatible software.<p>OTOH, I get the impression that the bulk of the power savings come not from the CPU at all, but from virtualizing the other components. If so, the pressure toward ARM is much weaker.<p>And I suppose Intel wouldn't be supportive unless there were a compelling long-term reason to choose Atom.
Does this really solve a problem anyone has, though?<p>It seems we have an oversupply of CPU on modern boxes and an undersupply of I/O speed (or space, with SSDs) and memory.
With that many processors crammed into such a small case, isn't heat dissipation a problem? Or are the Atom chips used not as power-hungry as a standard commercial CPU?
There is no information about the chipset, memory specs, or expansion slots in particular.<p>There could be a huge bottleneck between CPUs and RAM, because of concurrent access to memory by so many CPUs while Atoms have very small caches.<p>That means real-world applications, like multi-threaded services (especially JVM-based ones, or simply MySQL), might not run efficiently.<p>I'm also very skeptical about hundreds of KVMs, which is the only working virtualization technology I know. =)
An ARM-powered variant would sit on top of this Atom-powered machine, quite possibly with the same or better performance per watt.<p>The article quotes the interviewee saying that ARM chips can be used, but it's not clear whether they actually have ARM versions of these machines. I'd be very interested in seeing SPECint_rate figures for those.
Big surprise, Dell, Hewlett-Packard and IBM not innovating? I can't believe it...<p>Just look at the computers they are selling. Most still offer 512MB of RAM as a default. What can you run on 512MB? Nothing like having to upgrade the day you receive your product. Consumers by and large need the guidance of manufacturers to make the right HW choices, and the manufacturers just want the cash. They will suck each segment dry until the market forces them to make changes. That's progress?<p>For all the advances in multi-monitor add-on SW, and the fact that Windows and Mac OS have supported multi-monitor control for years, try finding hardware already fitted with multiple display connections. In the end, the user is forced to customize their own equipment. And yes, I know most HN readers do this, but I am talking about the general public.<p>Great to see the little guys are still fighting. The big ones really don't give a damn.