This is pretty awe-inspiring, but as a programmer I know it would be fairly difficult to use this machine for existing workloads: so much code would have to be rewritten from typical x86 code to CUDA/OpenCL to use all those GPUs.

Personally, I'm more excited for the next wave of supercomputers built with racks of Xeon Phis [1].

[1] http://www.intel.com/content/www/us/en/high-performance-computing/xeon-phi-for-researchers-infographic.html.html
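To make the porting cost concrete, here's a minimal sketch of what the rewrite looks like for even a trivial loop. This is illustrative CUDA, not code from the article, and all names and sizes are made up:

    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Typical x86-era CPU code: a plain serial loop.
    void saxpy_cpu(int n, float a, const float *x, float *y) {
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    // The CUDA rewrite: one thread per element.
    __global__ void saxpy_gpu(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *hx = (float *)malloc(bytes);
        float *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        // The porting overhead: explicit device allocation and copies.
        float *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        saxpy_gpu<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

        printf("y[0] = %f\n", hy[0]);  // expect 4.0
        cudaFree(dx); cudaFree(dy); free(hx); free(hy);
        return 0;
    }

The loop body barely changes, but every ported routine drags along allocation, copies, and launch configuration, and that's before restructuring real code to expose enough data parallelism to keep the GPUs busy.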
The hexadecimal numbers in the design on the front panels of the racks appear to say, in part:

    ...Computing Oak Ridge National Laboratory Le...
(not too surprising, I suppose ;)
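For anyone who wants to check for themselves: each pair of hex digits is just one ASCII byte, so a few lines of C-style host code will decode it. The fragment below is made up for illustration, not copied from the panels:

    #include <stdio.h>
    #include <string.h>

    // Decode a string of hex digit pairs into ASCII text.
    int main(void) {
        const char *hex = "4f616b205269646765";  // decodes to "Oak Ridge"
        for (size_t i = 0; i + 1 < strlen(hex); i += 2) {
            unsigned byte;
            sscanf(hex + i, "%2x", &byte);
            putchar((char)byte);
        }
        putchar('\n');
        return 0;
    }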
Does anyone know why they have a separate disk I/O subsystem when they could more easily just plug drives into each node/motherboard for higher aggregate throughput, less complexity, and lower overall cost?

EDIT: Blade systems or no, the drives have to be physically placed somewhere; a separate subsystem can only take up more space, not less. Two reasons I can think of: (1) independent scaling of compute and storage, and (2) lack of software for a distributed filesystem. Most likely (2) plus inertia is the real reason; the others seem like rationalizations. For example, either they are able to take nodes offline or they aren't, and that need exists whether or not the disks are attached to the nodes.
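To illustrate what point (2) buys you: centers like this typically run a parallel filesystem such as Lustre, so every node sees one shared namespace and thousands of ranks can write into a single file. With node-local drives, each rank would write a private file that some other tool has to reassemble. A minimal sketch using MPI-IO over a shared mount; the path and sizes are placeholders, not details from the article:

    #include <mpi.h>

    // Every rank writes its slice of one shared file on the
    // parallel filesystem at a rank-dependent offset.
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double chunk[1024];
        for (int i = 0; i < 1024; ++i) chunk[i] = rank;  // dummy payload

        MPI_File fh;
        // "/lustre/scratch/out.dat" is a placeholder path.
        MPI_File_open(MPI_COMM_WORLD, "/lustre/scratch/out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);
        MPI_Offset offset = (MPI_Offset)rank * sizeof(chunk);
        MPI_File_write_at_all(fh, offset, chunk, 1024, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        MPI_Finalize();
        return 0;
    }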
The full single-page (print) version of the article is at http://www.anandtech.com/print/6421