Perhaps it's worth pointing out some context. Given the remarkable predecessor, K Computer, this was only a matter of time. (I heard a great early talk on K, and I wish I knew the speaker's name so I could give credit; they were obviously working quite hard at presenting in English, but flawlessly, and ended with what amounted to "we did it all ourselves, largely de novo".) It seems that, given the current circumstances, they haven't kept to the schedule: it wasn't supposed to be operating until *next* year.

There's a lot that's non-mainstream in this, as with K, partly influenced by the K experience. Unusually, it all appears to be designed specifically for the job, from the processor to the operating system (which is only partly GNU/Linux). Notably, despite the innovation, it should still run anything that can reasonably be built for aarch64 straight off and use the whole node, even if it doesn't run particularly fast; contrast GPU-based systems. (With something like simde, you may even be able to run typical x86-specific code; see the sketch at the end of this comment.) However, the amount of memory per core is surprisingly small, even less than Blue Gene/Q, and I wonder how that works out for the large-scale materials science work for which it's obviously intended. Also note Fujitsu's attention to reliability, though the oft-quoted theory of failure rates in exascale-ish machines was obviously wrong; otherwise, as the Livermore CTO said, he'd be out of a job.

The bad news for anyone potentially operating a similar system in a university, for instance, is that the typical nightmare proprietary software is apparently becoming available for it...
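
On the simde point, here's a rough sketch only (assuming the header-only SIMDe library is on the include path), showing how code written against x86 SSE2 intrinsics can compile unchanged on an aarch64 node once SIMDe's native aliases are enabled; SIMDe maps the _mm_* calls to NEON or scalar equivalents:

    /* Sketch: portable use of x86 intrinsics via SIMDe (assumed installed). */
    #define SIMDE_ENABLE_NATIVE_ALIASES
    #include <simde/x86/sse2.h>  /* supplies the _mm_* names on non-x86 targets */
    #include <stdio.h>

    int main(void) {
        /* Ordinary SSE2 code; on aarch64 SIMDe implements these with NEON
           (or scalar fallbacks), so no architecture #ifdefs are needed. */
        __m128d a = _mm_set_pd(1.0, 2.0);
        __m128d b = _mm_set_pd(3.0, 4.0);
        __m128d c = _mm_add_pd(a, b);
        double out[2];
        _mm_storeu_pd(out, c);       /* stores low element first */
        printf("%g %g\n", out[0], out[1]);  /* prints: 6 4 */
        return 0;
    }

Since SIMDe is header-only, building this on aarch64 needs nothing x86-specific beyond having the SIMDe headers available to the compiler.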