In the past world of SPARC, RISC, i486, x86-64, Itanium, etc., the JVM seemed like a great solution to the problem of recompiling code for different CPU targets. But in today's world of cloud-based commodity hardware, can't we just optimize for x86-64 (C/C++) and be done with it? "Write once, deploy everywhere" has come full circle.
What about ARM and POWER? Google is deploying POWER in its data centers, and I think Nvidia is working on POWER CPUs for datacenters/supercomputers (they are both members of the recently formed "OpenPOWER Foundation").

As for ARM, I know there's a big reluctance among the big names to use ARM, because "meh, x86 is almost as good and we already have all the tools for it". But I think ARM is going to grow in a grassroots kind of way, from the very low end (Raspberry Pi) on up. It will happen slowly, but a decade from now I think ARM will have a decent share of the server market. x86 didn't kill the previous architectures in the enterprise overnight either; it took two to three decades to displace most of them.
x86-64 isn't the only platform that matters today (as higherpurpose notes, ARM and POWER are significant in various spheres), and its *relative* dominance in the mainstream of the market isn't guaranteed to endure.

The kind of native-platform independence that the JVM offers may be *less* important right now than it was when Java was introduced, but it hasn't stopped having a point.
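A minimal sketch of that point, assuming nothing beyond a standard JDK on each machine (the class name and output are illustrative, not from the thread): the same compiled .class file runs unmodified on an x86-64, ARM, or POWER JVM, whereas the C/C++ route means a separate build per target.

    // Portability.java - compile once with javac; the resulting
    // bytecode is architecture-neutral, so the identical .class
    // file runs on any host that has a JVM.
    public class Portability {
        public static void main(String[] args) {
            // os.arch reports the CPU the JVM happens to be running on;
            // the bytecode itself is the same on every host.
            System.out.println("arch: " + System.getProperty("os.arch"));
        }
    }

Running "java Portability" on different machines prints e.g. "amd64", "aarch64", or "ppc64le" from the same class file; the per-architecture work lives in the JVM port, not in your build.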