I think the big use case for ARM in datacenters over the next few years is servers whose CPU usage is very low today--they're consistently network-bound, or they just act as a relatively dumb interface to RAM or disk (memcached, some distributed DBs, some dumb proxies). Baidu uses ARM for cloud storage, and Facebook used AMD servers for memcached despite AMD lagging Intel on speed. Basically, you look elsewhere when a Xeon is too much.<p>Someday there comes a point where apps that actually are compute-bound might want more, slower cores for power/density/cost/etc.--I just don't think that cutover is tomorrow for the kind of apps (most of) you or I work on.<p>For now: this is a Marvell-designed core that looks slower than the Cortex-A15-based Tegra K1 in a Chromebook (results posted elsewhere in the comments; it could be a clock-speed issue, not anything inherent to the core designs). Further out, there are some 64-bit ARM cores (Cortex-A57, X-Gene, and Project Denver, though that last may not wind up in servers) and process bumps (like TSMC 20nm) on the way. Related, check out <a href="http://www.anandtech.com/show/8580/hp-appliedmicro-and-ti-bring-new-arm-servers-to-retail" rel="nofollow">http://www.anandtech.com/show/8580/hp-appliedmicro-and-ti-br...</a> if you haven't. Of course, Intel isn't sleeping, and low-power x86 chips will improve too; there will be 14nm versions of the Atom-based Xeons someday. As ever, fun times.