From the article: "We've demonstrated this in our battery life tests already. Samsung's ATIV Smart PC uses an Atom Z2760 and features a 30Wh battery with an 11.6-inch 1366x768 display. Microsoft's Surface RT uses NVIDIA's Tegra 3 powered by a 31Wh battery with a 10.6-inch, 1366x768 display. In our 2013 wireless web browsing battery life test we showed Samsung with a 17% battery life advantage, despite the 3% smaller battery. Our video playback battery life test showed a smaller advantage of 3%."

It's worth noting that they're comparing this year's Intel with last year's ARM, and also that Intel has a process-node advantage of around 25% in its favour.

There's nothing stopping Intel from doing ARM chips down the line, or indeed some hybrid chip. They have many cards they can play if they don't get the sales they want.

I'd love the day when you could get a motherboard where the current Intel and ARM chips are pin compatible, so you could drop in whichever you like. Wouldn't that be a wonderful dream?
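As a rough back-of-the-envelope (a sketch using only the capacities and percentages quoted above, nothing from outside the article), you can normalise the runtime advantage by battery capacity to estimate the per-Wh efficiency gap:

    # Rough normalisation of the quoted battery results by capacity.
    # All figures come from the quote above; nothing else is assumed.

    ativ_capacity_wh = 30.0      # Samsung ATIV Smart PC (Atom Z2760)
    surface_capacity_wh = 31.0   # Microsoft Surface RT (Tegra 3)

    web_runtime_ratio = 1.17     # ATIV ran 17% longer in the web test
    video_runtime_ratio = 1.03   # and 3% longer in video playback

    def per_wh_ratio(runtime_ratio, cap_own, cap_other):
        """Runtime advantage corrected for battery capacity (runtime per Wh)."""
        return runtime_ratio * (cap_other / cap_own)

    print(per_wh_ratio(web_runtime_ratio, ativ_capacity_wh, surface_capacity_wh))
    # ~1.21, i.e. roughly 21% more web-browsing runtime per Wh
    print(per_wh_ratio(video_runtime_ratio, ativ_capacity_wh, surface_capacity_wh))
    # ~1.06, i.e. roughly 6% more video runtime per Wh

So once the slightly smaller battery is accounted for, the per-Wh gap is a bit larger than the headline 17% and 3% figures.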
I don't think this is a particularly fair test of ARM and x86 as architectures, though it is of Tegra 3 and Cloverview as specific systems. Cloverview just came out, but Tegra 3 is nearly two years old. Cloverview is on Intel's 32nm process node, whereas Tegra 3 is on TSMC's 40nm node (and remember that Intel's 45nm node actually had better performance than TSMC's 40nm).

And I've never quite understood NVidia's design choices with Tegra 3. All the other ARM SoC vendors used a low-power version of the process node they were on for their ARM cores, but NVidia used the standard version of the process for four of its cores and the low-power version only for the fifth. And yet those four cores are clocked only slightly faster than the cores of their competitors, while the fifth core is clocked way lower.

Now, NVidia's GPUs are top notch, and while Tegra hasn't been that successful in recent generations of phones, I assume NVidia's existing Windows software compatibility is why it was selected for the Surface.

EDIT: I seem to remember there were some big issues on the GPU front a couple of years ago, with NVidia's design tools and TSMC's 40nm process having something of an impedance mismatch. The big improvements between NVidia's GPUs from that time and their current generation attest to those issues being sorted out.
Does Windows RT offer anything other than ARM compatibility for better size/battery life? Is the whole confusing Windows RT / Windows 8 split something that could've been avoided if MS had known Intel would have these chips ready?
Intel would be wise to brand Clovertrail as something other than Atom (or Celeron). Clovertrail's perf and efficiency are very good, and the Atom name typically suggests otherwise.

If I could get a Surface Pro with a Haswell at the listed Surface Pro prices, it would be a no-brainer. But with a 3rd-gen Core i5 -- not so enticing.
I just found that Intel is about to release a server version of the Atom, codenamed Centerton, based on a 32nm process. These will be 64-bit processors and will support Intel's virtualization extensions.

Next year there will be a 22nm version of Centerton, Avoton, with a 14nm part following the year after. Centerton, and possibly Avoton, will be out before the first 64-bit ARM chips.

http://www.anandtech.com/show/6509/intel-launches-centerton-atom-s1200-family-first-atom-for-servers

It looks like Intel is taking the Atom's intended market seriously enough to extend its tick-tock cadence to it.
Does anyone know any more about this, or can someone explain the relation between frame rates and survey data?

*"User experience (quantified through high speed cameras mapping frame rates to user survey data)"*
They say they proved the Atom performance advantage, but the linked article shows nothing more than a JavaScript test, which probably has significant code-quality differences between the Intel and ARM JITs.
The problem is that Intel has already lost on the performance side. Intel has always needed three things to beat ARM, and it needs all three to be competitive: performance, energy efficiency and price.

With Atom, they've *always* had the performance advantage. In fact they had a big advantage over ARM when Atom came out; it was several times more powerful than the ARM chip inside the iPhone 3G.

But to be competitive, Intel needed to compete on the other two fronts as well. So they kept Atom's performance mostly the same while they tried, over the years, to push Atom down from a 10W TDP. But while Intel was lowering power consumption, ARM's chips were gaining in performance, and faster than Intel's. So now that Intel has finally reached parity with ARM in energy efficiency, ARM has surpassed it in performance with Cortex A15, and from what I've seen so far, by a lot.

Then there's also the pricing issue. I think they've improved there quite a bit. An Atom chip used to cost close to $100; now it's probably more like $40-$50 for this dual-core version. But that's still at least 50% more than an ARM chip.

So Intel still has a lot of catching up to do on overall competitiveness, because right now it looks like they've only traded a much bigger performance lead for energy-efficiency parity, and they're losing on the other two. I'd like to see how far they're willing to push Atom on performance before they realize it's going to cannibalize their Core i3 market.
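For what it's worth, running that pricing claim backwards as a quick sanity check (these are just the estimates above, not quoted prices): if $40-$50 is "at least 50% more than an ARM chip", the implied ARM SoC price is somewhere around $27-$33, since $40 / 1.5 ≈ $27 and $50 / 1.5 ≈ $33.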
That's some bad copywriting right from the start:

> *"The untold story of Intel's desktop (and notebook) CPU dominance after 2006 has nothing to do with novel new approaches to chip design or spending billions on keeping its army of fabs up to date. While both of those are critical components to the formula"*

If those components are "critical to the formula", then you can't say the dominance "has nothing to do" with them.

Critical means it absolutely has something to do with them too.