Working in microprocessors, I hear this a lot, but, in the long run, Intel has a fundamental advantage over ARM, and ARM doesn't seem to have a fundamental advantage over Intel [1].<p>People talk about RISC vs. CISC, and how ARM can be lower power because RISC instructions are easier to decode, but I don't hear that from anyone who's actually implemented both an ARM and an x86 front-end [2]. Yes, it's a PITA to decode x86 instructions, but the ARM instruction set isn't very nice, either (e.g., look at how they ran out of opcode space and overlaid some of their "new" NEON instructions onto existing encodings by reusing the otherwise unused 0b1111 ("never") condition code). If you want to decode ARM instructions, you'll have to deal with register fields sitting in different places for different opcodes (which costs extra logic, increasing size and power), decoding deprecated instructions that no one actually uses anymore (e.g., the "DSP" instructions, which have mostly been superseded by NEON), etc. (there's a small decode sketch after the footnotes below). x86 is actually more consistent (although decoding variable-length instructions isn't easy, either, and you're also stuck with a lot of legacy instructions) [7].<p>On the other hand, Intel has had a process (manufacturing) advantage since I was in high school (in the late 90s), and that advantage has only increased. Given a comparable design, historically, Intel has had much better performance on a process that's actually cheaper and more reliable [3]. Since Intel started taking power seriously, they've made huge advances in their low-power process. In a generation or two, if Intel turns out a design that's even in the same league as ARM, it's going to be much lower power.<p>This reminds me of when people thought Intel was too slow-moving and was going to be killed by AMD. In reality, they're huge and have many teams working on a wide variety of projects. One of those projects paid off, and now AMD is doomed.<p>ULV Haswell is supposed to have a TDP of ~10W with superior performance to the current Core iX line [4]. ARM's A15 allegedly has a TDP of ~4W, but if you actually benchmark the parts, you'll find that the TDPs aren't measured the same way. The A15 uses a ton of power under load, just like Haswell will [5]. When idle, the A15 won't use much power, but it will likely have worse leakage than Haswell, because Intel's process is so good. And then there's Intel's real low-power line, which keeps getting better with every generation. Will a ULV version of a high-end Intel part provide much better performance than ARM at the same power in a couple of generations, or will a high-performance version of a low-power, low-cost Intel part provide lower power at the same level of performance and half the price? I don't know, but I bet either one of those two things will happen, or that a new project will be unveiled that does something similar. Intel has a ton of resources, and a history of being resilient against the threat of disruption.<p>I'm not saying Intel is infallible, but unlike many big companies, they're agile. This is a company that was a dominant player in the DRAM and SRAM industries, made the conscious decision to drop out of DRAM and concentrate on SRAM when DRAM became less profitable, and then did the same with SRAM in order to concentrate on microprocessors. And, by the way, they created the first commercially available microprocessor. They're not a Kodak or Polaroid; they're not going to stand idle while their market is disrupted.
When Toshiba invented flash memory, Intel actually realized the advantage and quickly became the leading player in flash, leaving Toshiba with the unprofitable DRAM market.<p>If you're going to claim that someone is going to disrupt Intel, you not only have to show that there's an existing advantage, you have to explain why, unlike in other instances, Intel isn't going to respond and use their superior resources to pull ahead.<p>[1] I'm downplaying the advantage of ARM's licensing model, which may be significant. We'll see. Due to economies of scale, there doesn't seem to be room for more than one high-performance microprocessor company [6], and yet there are four companies with ARM architecture licenses that design their own processors rather than just licensing ARM's IP. TI recently dropped out, and it remains to be seen whether it's sustainable for everyone else (or anyone at all).<p>[2] Ex-Transmeta folks, who mostly went to Nvidia, and some other people whose project is not yet public.<p>[3] Remember when IBM was bragging about SOI? Intel's bulk process had comparable power and better performance, not to mention much lower cost and defect rates.<p>[4] <a href="http://www.anandtech.com/show/6355/intels-haswell-architecture" rel="nofollow">http://www.anandtech.com/show/6355/intels-haswell-architectu...</a><p>[5] Haswell hasn't been released yet, but the Intel parts I've looked at have much more conservative TDP estimates than ARM parts, and I don't see any reason to believe that's changed.<p>[6] IBM seems to be losing more money on processors every year, and the people I know at IBM have their resumes polished, because they don't expect POWER development to continue seriously (at least in the U.S.) for more than another generation or two, if that. Oracle is pouring money into SPARC, but it's not clear why, because SPARC has been basically dead for years. MIPS recently disappeared. AMD is in serious trouble. Every other major vendor was wiped out ages ago. The economies of scale are unbelievably large.<p>[7] ARMv8 is supposed to address some of this by making a large, compatibility-breaking change to the ISA and having the processor switch modes to maintain compatibility. It's a good idea, but it's not without disadvantages. The good news is that you don't have to deal with all this baggage in the new mode. The bad news is that you still have the legacy decoder sitting there taking up space. And space = speed: wires are slow, and now you're making everything else travel farther.
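To make the register-field point above concrete, here's a minimal, illustrative sketch in plain C of where the destination register lives in a couple of classic 32-bit ARM (A32) encodings, and of the old "never" condition code space that NEON reuses. The field positions follow the ARMv7-A reference manual; the function name, the crude class checks, and the example encodings are my own, and whole instruction classes are deliberately ignored.

<pre><code>/* Illustrative sketch only, not anyone's production decoder. */
#include <stdint.h>
#include <stdio.h>

static int a32_dest_reg(uint32_t insn)
{
    uint32_t cond = insn >> 28;

    /* cond == 0b1111 used to be the "never execute" condition; ARM reused
     * that space for unconditional instructions, including much of NEON,
     * which is a whole separate decode tree (ignored here). */
    if (cond == 0xFu)
        return -1;

    /* Multiply class (MUL/MLA): bits[27:22] == 000000 and bits[7:4] == 1001.
     * Here Rd lives in bits[19:16]. */
    if (((insn >> 22) & 0x3Fu) == 0x00u && ((insn >> 4) & 0xFu) == 0x9u)
        return (int)((insn >> 16) & 0xFu);

    /* Ordinary data-processing (ADD, SUB, MOV, ...): bits[27:26] == 00.
     * Here Rd lives in bits[15:12] -- a different spot than above, so even
     * "which register does this write?" needs a mux in front of it. */
    if (((insn >> 26) & 0x3u) == 0x0u)
        return (int)((insn >> 12) & 0xFu);

    return -1; /* loads, stores, branches, etc. omitted for brevity */
}

int main(void)
{
    uint32_t add = 0xE0821003u; /* ADD r1, r2, r3 -> Rd encoded in bits[15:12] */
    uint32_t mul = 0xE0010392u; /* MUL r1, r2, r3 -> Rd encoded in bits[19:16] */
    printf("ADD writes r%d, MUL writes r%d\n",
           a32_dest_reg(add), a32_dest_reg(mul));
    return 0;
}
</code></pre>

Toy example, obviously, but it's the kind of irregularity that adds up across a real front-end, and it's exactly the baggage that stays resident in the legacy mode mentioned in [7].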