An obvious place for a backdoor is in remote management CPUs embedded in the network card.<p><a href="http://www.ssi.gouv.fr/IMG/pdf/csw-trustnetworkcard.pdf" rel="nofollow">http://www.ssi.gouv.fr/IMG/pdf/csw-trustnetworkcard.pdf</a><p>Network cards which support the RMCP/IPMI protocol are obvious points of attack. They can reboot machines, download boot images, install a new OS, patch memory, emulate a local console, and control the entire machine. CERT has some warnings:<p><a href="https://www.us-cert.gov/ncas/alerts/TA13-207A" rel="nofollow">https://www.us-cert.gov/ncas/alerts/TA13-207A</a><p>If there's a default password in a network card, that's a backdoor. Here's a list of the default passwords for many common systems:<p><a href="https://community.rapid7.com/community/metasploit/blog/2013/07/02/a-penetration-testers-guide-to-ipmi" rel="nofollow">https://community.rapid7.com/community/metasploit/blog/2013/...</a><p>"admin/admin" is popular.<p>The network card stores passwords in non-volatile memory. If anyone in the supply chain gets hold of the network card briefly, they can add a backdoor by plugging the card into a chassis for power, connecting a network cable, and adding an extra user/password of their own using Linux "ipmitool" running on another machine. The card, when delivered to the end user, now has a backdoor installed. If you have any servers you're responsible for, try connecting with IPMI and do a "list" command to see what users are configured. If you find any you didn't put there, you have a big problem.<p>CERT warns that, if you use the same userid/password for multiple machines in your data center, discarded boards contain that password. So discarded boards must be shredded.
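The audit suggested above can be scripted. Here is a minimal sketch (my own illustration: the BMC address, credentials, and expected-account set are placeholders, and the output parsing assumes ipmitool's usual "user list" column layout):

```python
import shlex

# Hypothetical BMC address/credentials -- substitute your own.
BMC = {"host": "10.0.0.42", "user": "ADMIN", "password": "ADMIN"}
EXPECTED_USERS = {"ADMIN"}        # accounts you actually provisioned

def list_cmd(bmc: dict) -> list:
    """Build the ipmitool invocation that lists users on LAN channel 1."""
    return shlex.split(
        "ipmitool -I lanplus -H {host} -U {user} -P {password} "
        "user list 1".format(**bmc)
    )

def unexpected_users(output: str) -> set:
    """Parse 'user list' output (ID  Name  Callin ...) for surprises."""
    found = set()
    for line in output.splitlines()[1:]:      # skip the header row
        fields = line.split()
        if len(fields) >= 2 and fields[0].isdigit():
            found.add(fields[1])
    return found - EXPECTED_USERS

# Usage (against real hardware):
#   import subprocess
#   out = subprocess.run(list_cmd(BMC), capture_output=True, text=True).stdout
#   print(unexpected_users(out))   # anything printed here is a problem
```

Run it against every BMC you own; any account it reports that you didn't provision warrants immediate investigation.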
While the main point of the article is interesting, some of the details don't really make sense.<p>For example, it would be difficult to make an instruction like fyl2x or fadd cause a privilege level change. The reason is that floating point instructions are executed on a separate unit (the FPU), with a separate decoder. This unit would not have the means to communicate back information such as "change privilege level" (normally it can only signal floating point exceptions, and other than that its only output is on the floating point registers). It would make more sense to encode the backdoor on an illegal opcode, i.e. an opcode that under normal conditions would generate a #UD exception but, with the correct values in the registers, would trigger some undocumented behavior.<p>Another question is how to hide this backdoor in the microcode. Presumably, at some point someone might stumble upon the backdoor and ask around about it. If the backdoor depends on some "magic values", it would be relatively easy to spot just by looking at the microcode.<p>There's also the author's point about "fixing" the processor at some point during the production process. I don't think that the author understands the way mass production of microchips works. It's simply not possible to do something like this while keeping the production price at the same level (or without someone noticing this extra step in the production process).<p>All in all, it sounds much easier to find security bugs in other parts of the system.
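The illegal-opcode idea can be sketched as a toy decoder model (entirely my own illustration; the opcode numbers, magic register values, and "cpl" field are hypothetical): unknown opcodes normally raise #UD, but one magic register combination silently escalates privilege instead of trapping.

```python
class UndefinedOpcode(Exception):
    """Models the #UD exception raised on an unrecognized opcode."""

MAGIC = (0xDEADBEEF, 0x00C0FFEE)    # hypothetical trigger values

def decode(opcode: int, regs: dict, state: dict) -> None:
    """Toy decoder: execute known opcodes, raise #UD on unknown ones --
    unless the magic register values are present, in which case the
    hidden path drops the current privilege level (cpl) to ring 0."""
    known = {0x90: lambda: None}    # NOP as the only 'documented' opcode
    if opcode in known:
        known[opcode]()
    elif (regs.get("rax"), regs.get("rbx")) == MAGIC:
        state["cpl"] = 0            # backdoor: silent escalation
    else:
        raise UndefinedOpcode(hex(opcode))
```

The point of the model: to anyone fuzzing opcodes without the magic values, the chip behaves exactly to spec, which is what makes this hiding spot plausible.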
Who needs dirty trace-able CPU backdoors when Intel's SGX technology will allow them perfect plausible deniability to give NSA (or China if they force them by law) the key to all "secure apps" that will be using the SGX technology:<p>> <i>Finally, a problem that is hard to ignore today, in the post-Snowden world, is the ease of backdooring this technology by Intel itself. In fact Intel doesn't need to add anything to their processors – all they need to do is to give away the private signing keys used by SGX for remote attestation. This makes for a perfectly deniable backdoor – nobody could catch Intel on this, even if the processor was analyzed transistor-by-transistor, HDL line-by-line.</i><p><a href="http://theinvisiblethings.blogspot.com/2013_09_01_archive.html" rel="nofollow">http://theinvisiblethings.blogspot.com/2013_09_01_archive.ht...</a>
The Novena laptop seems almost devoid of backdoors. <a href="http://www.wired.co.uk/news/archive/2014-01/20/open-source-laptop" rel="nofollow">http://www.wired.co.uk/news/archive/2014-01/20/open-source-l...</a>
A serious flaw in AMD's System Management Unit firmware was discovered very recently:<p><a href="http://media.ccc.de/browse/congress/2014/31c3_-_6103_-_en_-_saal_2_-_201412272145_-_amd_x86_smu_firmware_analysis_-_rudolf_marek.html#video" rel="nofollow">http://media.ccc.de/browse/congress/2014/31c3_-_6103_-_en_-_...</a>
Cool article. I didn't understand how the privilege escalation would be exploited. Obviously if the attacker already has access to the box, he can get root with this exploit.<p>I think a chip backdoor could also be based on information leaking rather than executing arbitrary code.<p>The steps would be:
1. Identify critical info, like crypto keys, from heuristics. This means keeping a special buffer, since you don't know at the beginning of an RSA operation that it's an RSA operation. The heuristics are not perfect, of course, but they work with standard apps like Firefox, GPG, and Outlook.<p>2. Exfiltrate the info, via spread-spectrum RF, timing jitter in packets, or by replacing random numbers in crypto. The article implies that since OSes and apps mix the hardware RNG with other sources, there's no point in subverting it. But the CPU can recognize common mixing patterns, like the one in the Linux kernel, and subvert the final output.<p>In this case the output entropy is good, but it also leaks some secret to a listener who has the right keys.
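The last variant of step 2, leaking through RNG output that still looks random, can be sketched as a toy kleptographic construction (my own illustration; ATTACKER_KEY and the one-bit-per-block encoding are assumptions, and a real backdoor would live in silicon, not Python):

```python
import hashlib
import secrets

ATTACKER_KEY = b"attacker-known-key"   # hypothetical key baked into silicon
SECRET = b"\xde\xad\xbe\xef"           # e.g. sniffed key material to leak

def backdoored_rng_block(secret_bit: int) -> bytes:
    """Return 16 'random' bytes whose keyed-hash parity equals secret_bit.

    Rejection-sample fresh randomness until a keyed PRF of the candidate
    output has the desired low bit. The output passes statistical tests,
    but a listener holding ATTACKER_KEY recovers one secret bit per block.
    """
    while True:
        block = secrets.token_bytes(16)
        tag = hashlib.sha256(ATTACKER_KEY + block).digest()
        if tag[0] & 1 == secret_bit:
            return block

def leak(secret: bytes) -> list:
    """Emit one RNG block per bit of the secret (LSB-first per byte)."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    return [backdoored_rng_block(b) for b in bits]

def recover(blocks: list) -> bytes:
    """What the listener with ATTACKER_KEY does: re-derive the parities."""
    bits = [hashlib.sha256(ATTACKER_KEY + blk).digest()[0] & 1
            for blk in blocks]
    out = bytearray()
    for i in range(0, len(bits), 8):
        out.append(sum(bit << j for j, bit in enumerate(bits[i:i + 8])))
    return bytes(out)
```

Each block is genuinely high-entropy to anyone without the key, which is exactly the "output entropy is good, but it leaks" property described above; the cost is a few extra sampling rounds per block, invisible from outside.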
Another recent article on HN <a href="https://news.ycombinator.com/item?id=8813029" rel="nofollow">https://news.ycombinator.com/item?id=8813029</a> on Intel Management Engine.
CPU backdoors are a very real concern, and not only in the CPU itself but in the growing complexity of the motherboard chipset. For example, a malicious memory controller could manipulate data on the way to the CPU, causing a faithful CPU to do malicious things.<p>For highly secured systems this is of growing concern. With the amount of hardware made in China, the supply chain is a considerable attack surface that has to be taken into account when sourcing electronics.
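A toy model of that memory-controller scenario (my own illustration; the addresses and uid-check routine are invented) shows how a perfectly honest CPU can be made to compute a dishonest result:

```python
class HonestDRAM:
    """Plain backing store: returns exactly what was written."""
    def __init__(self):
        self.cells = {}
    def read(self, addr: int) -> int:
        return self.cells.get(addr, 0)
    def write(self, addr: int, val: int) -> None:
        self.cells[addr] = val

class MaliciousController:
    """Sits between CPU and DRAM; passes everything through except
    reads of one targeted address, which it forges in flight."""
    def __init__(self, dram, target_addr: int, forged_val: int):
        self.dram, self.target, self.forged = dram, target_addr, forged_val
    def read(self, addr: int) -> int:
        return self.forged if addr == self.target else self.dram.read(addr)
    def write(self, addr: int, val: int) -> None:
        self.dram.write(addr, val)

def cpu_check_uid(mem, uid_addr: int) -> bool:
    """A 'faithful CPU' routine: grant root iff the stored uid is 0."""
    return mem.read(uid_addr) == 0
```

The CPU executes its check correctly every time; only the view of memory is subverted, which is why verifying the CPU alone is not enough.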
Given the fact that the NSA targets linux users [0], is it really that far-fetched that they could be adding backdoors to CPUs ordered by certain NSA targets?<p>I'm assuming most linux enthusiasts build their own rigs, as do I.<p>[0] <a href="http://www.linuxjournal.com/content/nsa-linux-journal-extremist-forum-and-its-readers-get-flagged-extra-surveillance" rel="nofollow">http://www.linuxjournal.com/content/nsa-linux-journal-extrem...</a>
for many modern desktops/laptops (including recent Apple machines, which i don't think was the case even just a few product cycles ago), Intel's vPro appears capable of many forms of surveillance/subversion.<p>in terms of understanding/mitigating these types of threats, i wish an open, crowdfunded project to reverse engineer the contents of intel's microcode updates existed to the point they were understandable by the tech press.<p>i also wish an easy-to-use package for blacklisting cpu-based and crypto-related kernel modules (like aes-ni) existed for a broad range of processors..<p>and of course only somewhat relatedly, i continue to wish the man page for random(4) would be rewritten in light of the risk of these types of backdoors.
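for what it's worth, blacklisting cpu-crypto modules is already doable with a plain modprobe config; a sketch (file name is arbitrary, and module names vary by distro and kernel -- aesni_intel is the usual Linux module for AES-NI, padlock_aes for VIA PadLock):

```
# /etc/modprobe.d/blacklist-cpu-crypto.conf  (illustrative sketch)
# "blacklist" stops automatic loading on hardware match;
# the "install ... /bin/false" lines also block explicit modprobe requests.
blacklist aesni_intel
install aesni_intel /bin/false
blacklist padlock_aes
install padlock_aes /bin/false
```

the hard part the parent is wishing for is the packaging: maintaining the per-processor module lists, not the mechanism itself.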
here is another article about CPU backdoors,<p><a href="http://theinvisiblethings.blogspot.com/2009/03/trusting-hardware.html" rel="nofollow">http://theinvisiblethings.blogspot.com/2009/03/trusting-hard...</a><p>and the discussion in the comment section of that one is good and contains some interesting pointers for further sources on this topic...<p>Also, here is a phrack article "System Management Mode Hack" on how to exploit Intel system management mode (with code at the end of the article).<p><a href="http://phrack.org/issues/65/7.html" rel="nofollow">http://phrack.org/issues/65/7.html</a>
It seems very unlikely that someone would be able to "apply the edit to a partially finished chip". Adding a fix like this is probably one of the most scrutinized processes in hardware design. After spending years designing and verifying chip functionality and getting the timing exactly right before production starts, there is a very high bar for getting these fixes into the production flow, because if the fix screws anything else up you are FUBARed. Given that, it is probably the hardest place you could ever try to put a backdoor.