There's another technical surveillance method here that I feel more people should be talking about: monitoring iMessage communication.

iMessage is extremely secure[1], except for the fact that Apple controls the device list for iCloud accounts. The method would simply be for Apple to silently add another device, one under law enforcement's control, to a target's account. I say "silently" in that they would need to hide it from the target's iCloud management UI to stay clandestine, but that's it, just a minor UI change. iMessage clients will then graciously encrypt and send a copy of every message to/from the target to the monitoring device.

This would still work even with impossible-to-crack encryption. It wouldn't allow access to old messages, just stuff after the monitoring was enabled. It's the modern wiretap.

It mirrors wiretapping in that sufficiently sophisticated people could discover the "bug" by looking at how many devices iMessage is sending copies to when messaging the target (just inspecting the size of outgoing data with a network monitoring tool would probably suffice), but it would go a long way and probably be effective in a high percentage of cases.

The main thrust of the article is that encryption is not new, just the extent of it, particularly iMessage. Here's a way around that.

[1] http://images.apple.com/iphone/business/docs/iOS_Security_Feb14.pdf
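For anyone who finds it easier to see in code, here's a rough Python sketch of that fan-out behavior. The device names, the send_imessage helper, and the use of RSA-OAEP are all stand-ins of mine, not Apple's actual directory service or wire format; the point is just that the sender encrypts one copy per registered key and has no way to tell a hidden device from a legitimate one.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # Hypothetical device list, standing in for what Apple's directory
    # service hands the sender for a given iCloud account.
    device_keys = {
        "targets-iphone": rsa.generate_private_key(public_exponent=65537, key_size=2048),
        "targets-ipad":   rsa.generate_private_key(public_exponent=65537, key_size=2048),
        # The silently added device under law enforcement's control:
        "monitoring-box": rsa.generate_private_key(public_exponent=65537, key_size=2048),
    }

    def send_imessage(plaintext: bytes) -> dict:
        """Encrypt one copy of the message per registered device.
        The sending client can't distinguish a hidden device from a real
        one; it just encrypts to every public key it is given."""
        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)
        return {name: key.public_key().encrypt(plaintext, oaep)
                for name, key in device_keys.items()}

    ciphertexts = send_imessage(b"meet at noon")
    print(len(ciphertexts), "encrypted copies sent")  # one per device, hidden or not

This is also why the "count the copies" detection idea works: the extra ciphertext has to go somewhere, so the outgoing traffic grows with every device on the list.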
> The Secure Enclave is designed to prevent exfiltration of the UID key. On earlier Apple devices this key lived in the application processor itself, and could (allegedly) be extracted if the device was jailbroken and kernel patched.

Speaking as a jailbreaker, this is actually incorrect. At least as of previous revisions, the UID key lives *in hardware* - you can ask the hardware AES engine to encrypt or decrypt using the key, but not what it is. Thus far, neither any device's UID key nor (what would be useful to jailbreakers) the shared GID key has been publicly extracted; what gets extracted are secondary keys derived from the UID and GID keys, but as the whitepaper says, the passcode lock key derivation is designed so that you actually have to run a decryption with the UID to try a given passcode. Although I haven't looked into the newer devices, most likely this remains true, since there would be no reason to decrease security by handing the keys to software (even running on a supposedly secure coprocessor).
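To make the "you have to run a decryption with the UID to try a given passcode" point concrete, here's a toy Python sketch. The function names, iteration count, and use of SHA-256 are my own assumptions, not Apple's actual derivation; the idea is only that every iteration is tangled with an AES operation keyed by a value software can use but never read.

    import os
    import hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    # Stand-in for the fused UID: code below can *use* it via uid_aes_encrypt,
    # but (in real hardware) could never read it out.
    _HARDWARE_UID = os.urandom(32)

    def uid_aes_encrypt(block: bytes) -> bytes:
        """Models the hardware AES engine: callers get ciphertext, not the key."""
        enc = Cipher(algorithms.AES(_HARDWARE_UID), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()

    def derive_passcode_key(passcode: str, iterations: int = 1000) -> bytes:
        """Each round mixes in a UID-keyed AES call, so every passcode guess
        has to be run on the device that physically holds the UID."""
        state = hashlib.sha256(passcode.encode()).digest()
        for _ in range(iterations):
            state = hashlib.sha256(uid_aes_encrypt(state[:16]) + state).digest()
        return state

    key = derive_passcode_key("0000")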
Now if only it was possible to turn off remote installation of applications on both iOS and Android devices, this kind of security would actually mean something.

Right now, you can do full disk encryption on an Android device (which seems likely to become hardware-assisted on future devices, similar to the solution mentioned in the article). If you pick a sufficiently strong passphrase, that should keep your data secure even on devices without hardware assistance. However, if the device is turned on and locked (the common case), it's trivial to remote-install an arbitrary app, including one that unlocks the phone. (You can do this yourself with your Google account, which means anyone with access to that account can do so as well.)

It would help to be able to disable remote installation of any kind; that wouldn't have to come at the expense of convenience, because the phone could just prompt (*behind the lockscreen*) to install a requested app.
If someone obtains your phone, and prevents you from initiating a remote wipe (perhaps they have you in custody, or perhaps they have isolated the phone so that it cannot receive the wipe command), it sounds like this technology will do a good job of preventing them from decrypting your data from the phone if you have a decent passcode. They cannot throw GPUs or FPGAs or clusters or other custom hardware at the problem of brute forcing your passcode because each attempt requires computation done by the Secure Enclave using data only available in the Secure Enclave. That limits them to trying to brute force with no parallelization and 80ms per try [1].

However, assuming they have an appropriate warrant, can't they get your iCloud backups and try to brute force those? Maybe I'm being an idiot and overlooking something obvious, but it seems to me the encryption on the backups CANNOT depend on anything in the Secure Enclave.

That's because one of the use cases that iCloud backup has to support is the "my phone got destroyed, and now I want to restore my data to my new phone" case. To support this, it seems to me that the backup encryption can only depend on my iCloud account name and password. They can throw GPUs and FPGAs and all the rest at brute forcing that.

My conclusion then is that when I get a new iPhone, I should view this as a means of protecting my data on the phone only. It lets me be very secure against an attacker who obtains my phone, but not my backups, provided I have a good passcode, where "good" can be achieved without having to be so long as to be annoying to memorize or type. A passcode equivalent to 32 random bits would take on average over 5 years to brute force.

To protect against someone who can obtain my backups, I need a good password on iCloud, where "good" means something considerably more than equivalent to 32 bits.

[1] I wonder if they could overclock the phone to make this go faster?
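A quick back-of-the-envelope check of those numbers, assuming the 80ms-per-attempt floor holds and that on average you find the passcode halfway through the keyspace:

    # Rough brute-force estimates at 80 ms per Secure Enclave attempt,
    # no parallelization. The keyspaces below are illustrative choices.
    SECONDS_PER_GUESS = 0.080
    YEAR = 365.25 * 24 * 3600

    def average_crack_time(keyspace: int) -> float:
        """Expected time: half the keyspace on average."""
        return keyspace / 2 * SECONDS_PER_GUESS

    for label, keyspace in [
        ("4-digit PIN",               10**4),
        ("6-digit PIN",               10**6),
        ("6 chars, lowercase+digits", 36**6),
        ("32 random bits",            2**32),
    ]:
        t = average_crack_time(keyspace)
        print(f"{label:28s} ~{t / YEAR:7.2f} years ({t / 3600:9.1f} hours)")

The 32-bit row comes out to roughly 5.4 years on average, which is where the "over 5 years" figure above comes from.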
    1. [...]
    2. [...]
    3. [...]
    4. [...]
    5. The manufacturer of the A7 chip stores every UID for every chip.
I'm a total layman, but the UID has to be created at some point and so it can be known by someone. Wouldn't the easiest way be to just record it for every chip? Apple wouldn't even have to know about it.
> (Apple pegs such cracking attempts at 5 1/2 years for a random 6-character password consisting of lowercase letters and numbers. PINs will obviously take much less time, sometimes as little as half an hour. Choose a good passphrase!)

Do not use a simple PIN passcode on your phone. In particular, if you use fingerprint access, there is no reason not to have a long, complex password.
I read through the article - including the hand-wavy "Apple has never been cracking your data" conclusion - but I don't understand what has changed since previous versions of iOS other than more data being encrypted.

Apple claims they can't decrypt data, but the article suggests that they can simply run the decryption on the local phone with custom firmware. Most people choose a 4-digit PIN, and at 80 milliseconds per guess, that means Apple should be able to crack your phone in about 13 minutes.

If you use a longer passcode, your data is more secure - but I thought that was *always* the story with Apple.

So what, if anything, has changed (other than more data being encrypted)?
Invasive attacks for extracting the UID depend on exactly how it's "implemented in hardware".

It could be a total lie, and hardwired or in mask ROM per revision (but I doubt that, too easy to discover).

It could be in a one-time programmable block somewhere that gets provisioned during manufacture - a flash block/EEPROM with write-only access (externally at least), or a series of blowable fuses, or even laser/EB-trimmed traces.

All of those one-time programmable methods are susceptible to the person operating the programmer recording the codes generated, although managing and exfiltrating that much data would make it rather tricky.

The method of storage also influences how hard it is to extract through decapping and probing/inspection.

If I had to design something like this (note: not a crypto engineer), I'd have some enclave-internal source of entropy run through some whitening filters, and read from it until the output meets whatever statistical randomness/entropy checks apply, at which point it is used to seed the UID and store it in internal EEPROM. That way, there's no path in *or* out for the key material, except when already applied via one of the supported primitives.

Then you need to protect your secrets! Couple of layers of active masks (can they do active resistance/path-length measurements instead of just continuity yet? That would annoy the 'bridge & mill' types :)). Encrypted buses, memory, and round-the-houses routing are also pretty much par for the course, but I'm sure they too could be improved on.

IIRC there was someone on HN who was working for a TV smartcard manufacturer who was reasonably confident they'd never been breached. Curious what he'd have to say (without an NDA :) )
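Something like this toy Python loop is what I have in mind for the provisioning step. The helper names and the statistical check are obviously made up, and a real enclave would do all of this in silicon rather than software; it's just to show that the UID can be generated and committed entirely inside the part, with no external path in or out.

    import os
    import hashlib
    from collections import Counter

    def raw_entropy(nbytes: int) -> bytes:
        # Stand-in for an internal hardware noise source (ring oscillators, etc.).
        return os.urandom(nbytes)

    def whiten(raw: bytes) -> bytes:
        # Crude whitening: hash a large pile of raw samples down to 256 bits.
        return hashlib.sha256(raw).digest()

    def looks_random_enough(candidate: bytes) -> bool:
        # Toy statistical check: no byte value wildly over-represented.
        return max(Counter(candidate).values()) <= 4

    def provision_uid() -> bytes:
        """Generate the UID inside the enclave and (conceptually) burn it
        into internal EEPROM. Nothing here is ever exposed externally."""
        while True:
            candidate = whiten(raw_entropy(1024))
            if looks_random_enough(candidate):
                return candidate  # would be written to one-time-programmable storage

    uid = provision_uid()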
> Apple doesn't use scrypt. Their approach is to add a 256-bit device-unique secret key called a UID to the mix, and to store that key in hardware where it's hard to extract from the phone. Apple claims that it does not record these keys nor can it access them.

Technically, this is where it breaks down. As in "Trust me, I don't store the keys."

If that hypothesis is true (they don't store these keys), then they'll have a hard time breaking your encryption indeed. But you must trust Apple at that point.

If there was a way to buy an anonymously replaceable chip with this cryptographic key in it and replace it on the phone like a SIM, then we'd be much closer to stating "Apple can't decrypt your phone".
> Secure Enclave allows firmware updates -- but before doing so, the Secure Enclave will first destroy intermediate keys. Firmware updates are still possible, but if/when a firmware update is requested, you lose access to all data currently on the device.

Given that the end user has entered the passcode, it shouldn't be hard to retain the data: after upgrading the Secure Enclave firmware, simply decrypt all data using the old key and re-encrypt it using the new key (derived from the same passphrase but a new UID).

You can also use a "two stage" approach where the encryption key derived in hardware is only used to protect a secondary key. In this case you just re-encrypt this secondary key, which in turn protects the data.
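A rough sketch of that two-stage approach in Python, using standard AES key wrapping as a stand-in for whatever the Secure Enclave actually does (the key names and flow here are mine, not Apple's):

    import os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    data_key = os.urandom(32)                     # secondary key protecting all user data

    old_hw_key = os.urandom(32)                   # stands in for passcode + old UID derivation
    wrapped = aes_key_wrap(old_hw_key, data_key)  # only this small blob is stored

    # After a Secure Enclave firmware update, while the user's passcode (and
    # therefore the old hardware-derived key) is still available:
    recovered = aes_key_unwrap(old_hw_key, wrapped)

    new_hw_key = os.urandom(32)                   # passcode + new UID derivation
    wrapped = aes_key_wrap(new_hw_key, recovered) # re-wrap; the bulk data is untouched

    assert aes_key_unwrap(new_hw_key, wrapped) == data_key

Only the wrapped secondary key has to be rewritten; the gigabytes of data encrypted under it never need to be touched.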
Is Apple's "Secure Enclave" anything more than ARM's TrustZone?<p><a href="http://www.arm.com/products/processors/technologies/trustzone/index.php" rel="nofollow">http://www.arm.com/products/processors/technologies/trustzon...</a>
> *Apple has built a nuclear option. In other words, the Secure Enclave allows firmware updates -- but before doing so, the Secure Enclave will first destroy intermediate keys. Firmware updates are still possible, but if/when a firmware update is requested, you lose access to all data currently on the device.*

That seems ideal. Let's hope Apple actually does that (probably not).
Two questions:

* Regarding the fixed 80ms timing: has there been any study of the average time needed (aside from why 80ms instead of 70ms or 90ms)? Also, for clarification: where is the entire PBKDF2-AES computation done? On the AES engine (which I believe is part of the A7 chip)? On a TPM chip (which might be a no, based on an unauthenticated source [1])?

* So this UID is created in every device and stored in the Secure Enclave, which has a dedicated path to the AES engine. But could we conduct any side-channel attack? I am pretty much a noob with hardware security.
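On the first question, my guess (and it is only a guess) is that the iteration count is simply calibrated per device so that one derivation takes roughly the target latency; 80ms would then be a chosen trade-off between unlock delay and brute-force cost rather than anything fundamental. A sketch of that calibration idea in Python, with hashlib's PBKDF2 standing in for the real UID-tangled derivation:

    import os
    import time
    import hashlib

    TARGET_SECONDS = 0.080   # the 80ms per-attempt figure from the whitepaper
    salt = os.urandom(16)

    def time_pbkdf2(iterations: int) -> float:
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"0000", salt, iterations)
        return time.perf_counter() - start

    # Measure a baseline, then scale the iteration count to hit the target.
    baseline_iters = 10_000
    baseline = time_pbkdf2(baseline_iters)
    calibrated = int(baseline_iters * TARGET_SECONDS / baseline)
    print(f"~{calibrated} iterations for a {TARGET_SECONDS * 1000:.0f} ms derivation")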
I wonder, is it in the realms of possibility for big-budget organizations like the NSA to simply read the UID from the silicon by means of physical analysis (e.g. a scanning tunneling microscope)?
The security is based on the premise that Apple is unable to decrypt because they do not keep a record of the device's unique ID that is the basis of all the cryptography.

What if that is not true?
What if the device has a built-in keylogger that simply captures the key material from user input, be it a passcode or a fingerprint?

Wouldn't it be at least partly better if this were based on truly public-key cryptography, with a random private key generated each time the device is factory reset?
With all the talk about super crypto tech used in the iPhone, isn't cloning a phone's data as simple as declaring the phone lost, paying a mobile phone store clerk for a new phone, and resetting the password for the iCloud account? The new iPhone would have access to all your info, pictures, and message logs within 1-2 hours.

Isn't this something any reasonable PI, police officer, FBI agent, or hacker can easily social engineer via legal and/or illegal means?
If you're wondering when Secure Enclave first appeared, I looked up the A7 processor. It's in the iPhone 5S, the iPad Air, and the iPad Mini (2nd generation).
So the article's answer to "Why can't Apple decrypt your iPhone?" is: "Because Apple says that no software can extract the UID".

In our post-Snowden world this is just ridiculous and intellectually insulting. The author is either naive beyond belief or he got paid to write this PR shill piece.

cf. https://gigaom.com/2014/09/18/apples-warrant-canary-disappears-suggesting-new-patriot-act-demands/
All the criticisms I've yet seen of Apple's iMessage security come down to "yes, it's probably completely locked down now and for all historical messages, but here's this obscure way they could open it up for messages in the future, therefore it's not secure".

Well, duh! It's their software. Of course they could backdoor it in future, such as if required to by the government. That's true of any software. Apple are asserting that right now there are no such backdoors and iMessages are secure. I've not seen any credible argument that this is not the case other than "maybe they're lying". OK. What's the alternative? Run everything through OpenSSL? That didn't work out so well. Maybe we should run everything on Linux using Bash scripts. Oops again!

Maybe Apple are lying. Maybe they will sell us all out. But if they do, these things always have a tendency to come out in the open eventually. So far they've had a pretty good track record of being on the level. In the end it's their reputation, and their appreciation of its value, that is the best and really the only guarantee we have, as with anyone else we rely on.