But it’s also worth pointing out that newer iOS devices (those with an A7 or later A-series processor) add substantial hardware protections to thwart device cracking. In the rest of this post I’m going to talk about how these protections may work and how Apple can realistically claim not to possess a back door.
Normal password-based file encryption systems take in a password from a user, then apply a key derivation function (KDF) that converts a password (and some salt) into an encryption key. This approach doesn’t require any specialized hardware, so it can be securely implemented purely in software provided that (1) the software is honest and well-written, and (2) the chosen password is strong, i.e., hard to guess. The problem here is that nobody ever chooses strong passwords.
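To make the software-only approach concrete, here’s a minimal sketch in Python using PBKDF2 from the standard library. The salt size, iteration count, and sample password are purely illustrative choices, not anything Apple ships.

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # Stretch the password (plus a random salt) into a 256-bit encryption key.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt,
                               iterations, dklen=32)

salt = os.urandom(16)                     # stored alongside the ciphertext
key = derive_key("correct horse battery staple", salt)
print(key.hex())
```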
In fact, since most passwords are terrible, it’s usually possible for an attacker to break the encryption by working through a ‘dictionary’ of likely passwords and testing to see if any decrypt the data. To make this really efficient, password crackers often use special-purpose hardware that takes advantage of parallelization (using FPGAs or GPUs) to massively speed up the process. Thus a common defense against cracking is to use a ‘slow’ key derivation function like PBKDF2 or scrypt.
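A dictionary attack against this kind of design is almost embarrassingly simple. The sketch below assumes the attacker holds the salt and a verifier (here, a hash of the derived key) and simply replays the same KDF over a word list; the word list and verifier are made up for illustration.

```python
import hashlib

def derive_key(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt,
                               iterations, dklen=32)

def crack(salt: bytes, verifier: bytes, wordlist):
    # Replay the same KDF over each candidate; real crackers fan this loop
    # out across GPU or FPGA cores, which is exactly why the KDF's cost matters.
    for candidate in wordlist:
        if hashlib.sha256(derive_key(candidate, salt)).digest() == verifier:
            return candidate
    return None

salt = bytes(16)                                            # fixed salt, demo only
verifier = hashlib.sha256(derive_key("letmein", salt)).digest()
print(crack(salt, verifier, ["password", "123456", "letmein"]))   # -> letmein
```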
Each of these algorithms is designed to be deliberately resource-intensive, which does slow down normal login attempts, but hits crackers much harder. Unfortunately, modern cracking rigs can defeat these KDFs by simply throwing more hardware at the problem. There are ways to address this (memory-hard KDFs like scrypt are designed for exactly this purpose), but that’s not the direction Apple has gone.
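For a sense of what ‘memory-hard’ means in practice, here’s what tuning scrypt looks like (Python’s hashlib exposes it when built against a recent OpenSSL). The cost parameters below are illustrative, chosen so that each guess touches roughly 16 MiB of memory, which is what makes massively parallel cracking hardware expensive.

```python
import hashlib
import os

salt = os.urandom(16)

# scrypt cost parameters: n (CPU/memory cost), r (block size), p (parallelism).
# Memory use is roughly 128 * r * n bytes, so n=2**14, r=8 needs ~16 MiB per
# guess: negligible for one login, painful across millions of parallel guesses.
key = hashlib.scrypt(b"hunter2", salt=salt, n=2**14, r=8, p=1, dklen=32)
print(key.hex())
```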
Apple doesn’t use scrypt. Its approach is to mix in a 256-bit device-unique secret key called the UID, and to store that key in hardware where it’s hard to extract from the phone. Apple claims that it neither records these keys nor can access them. On recent devices (those with A7 chips), both the UID and the mixing process are protected within a cryptographic co-processor called the Secure Enclave. The result of mixing the user’s passcode with the UID is the ‘passcode key’.
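Apple doesn’t publish the exact tangling construction, but conceptually the passcode key derivation looks something like the sketch below, with the device-held UID mixed into the derivation so that the resulting key is useless off the device. The HMAC step is my own stand-in, not Apple’s actual algorithm, which runs inside the Secure Enclave against a hardware AES engine.

```python
import hashlib
import hmac
import os

def passcode_key(passcode: str, uid_key: bytes, salt: bytes,
                 iterations: int = 100_000) -> bytes:
    # Tangle the passcode with the device-unique UID, then stretch the result.
    # HMAC-SHA256 is a stand-in: Apple's real mixing is not publicly specified.
    tangled = hmac.new(uid_key, passcode.encode("utf-8"), hashlib.sha256).digest()
    return hashlib.pbkdf2_hmac("sha256", tangled, salt, iterations, dklen=32)

uid_key = os.urandom(32)   # in reality: fused into the silicon, never readable by software
salt = os.urandom(16)
print(passcode_key("123456", uid_key, salt).hex())
```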
That key is then used as an anchor to secure much of the data on the phone. Since only the device itself knows the UID, and the UID can’t be removed from the Secure Enclave, all password cracking attempts have to run on the device itself. That rules out the use of FPGAs or ASICs to crack passwords. Of course, Apple could write custom firmware that attempts to crack the keys on the device, but even in the best case such cracking would be pretty time-consuming, thanks to the 80ms-per-guess PBKDF2 timing.
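To get a feel for what that 80ms throttle buys, here’s the back-of-the-envelope arithmetic, assuming the attacker has to walk the entire keyspace at one guess every 80ms.

```python
SECONDS_PER_GUESS = 0.080                  # Apple's stated per-attempt cost
SECONDS_PER_YEAR = 365.25 * 24 * 3600

alnum_6 = 36 ** 6                          # 6 characters, lowercase letters + digits
print(alnum_6 * SECONDS_PER_GUESS / SECONDS_PER_YEAR)   # ~5.5 years to exhaust

pin_4 = 10 ** 4                            # a 4-digit PIN, by contrast
print(pin_4 * SECONDS_PER_GUESS / 60)      # well under an hour, even in the worst case
```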
Apple pegs such cracking attempts at more than 5½ years for a random 6-character password consisting of lowercase letters and numbers. PINs will obviously take much less time, sometimes as little as half an hour. Choose a good passphrase! So one view of Apple’s process is that it depends on the user picking a strong password. A different view is that it also depends on the attacker’s inability to obtain the UID.
Let’s explore this a bit more. The Secure Enclave is designed to prevent exfiltration of the UID key. On earlier Apple devices this key lived in the application processor itself. The Secure Enclave adds an extra layer of protection that holds even if the software on the application processor is compromised, e.g., jailbroken. One worrying thing about this approach is that, according to Apple’s documentation, Apple controls the signing keys used to sign the Secure Enclave firmware.
So using these keys, Apple might be able to write a special ‘UID-extracting’ firmware update that would undo the protections described above and potentially allow crackers to run their attacks on specialized hardware. Which leads to the following question: how does Apple avoid holding a back-door signing key that would let it extract the UID from the Secure Enclave?

