An Analysis of iPhone “Error 53”: Poorly Implemented Protection of a Secure System

February 14, 2016

This article originally appeared in Diginomica as “iPhone Error 53 – a study in bungled user experience, but great security”


Apple is one of the most polarizing tech companies around, attracting both loyal supporters and equally strident critics whenever it does something remotely newsworthy. The latest dustup concerns an ambiguous but apparently fatal error that some iPhone users report when trying to upgrade to the latest system version, iOS 9. According to a report in the Guardian publicizing the phenomenon,

“The issue appears to affect handsets where the home button, which has touch ID fingerprint recognition built-in, has been repaired by a ‘non-official’ company or individual. It has also reportedly affected customers whose phone has been damaged but who have been able to carry on using it without the need for a repair.”

Upon installing iOS 9, these users faced a wholly nondescript message reading, “The iPhone ‘iPhone’ could not be restored. An unknown error occurred (53).” Worse yet, there’s no easy way past it: the phone is effectively bricked, along with any unique data that hasn’t been backed up.


Why Would Apple Intentionally Brick a Phone?

Of course, there’s much more to the story, and the details trace back to the iPhone’s sophisticated hardware-based security. Indeed, this is a case where Apple can be praised for doing the right, and perhaps only reasonable, thing in the worst possible way. Although Apple hasn’t confirmed causality, the error typically (always?) occurs on phones where the Touch ID home button has been replaced with an aftermarket, non-Apple-authorized facsimile. This may seem like an arbitrarily punitive response from a greedy company looking to maximize repair revenue; however, once one considers the security function of Touch ID, it is entirely logical, and a virtual requirement if Apple is to assure the integrity of the hardware-based biometric security system that is the foundation of trust on which its Apple Pay mobile payment platform rests.

Understanding why requires looking at the details of Touch ID’s implementation. The home button scanner takes extremely high-resolution pictures of a fingerprint, including “minor variations in ridge direction caused by pores and edge structures”. As Apple describes,

“It then creates a mathematical representation of your fingerprint and compares this to your enrolled fingerprint data to identify a match and unlock your device. Touch ID will incrementally add new sections of your fingerprint to your enrolled fingerprint data to improve matching accuracy over time.”

Here is where the iPhone’s hardware security kicks in. Rather than storing this mathematical representation (which, to us, sounds like a cryptographic hash) of your fingerprint online in iCloud like a password, Apple keeps it in dedicated, isolated memory, called the Secure Enclave, built into each iPhone A-series SoC.

“Touch ID doesn’t store any images of your fingerprint. It stores only a mathematical representation of your fingerprint. It isn’t possible for someone to reverse engineer your actual fingerprint image from this mathematical representation. The chip in your device also includes an advanced security architecture called the Secure Enclave which was developed to protect passcode and fingerprint data. Fingerprint data is encrypted and protected with a key available only to the Secure Enclave. Fingerprint data is used only by the Secure Enclave to verify that your fingerprint matches the enrolled fingerprint data. The Secure Enclave is walled off from the rest of the chip and the rest of iOS. Therefore, iOS and other apps never access your fingerprint data, it’s never stored on Apple servers, and it’s never backed up to iCloud or anywhere else. Only Touch ID uses it, and it can’t be used to match against other fingerprint databases.”

Source: Unmitigated Risk (https://unmitigatedrisk.com/?p=389)
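The practical consequence is that apps only ever receive a yes-or-no verdict from this machinery. As a rough sketch of what that looks like in practice (the prompt string and the modern Swift spelling are ours, but LocalAuthentication is Apple’s real public framework for this), here is how an app asks the system to verify a fingerprint:

```swift
import LocalAuthentication

let context = LAContext()
var availabilityError: NSError?

// Confirm a fingerprint is enrolled and the sensor is usable on this device.
if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                             error: &availabilityError) {
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your account") { success, error in
        // The app receives only this Boolean verdict. The fingerprint
        // template itself never leaves the Secure Enclave, per Apple's
        // description above.
        if success {
            print("Match confirmed")
        } else {
            print("No match: \(error?.localizedDescription ?? "unknown")")
        }
    }
}
```

Note that nothing in this API exposes the fingerprint representation, or even a stable identifier derived from it.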

This explains why Apple effectively bans third-party fingerprint scanners on the iPhone. There’s nothing but Apple’s iOS bootloader preventing a rogue home button with embedded firmware from executing a man-in-the-middle (MitM) attack, creating a copy of the fingerprint representation before passing it on to the Secure Enclave. Of course, attackers would also need to reverse engineer Apple’s hash function (the “mathematical representation”), no doubt a daunting task; however, with enough trial and error (remember, the Secure Enclave holds a valid copy of the hash output), it’s conceivable. Having the digital version of one’s print would allow unlocking all kinds of things on the phone, including Apple Pay.
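To make the MitM scenario concrete, here is a deliberately simplified, entirely hypothetical sketch. None of these types exist in any Apple SDK, and the real sensor speaks to the Secure Enclave over a hardware bus, not Swift:

```swift
import Foundation

// Hypothetical illustration only; no such Apple interfaces exist.
protocol FingerprintSensor {
    func capture() -> Data      // the "mathematical representation" of a print
}

// A man-in-the-middle replacement part: it forwards each reading unchanged,
// so the phone behaves normally, but quietly keeps a copy for the attacker.
struct RogueSensor: FingerprintSensor {
    let genuinePart: FingerprintSensor
    let exfiltrate: (Data) -> Void

    func capture() -> Data {
        let template = genuinePart.capture()
        exfiltrate(template)    // skim a copy before handing it onward
        return template         // the Secure Enclave sees a normal reading
    }
}
```

Without some way to verify that the part it is talking to is genuine, the system cannot distinguish the two implementations, which is exactly the gap the pairing check described further below is meant to close.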

Mobile Payments: A Matter of Trust

Perhaps the most compelling feature of Apple Pay is that it neither stores nor uses your actual credit or debit card numbers when making a transaction. According to Apple,

“When you add your card, a unique Device Account Number is assigned, encrypted, and securely stored in the Secure Element … When you make a purchase, the Device Account Number, along with a transaction-specific dynamic security code, is used to process your payment. So your actual credit or debit card numbers are never shared by Apple with merchants or transmitted with payment. And unlike credit cards, on iPhone and iPad every payment requires Touch ID or a passcode, and Apple Watch must be unlocked — so only you can make payments from your device.”
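This tokenization is visible at the developer level in Apple’s PassKit framework: an app describes the purchase, and after Touch ID authorization it receives an encrypted, tokenized payment rather than a card number. A minimal sketch, with the merchant identifier and amounts invented for illustration:

```swift
import PassKit

// Describe the purchase; the app never touches card data.
let request = PKPaymentRequest()
request.merchantIdentifier = "merchant.com.example.store"   // hypothetical ID
request.supportedNetworks = [.visa, .masterCard, .amex]
request.merchantCapabilities = .capability3DS
request.countryCode = "US"
request.currencyCode = "USD"
request.paymentSummaryItems = [
    PKPaymentSummaryItem(label: "Example Store",
                         amount: NSDecimalNumber(string: "19.99"))
]

// Presenting this sheet is what triggers the Touch ID prompt; the delegate
// callback later receives a PKPayment carrying only the tokenized Device
// Account Number and a one-time cryptogram.
if let sheet = PKPaymentAuthorizationViewController(paymentRequest: request) {
    _ = sheet   // a real app would present this from a view controller
}
```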

Should a rogue Touch ID sensor be able to replicate the digital fingerprint model (hash), it could allow attackers to compromise the entire Apple Pay reservoir of Device Account Numbers and create transactions unbeknownst to the iPhone owner. Since mobile e-commerce sites and apps are now integrating Apple Pay into their checkout process, it would be relatively easy to monetize compromised accounts remotely, without ever getting near an NFC point-of-sale terminal. In this context, an Apple representative’s statement to the Guardian sounds much less capricious,

“When iPhone is serviced by an authorized Apple service provider or Apple retail store for changes that affect the touch ID sensor, the pairing [between device and sensor] is re-validated. This check ensures the device and the iOS features related to touch ID remain secure. Without this unique pairing, a malicious touch ID sensor could be substituted, thereby gaining access to the secure enclave. When iOS detects that the pairing fails, touch ID, including Apple Pay, is disabled so the device remains secure.”
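Apple has not published how that pairing is enforced, but it can be pictured as a factory-provisioned shared secret plus a challenge-response handshake. The following is purely speculative, with every type and key name invented, using CryptoKit primitives only to keep the sketch concrete:

```swift
import CryptoKit
import Foundation

// Purely speculative sketch: Apple has not published the real protocol.
struct HomeButtonSensor {
    let pairingKey: SymmetricKey   // provisioned at the factory, or re-provisioned
                                   // by Apple's tools during an authorized repair

    // Only a sensor holding the original pairing key can answer correctly.
    func respond(to challenge: Data) -> Data {
        Data(HMAC<SHA256>.authenticationCode(for: challenge, using: pairingKey))
    }
}

// Run by iOS (conceptually, by the Secure Enclave) at boot or restore time.
func pairingIsValid(sensor: HomeButtonSensor, enclaveCopy: SymmetricKey) -> Bool {
    let challenge = Data((0..<32).map { _ in UInt8.random(in: .min ... .max) })
    let expected = Data(HMAC<SHA256>.authenticationCode(for: challenge,
                                                        using: enclaveCopy))
    // An aftermarket button has no way to learn the key, so this comparison
    // fails and iOS disables Touch ID and Apple Pay: hence Error 53.
    return sensor.respond(to: challenge) == expected
}
```

Whatever the actual mechanism, the key point from the statement above stands: a replacement sensor that cannot prove it is the paired original is treated as untrusted, and Touch ID is shut off rather than left exploitable.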

My Take

Apple Pay, used by an estimated 10-20% of users with capable devices and accepted at millions of stores, is North America’s most successful mobile payment platform. Yet adoption has been slow compared to other Apple services, due in part to people’s unfamiliarity with, and resulting distrust of, the technology. Aside from convenience, the fact that the system is far more secure than traditional payment methods is undoubtedly a key factor for many early adopters. That trust in Apple’s security would be instantly undone if the Touch ID/Apple Pay system were compromised by rogue third-party hardware, damage that would jeopardize its rollout in China and other large markets.

We applaud Apple for doing the right thing to protect its security technology, but must chastise its utter lack of communication about the necessity of authorized repairs for the Touch ID button assembly, along with the equally opaque error message presented to users who install a third-party component. This is a classic case of nailing the product design but bungling the user experience, and it presents a teachable moment for other organizations implementing sophisticated technology: fail gracefully when users do the unexpected, and don’t leave them in the dark when the unusual invariably happens.