Defense Against the Dark Arts

How the FBI likely hacked the San Bernardino shooter’s iPhone, and what this means for information security as a whole

To compromise Syed Farook’s iPhone, the FBI and its associates likely employed “dark arts” techniques used by black hat hackers

The FBI has officially dropped its case against Apple over the San Bernardino shooting, noting ominously that the “government has now successfully accessed the data stored on Farook’s iPhone and therefore no longer requires the assistance from Apple Inc.”

This came as a bit of a shock to the infosec and cryptography crowd. The FBI’s vehement testimony had so far indicated that the bureau had employed an exhaustive battery of complex attacks in trying to compromise the phone of Syed Farook, one of the shooters in the San Bernardino killings. For the bureau to have discovered, and implemented, an attack so abruptly is a bolt from the blue.

Details are slim to nil about exactly what happened. The FBI was apparently approached by a third-party security firm that offered to help compromise the phone. A week later the FBI returned, announcing that it had successfully accessed Farook’s data. The bureau has also commented that the method was specific to Farook’s model of iPhone and version of iOS, and that it was unlikely to help in compromising other phones (a question that has come up as other law enforcement agencies have lobbied the FBI for help compromising devices in their own investigations).

Whatever vector was used to get past iOS’ security and cryptography must have been very complex. And because the attack remains unpublished and unpatched, the vector that enabled the FBI to compromise the phone is likely still present in every Apple phone in the wild.

Like a medieval castle, the iPhone is protected by layers of independent defenses to provide an architecture called “defense in depth.”

The FBI’s objective was to defeat both the iPhone’s cryptography and iOS’ built-in anti-brute-force measures in order to access Farook’s data, such as his last text messages before and during the attack.

To understand how hard this task is, we should probably spend some time talking about the iPhone’s data security. Security for data at rest on the iPhone (a term we use in infosec to refer to data stored locally on a device or infrastructure) consists of a set of independent features that together provide defense in depth: a series of successively more difficult challenges for an attacker to overcome, not unlike the layers of walls that surround a castle:

  • The Moat: An access control system within iOS that restricts access to the device and its data without the use of a password. Users are presented this system in the form of the Lock Screen.
  • The Curtain Wall: Redundant security systems that enforce access control and attempt to detect and stop attacks that try to circumvent it. In the San Bernardino investigation, the “curtain wall” feature of most concern is the auto-wipe functionality: if N incorrect passwords are submitted at the lock screen, all data on the device is destroyed.
  • The Keep: Encryption. The final line of defense for data stored on the device is a layer of AES-256 encryption, with keys generated on-device by a pseudo-random number generator. For comparison, AES-256 is the same algorithm and key strength used by the US federal government to protect Top Secret-level classified information.
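The “keep” layer can be sketched in a few lines. This is a deliberately simplified, hypothetical model: real iOS entangles the passcode with a device-unique hardware key so that guesses must run on the phone itself, but the core idea of stretching a short passcode into an AES-256-sized key via a slow derivation function can be illustrated with Python’s standard library:

```python
import hashlib

def derive_data_key(passcode: str, salt: bytes) -> bytes:
    """Stretch a short passcode into a 256-bit key (simplified sketch).

    Real iOS also mixes in a hardware-bound UID key inside the device,
    which this toy model omits.
    """
    # 100,000 PBKDF2 iterations make each passcode guess deliberately slow,
    # which is what turns a 4-digit PIN into a meaningful barrier.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000, dklen=32)

salt = b"device-unique-salt"          # hypothetical per-device value
key = derive_data_key("1234", salt)
assert len(key) == 32                 # 256 bits: suitable as an AES-256 key
```

Even in this toy model, brute-forcing the derived 256-bit key directly is hopeless; the only practical avenue is guessing passcodes, which is exactly what the auto-wipe feature exists to punish.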
Side channel attacks on the iPhone’s physical memory have remained one of the few known ways to theoretically compromise a device

Compromising an iPhone to gain access to the raw data on the handset requires one to circumvent the full defense in depth architecture of the iPhone. This is extremely difficult, but there are two likely vectors the FBI’s contractors employed.

The first vector is a side-channel attack on the physical memory that stores all of the phone’s data. Side-channel attacks are a class of attacks focused on compromising an encrypted system by navigating around the encryption rather than trying to break that encryption with mathematics (another class of attacks called cryptanalysis).

According to experts within infosec, there are three likely methods the FBI could have used to perform a side-channel attack on the iPhone’s flash memory:

Key Recovery Attack:
When the iPhone deletes its data after N failed login attempts at the lock screen, it isn’t actually destroying anything. Instead, the phone deletes its own local copy of the encryption key protecting the data, effectively cryptoshredding the data by rendering the system unable to read the local drive.
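Cryptoshredding is easy to demonstrate. The sketch below uses a toy SHA-256-based stream cipher as a stand-in for the iPhone’s real AES implementation (all names here are hypothetical): once the key is deleted, the ciphertext still physically exists but is computationally unreadable.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (a stand-in for AES, for illustration only)."""
    out = bytearray()
    for block in range(0, len(data), 32):
        # Derive a 32-byte keystream block from the key and a counter.
        ks = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block:block + 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

key = secrets.token_bytes(32)                    # the phone's local key copy
ciphertext = keystream_xor(key, b"last text messages")

# With the key, the data is trivially readable (XOR twice = identity):
assert keystream_xor(key, ciphertext) == b"last text messages"

key = None  # "auto-wipe": only the key is destroyed, never the ciphertext
# The encrypted bytes remain on disk, but without the 256-bit key they are
# indistinguishable from random noise: the data has been cryptoshredded.
```

This is why recovering the key from memory, rather than attacking the ciphertext, is the attractive target.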

A key recovery attack focuses on extracting that key from the device’s physical memory. It is very hard to perform because it requires the attacker to discover the range of memory where the key is located, a decidedly non-trivial problem further stymied by the fact that careful cryptographers frequently move key material around in memory.
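One common heuristic for hunting key material in a memory dump is an entropy scan: code and structured data tend to have low byte-level entropy, while random key bytes score high. The sketch below is a simplified illustration (real tools also exploit structure such as the AES key schedule, and the dump here is fabricated):

```python
import math

def entropy_bits_per_byte(window: bytes) -> float:
    """Shannon entropy of a byte window; random keys score high, code lower."""
    counts = {}
    for b in window:
        counts[b] = counts.get(b, 0) + 1
    return -sum((c / len(window)) * math.log2(c / len(window))
                for c in counts.values())

def find_key_candidates(dump: bytes, window=32, stride=16, threshold=4.5):
    """Flag offsets whose 32-byte window looks random enough to be a key."""
    return [off for off in range(0, len(dump) - window + 1, stride)
            if entropy_bits_per_byte(dump[off:off + window]) >= threshold]

# Hypothetical memory dump: low-entropy filler with a 32-byte "key" at 512.
key = bytes(range(32))
dump = b"A" * 512 + key + b"A" * 512
print(find_key_candidates(dump))  # -> [512]
```

On a real handset the search space is vastly larger, the dump must first be obtained at all, and, as noted above, a defender can relocate or split the key to defeat exactly this kind of scan.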

Count Reset Attack:
Another way to cheat the iPhone’s failed-login limit is simply to alter the variable the system uses to count failed logins, resetting it as it approaches N and allowing the FBI to keep brute-forcing (i.e., “guessing”) its way past the lock screen.

Like the key recovery attack, however, a count reset attack requires you to locate and alter a range of memory. Pinpointing a single variable is extremely difficult, and given the computing architecture of the phone it may be nearly impossible* to execute in real life.
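The logic of the attack is simple even if the memory manipulation is not. The toy model below (all class and variable names are hypothetical, not real iOS internals) shows how rewriting the counter just before it reaches N lets an attacker exhaust a 4-digit PIN space without ever triggering the wipe:

```python
class ToyPhone:
    """Minimal model of a lock screen with an auto-wipe counter."""
    WIPE_AT = 10

    def __init__(self, pin):
        self._pin = pin
        self.fail_count = 0   # the variable a count reset attack targets
        self.wiped = False

    def try_pin(self, guess):
        if self.wiped:
            return False
        if guess == self._pin:
            return True
        self.fail_count += 1
        if self.fail_count >= self.WIPE_AT:
            self.wiped = True  # cryptoshred: the data key is destroyed
        return False

def brute_force_with_reset(phone):
    for guess in (f"{n:04d}" for n in range(10_000)):
        # The attack: rewrite the counter in memory before it reaches N.
        if phone.fail_count == ToyPhone.WIPE_AT - 1:
            phone.fail_count = 0
        if phone.try_pin(guess):
            return guess
    return None

phone = ToyPhone("7359")
assert brute_force_with_reset(phone) == "7359" and not phone.wiped
```

In the toy model the counter is a plain attribute; on a real device it lives somewhere in protected, possibly relocating memory, which is the asterisked difficulty the paragraph above describes.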

De-Capping Attack:
The final side-channel attack the FBI could have employed on the iPhone is arguably the hardest to pull off. Rather than extracting data from memory in software, this attack requires the attackers to truly embrace their inner Bond villain and reach for lasers and acid.

A de-capping attack is a side-channel attack that circumvents encryption by physically wiring a memory chip to another computer in a way that bypasses the higher-level software encryption. Many encrypted systems hold some data in a cleartext (unencrypted) state but guard access to that data behind layers of encryption. In the case of the iPhone, data on the chip may be held in cleartext, but reaching it requires winding one’s way through memory along a complex path guarded by an encrypted “map.”

De-capping circumvents all of this by soldering contacts from the unencrypted portions of memory to a new computer. The attacker must take great care when detaching the memory chip from its place on the original PCB so as not to damage it, and many de-capping attacks involve acids or even precision industrial lasers to minimize the probability of damaging and corrupting the system memory.

“Avada Kedavra”

Given the specific, targeted nature of the FBI’s attack it is likely that the agency’s contractors used a 0-day exploit to target the phone

However, the FBI’s own commentary betrays another vector it likely used to compromise Farook’s phone. According to an anonymous source within the FBI:

“A law enforcement official, speaking to reporters on condition of anonymity, would not reveal how it pulled off this hack. He said it was “premature” to say whether this method works on other devices — he could only confirm that it worked on this particular phone, an iPhone 5C running a version of iOS 9 software.”

Because of the focused nature of both the device make and the operating system, this hints at another more devious attack: an exploit written to take advantage of a software vulnerability in Farook’s phone.

Unlike memory attacks, which can theoretically be employed on a large set of phones and are not OS-specific, a software vulnerability in iOS that could expose data in a way to circumvent the iPhone’s encryption would likely be focused on a specific version of iOS (and possibly a specific model of handset). This seems to coincide with the FBI’s commentary.

If the FBI’s contractors did exploit a vulnerability to access the iPhone’s data, it opens a Pandora’s box of problems. First, there is no publicly known software vulnerability that circumvents iOS’ cryptographic protections. This means that whatever attack the FBI’s team used was a 0-day exploit: software that exploits a vector unknown to the manufacturer and the public, and one that is likely still present in other systems in the wild.

0-days are usually acquired in one of two ways: you either discover a vulnerability yourself or buy one on the black market. The former approach is much harder to pull off because it requires a seasoned security research team. Given that Apple already dedicates significant resources to its own research team hunting such exploits, organically discovering a 0-day requires either an immense amount of brainpower or an immense amount of luck.

Dark net marketplaces like InjectOr conduct billions of dollars in sales of software vulnerabilities and exploits every year

The latter approach, buying a 0-day off the black market, is frighteningly common. The black market for 0-day exploits conducts billions of dollars of business annually, with dubious players such as organized crime groups like the Russian mob among the typical buyers and sellers.

Recently, though, private companies have started brokering straw purchases of 0-day vulnerabilities for government hacking organizations. The practice was popularized by the Italian company HackingTeam, which a major data breach revealed to be powering its surveillance and investigation software with 0-day vulnerabilities purchased from popular dark net marketplaces like InjectOr. HackingTeam’s clients included the FBI, as well as non-governmental organizations like Deutsche Bank.

It is likely that the FBI’s anonymous “contractor” purchased a 0-day exploit targeting the OS version and model of Farook’s phone, then charged the FBI for the time and services needed to extract the data. While this was a single targeted case, it establishes a model for future business for this shadow contractor, and it likely encouraged the bureau to abandon its effort to establish legal precedent against Apple out of concern that its contractor, and the contractor’s dubious methods, would become public.

What happened when I reported vulnerabilities to the SEC

The FBI hack of the iPhone jeopardizes the cordial working relationship between the security research community, Silicon Valley, and the US federal government on vulnerability disclosure and remediation.

There are some serious ramifications to the FBI’s decision to stop looking at legal means for surveillance and start exploiting illegal software vulnerabilities.

While the FBI has employed hacking techniques in investigations before (most notably its attack on Freedom Hosting to take down child predator rings on the dark net), using 0-day vulnerabilities to avoid legal proceedings seriously damages the US federal government’s working relationship with tech companies.

Over the last decade the US federal government has established a very effective and congenial working relationship between themselves, domestic security researchers, and Silicon Valley tech companies on vulnerability discovery and remediation. Through organizations like US-CERT and working groups like FS-ISAC, the feds have done a great job of establishing a pipeline for researchers to discover critical vulnerabilities and companies building systems used by the federal government to remediate those vulnerabilities efficiently and quickly.

For example, when I was at AlienVault I discovered a vulnerability in a critical system within the SEC. I contacted the SEC (which was vulnerable) and US-CERT (which would report the vulnerability if it turned out to impact systems beyond the one I investigated), and both organizations received and responded to my submission in a reasonably timely fashion. Everything was surprisingly quick and easy.

With the FBI now exploiting 0-day vulnerabilities to avoid judicial review in their investigations, the incentive for Silicon Valley and the infosec research community to work with them on vulnerability response basically disappears.

If I’m an infosec researcher, what incentive do I have to share what I’ve discovered with the feds knowing that the first thing they may do with it is conduct warrantless wiretapping? If I’m a company like Apple, why should I spend cycles collaborating with the federal government on remediation if I know that they’re some of the hackers actively trying to break into my systems using 0-day vulnerabilities?

Hacking the iPhone without judicial review certainly provokes questions about the role of government and privacy. But the way that the FBI has likely gone about such actions has seriously jeopardized the bureau’s relationship with the infosec community — a relationship that has helped to keep critical infrastructure in both the private sector and the government reasonably secure over the past few decades.

In a world where criminal hackers are becoming more sophisticated and better armed, the last thing we need to do is handicap how we work together to solve problems.

*: Depending on the iPhone’s stack, the variable’s location may change so quickly that finding it on the device, given the visibility constraints the FBI may have, is impossible.

Principal PM for Cryptography and Security Products @HashiCorp. Formerly Defense/NatSec & Crypto @NetApp, VC @GGVCapital + @AmplifyPartners
