If you’ve bothered to look at Twitter or any technology news source, you’ve seen that Apple made a major announcement: Expanded Protections for Children. This has been written about by countless outlets, so I’ll assume you’re familiar with the basics.
The announcement covered a few new features being added to the next version of Apple’s operating systems, namely:
- Scanning of inbound and outbound messages for sexually explicit images.
- Scanning images being uploaded to iCloud for CSAM.
- Guidance and warnings in Siri & Search.
The changes to Siri and Search are simple and straightforward, with no notable privacy or security impact, so there’s no need to discuss them here. The changes to Messages to scan for sexually explicit images could be a powerful tool against awful abuse, yet could itself be misused (especially in abusive relationships). That said, others have explained this in detail, so there’s no need to go into it. Scanning for CSAM on your device, though, has Privacy Twitter in an uproar and has some interesting implications.
Effectively all major online services and cloud storage providers scan for known CSAM, including Facebook, Snapchat, Twitter, Reddit, Discord, Dropbox, OneDrive, Google services, and many others. Google, Microsoft, and Cloudflare all offer services that scan files to detect known CSAM. It is effectively the default position and should be assumed: if you store data on an online service, it’s scanned for CSAM.
This gets a bit more complicated when you introduce end-to-end encryption; a service can’t scan files it can’t decrypt. For example, WhatsApp states that it does scan unencrypted images for known CSAM, though it can’t scan encrypted messages. I should also note that WhatsApp has made it clear it won’t adopt Apple’s technique:
I read the information Apple put out yesterday and I'm concerned. I think this is the wrong approach and a setback for people's privacy all over the world.
People have asked if we'll adopt this system for WhatsApp. The answer is no.
— Will Cathcart (@wcathcart) August 6, 2021
Telegram, the kinda-end-to-end encrypted messaging app, was briefly removed from Apple’s App Store in 2018 for distribution of CSAM. Numerous groups dedicated to sharing CSAM have been identified on Telegram and similar apps, and there’s no shortage of indictments of people charged with this activity.
In an environment like this, where encryption prevents the scanning techniques used elsewhere, identifying and preventing the spread of this horrible material becomes a far more complex challenge. Matthew Green has written about the challenge of scanning in an E2E environment, in an excellent post that I recommend reading.
To understand what Apple has done, it’s essential to understand some basics of CSAM scanning. While this is a high-level overview and skips many details, it should be enough to gain insight into the design and decisions made in Apple’s new system.
Images (and videos) are hashed, though not with what we think of as a hash in cryptography; instead, the hash represents the visual content rather than the individual bits that make it up. Using the term loosely, these could be considered perceptual hashes; what matters is what a person would see, not the exact details of the image. This ensures that the hashing algorithm can still detect a match if an image is cropped, recolored, re-encoded, or otherwise tweaked.
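To make the idea concrete, here is a minimal sketch of a very simple perceptual hash (a “difference hash”). It is not PhotoDNA or NeuralHash, and the parameters are arbitrary; it only illustrates how a hash can capture coarse visual structure rather than raw bytes, and how near-duplicate images end up with nearby hashes.

```python
# A minimal difference-hash ("dHash") sketch -- illustration only, not PhotoDNA
# or NeuralHash. The image is shrunk so only coarse visual structure survives,
# then each bit records whether brightness increases from one pixel to the next.
from PIL import Image

def dhash(path: str, hash_size: int = 8) -> int:
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    # A small distance means the two images look alike, even if the files differ.
    return bin(a ^ b).count("1")
```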
There are a few of these hashing algorithms, though the most common and best known is PhotoDNA, developed by Microsoft. While it’s impossible to be sure because details of PhotoDNA are secret, it’s likely that if the algorithm were published and analyzed, it would be brittle and subject to manipulation.
The National Center for Missing & Exploited Children (NCMEC) maintains a list of PhotoDNA hashes that are made available to trusted partners for use in their scanning; these hashes are also highly protected and not available for public inspection (which is to say that NCMEC has to be trusted not to add hashes that shouldn’t be included, for example, political material). This secrecy is to prevent criminals from determining which images are being scanned for and which have not yet been catalogued by NCMEC or its partners. As revealing detection status can provide criminals with an advantage, most systems are designed to reveal as little as possible to the user who uploads an image.
While the maxim is to not rely on security by obscurity, CSAM scanning is traditionally highly dependent on obscurity. Dropbox, for example, refuses to reveal anything meaningful about how it scans its customers’ uploads — though it frequently works with NCMEC and law enforcement, as evidenced by court filings. Details remaining scarce is the norm.
Apple helpfully published a 12-page “technical” document (PDF) that outlines how the system works and provides some useful insight. While I’m glad that they provided this insight, in many ways it fails to go into sufficient detail for experts to understand the system confidently. In a system like this, details are critical, and some are either under-defined or not defined in a meaningful way at all. We shall proceed nonetheless, look at this new system, and do our best to understand how it works.
The scanning system is split between the user’s device and Apple’s servers, using a novel hashing algorithm, and a complex new extension of private set intersection (known as ftPSI-AD, or Fuzzy Threshold Private Set Intersection with Associated Data) to allow Apple’s service to determine if a match exists, and then decrypt certain data once enough matches are found.
The first new component of this system is NeuralHash, a novel machine-learning-based perceptual hashing algorithm that creates a hash based on the visual elements of an image and is robust against alterations. Unlike PhotoDNA, which produces a similar hash for images that have been altered, NeuralHash produces an identical hash, simplifying the matching process and enabling match detection via the secure multi-party computation that Apple’s system leverages.
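The practical consequence (my framing, not Apple’s) is that matching an identical hash reduces to an exact set-membership test, which is the shape of problem private set intersection handles, whereas a PhotoDNA-style similar hash needs a distance threshold. The hash values below are hypothetical placeholders.

```python
# Hypothetical hash values, just to contrast the two matching styles.
KNOWN_HASHES = {0x1F2E3D4C5B6A7988, 0x0123456789ABCDEF}

def match_exact(image_hash: int) -> bool:
    # Identical hashes: a set-membership test, which PSI-style protocols can handle.
    return image_hash in KNOWN_HASHES

def match_fuzzy(image_hash: int, max_distance: int = 10) -> bool:
    # Similar hashes: needs a Hamming-distance threshold, which is much harder
    # to evaluate inside a cryptographic protocol.
    return any(bin(image_hash ^ h).count("1") <= max_distance for h in KNOWN_HASHES)
```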
Apple’s definition of the algorithm is fairly minimal, and no useful additional material is provided to allow meaningful analysis.
The false-positive rate isn’t provided, so it’s impossible to understand just how often two unrelated images could result in the same hash — triggering a match that doesn’t exist. Given that NeuralHash is based on machine learning, it would have been most helpful for them to have explained how it can fail. Similarly, no information was provided about false negatives, where related images do not result in a matching hash. Combined, these omissions make it difficult to reason about the effectiveness of the algorithm.
More worrisome is the lack of any mention of adversarial research that attempts to exploit the algorithm to create false results. For example, is it possible to generate a specially crafted image that is falsely recognized as a match when in fact it’s not? There has been a great deal of research into how machine learning can be tricked into producing erroneous results, especially when it comes to images; the fact that this wasn’t addressed should make everyone question the safety of using NeuralHash.
Apple has a database of known CSAM hashes, produced in cooperation with NCMEC and others, which are then “blinded” and provided for download to devices. This blinding process effectively encrypts the hashes using a secret key known only to Apple. By blinding the hashes, it’s impossible for device owners or third parties to learn what images are or aren’t included in the known CSAM list.
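As a rough illustration of what blinding buys, here is a toy discrete-log sketch. Apple’s actual construction is elliptic-curve based and only described at a high level, so treat the group, the hash-to-group mapping, and every parameter below as stand-ins.

```python
# Toy sketch of blinding a hash database. NOT secure parameters; illustration only.
import hashlib
import secrets

P = 2**521 - 1                                   # toy prime modulus
server_secret = secrets.randbelow(P - 3) + 2     # blinding exponent, known only to Apple

def hash_to_group(neural_hash: bytes) -> int:
    # Map a NeuralHash onto a group element (toy mapping).
    return int.from_bytes(hashlib.sha256(neural_hash).digest(), "big") % P

def blind(neural_hash: bytes) -> int:
    # Raise the element to the server's secret exponent. Devices receive only
    # these blinded values, so they can't tell which images are on the list.
    return pow(hash_to_group(neural_hash), server_secret, P)
```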
It should be noted that Apple’s design does not appear to contain any provision for ensuring that every user receives the same list of hashes. This means that users in different regions could receive different lists, or even specific users could receive a different list. The odds of the system being abused in this way seem to be pretty remote though, as there are almost certainly more effective ways of targeting groups. In addition, Apple has stated that there is a manual review process in place before NCMEC or law enforcement is notified. Leveraging this flaw to target groups, such as political dissidents, would require not only adding new hashes to the database, but circumventing the manual review process. This manual review process may represent an important mitigating control that substantially reduces the value of compromising the process of adding new hashes to the database.
These blinded hashes are inserted into a database on a user’s device in a specific order, based on the original unblinded value — the implementation of private set intersection used in the system depends on the correct ordering of items in the database. Put another way, the location in the database is a function of the original hash value, and this location is used to establish a match (this is actually a bit of a simplification; they are using a Cuckoo table).
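A simplified sketch of that position-based lookup follows, ignoring the cuckoo-table machinery and collisions entirely; the table size and the slot function are assumptions.

```python
# Simplified single-slot stand-in for the cuckoo table: the slot is a function
# of the original (unblinded) hash, so a device can locate "the entry its image
# would correspond to" without learning whether it is actually a match.
import hashlib

TABLE_SIZE = 1_000_003   # hypothetical

def slot_for(neural_hash: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"slot" + neural_hash).digest(), "big") % TABLE_SIZE

def build_table(entries: list[tuple[bytes, int]]) -> dict[int, int]:
    # entries: (original NeuralHash, its blinded value from the step above)
    return {slot_for(h): blinded for h, blinded in entries}

def lookup(table: dict[int, int], neural_hash: bytes) -> int | None:
    # The device retrieves whatever blinded value occupies its image's slot.
    return table.get(slot_for(neural_hash))
```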
When an image is to be uploaded to iCloud, a “Safety Voucher” is created; this is where most of the magic happens. The Safety Voucher consists of three primary parts (sketched just after this list):
- Header
- Outer Envelope (encrypted with a key derived from NeuralHash)
- Inner Envelope (encrypted with device key)
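To make the nesting concrete, here is a hypothetical sketch of the layout; the field names and types are my assumptions, since Apple describes the structure only at a high level.

```python
# Hypothetical Safety Voucher layout -- field names and types are assumed.
from dataclasses import dataclass

@dataclass
class InnerEnvelope:
    ciphertext: bytes   # visual derivative + metadata, encrypted with the device key

@dataclass
class OuterEnvelope:
    ciphertext: bytes   # key share + InnerEnvelope, encrypted with a key derived
                        # from the NeuralHash / blinded-hash lookup

@dataclass
class SafetyVoucher:
    header: bytes       # derived from the image's NeuralHash
    outer: OuterEnvelope
```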
First, we need to take a step back to when you set up a new device. During the device setup process, an encryption key is generated and securely stored; this key isn’t known to Apple, only to the user’s device. This key is used for the implementation of Threshold Secret Sharing, which ensures that there must be at least a certain number of matches before data can be decrypted. Based on the threshold that Apple has set, they claim 1-in-1-trillion odds of a false positive leading to decryption and review, though they didn’t reveal the figures that went into that calculation.
For Threshold Secret Sharing, they are using Shamir Secret Sharing and generating each share deterministically from a unique identifier associated with each image — this ensures that a given image always produces the same key share (the x-coordinate is generated as F(fkey, id)). While Apple doesn’t have direct access to this device secret, it is shared with other devices owned by the same user; this prevents a single image stored on multiple devices from resulting in multiple key shares being generated for the same image. Unfortunately, the threshold for secret sharing does not appear to be publicly documented.
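Here is a minimal sketch of deterministic share generation consistent with that description: the image identifier fixes the x-coordinate via a keyed PRF, and the polynomial’s coefficients are derived from the device key so every share lies on the same polynomial. The field, the threshold value, and the coefficient-derivation step are all assumptions made to keep the example runnable.

```python
# Toy deterministic Shamir share generation -- parameters are illustrative only.
import hashlib
import hmac

PRIME = 2**127 - 1        # toy field modulus
THRESHOLD = 10            # hypothetical; Apple hasn't published the real threshold

def prf(key: bytes, msg: bytes) -> int:
    return int.from_bytes(hmac.new(key, msg, hashlib.sha256).digest(), "big") % PRIME

def poly_coeffs(device_key: bytes) -> list[int]:
    # Degree-THRESHOLD polynomial whose constant term is the device secret, with
    # the remaining coefficients derived deterministically from the device key.
    secret = int.from_bytes(device_key, "big") % PRIME
    return [secret] + [prf(device_key, b"coeff-%d" % i) for i in range(1, THRESHOLD + 1)]

def share_for_image(device_key: bytes, image_id: bytes) -> tuple[int, int]:
    x = prf(device_key, image_id)   # same image id -> same x-coordinate, hence same share
    y = 0
    for c in reversed(poly_coeffs(device_key)):   # Horner evaluation of f(x) mod PRIME
        y = (y * x + c) % PRIME
    return x, y                     # one Shamir share (x, f(x))
```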
Back to creating a Safety Voucher.
The first step in the process is to use NeuralHash to generate a hash of the image. This hash is first used to generate the Header; it is then used to look up an entry in the Blinded Hash database. This is where the ordering of that database becomes important: since the hashes are blinded, it’s impossible to know whether there’s an exact match, so the hash is used to determine a location in the database, and the blinded hash at that location is used in the following steps. The device has no way of knowing whether that hash is a match or not.
The Header (which was derived from the NeuralHash of the user’s image) and the selected blinded hash are used to derive a new encryption key. This key will be used to encrypt the Outer Envelope.
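Continuing the toy discrete-log sketch from the blinding step, here is roughly how the device can derive a key that the server will only be able to reproduce if the underlying hashes actually match. The per-voucher randomness and the final KDF are assumptions drawn from the published cryptographic analysis rather than from Apple’s summary.

```python
# Toy device-side key derivation (same illustrative group as the blinding sketch).
import hashlib
import secrets

P = 2**521 - 1

def hash_to_group(neural_hash: bytes) -> int:
    return int.from_bytes(hashlib.sha256(neural_hash).digest(), "big") % P

def device_derive(neural_hash: bytes, blinded_entry: int) -> tuple[int, bytes]:
    beta = secrets.randbelow(P - 3) + 2                  # per-voucher randomness
    header = pow(hash_to_group(neural_hash), beta, P)    # H(x)^beta, goes in the Header
    shared = pow(blinded_entry, beta, P)                 # (H(y)^alpha)^beta for the table entry y
    key = hashlib.sha256(shared.to_bytes(66, "big")).digest()
    return header, key                                   # key encrypts the Outer Envelope
```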
A new key share for the device is generated, requiring threshold+1 shares to recover the original device key. This key share will be included in the Outer Envelope, along with the Inner Envelope.
The Inner Envelope now needs to be created; it’s encrypted using the device key and includes a visual representation of the photo and other information.
This data is then included with the photo when uploaded.
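Putting the pieces together, a hedged sketch of voucher assembly might look like the following. The choice of AES-GCM, the nonce handling, and the serialization are my assumptions; Apple does not publish these details.

```python
# Hypothetical Safety Voucher assembly -- cipher choice and framing are assumed.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def build_voucher(header: bytes,
                  outer_key: bytes,          # 32-byte key from the blinded-hash derivation
                  device_key: bytes,         # 32-byte device secret from setup
                  key_share: bytes,          # this image's serialized Shamir share
                  visual_derivative: bytes) -> dict:
    # Inner Envelope: visual derivative (and metadata) under the device key.
    inner_nonce = os.urandom(12)
    inner = inner_nonce + AESGCM(device_key).encrypt(inner_nonce, visual_derivative, None)

    # Outer Envelope: the key share plus the Inner Envelope, under the derived key.
    outer_nonce = os.urandom(12)
    outer_plaintext = len(key_share).to_bytes(2, "big") + key_share + inner
    outer = outer_nonce + AESGCM(outer_key).encrypt(outer_nonce, outer_plaintext, None)

    return {"header": header, "outer_envelope": outer}   # uploaded alongside the photo
```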
When a Safety Voucher is received, the server will first attempt to decrypt the Outer Envelope. The server uses the Header and the server-side blinding secret to derive an encryption key; if there is a match, the key will match the key generated by the device (which used the blinded hash to derive the key), and the Outer Envelope will be decrypted. If there isn’t a match, the encryption key derived will be different from what was created on the device, and the decryption of the Outer Envelope will fail.
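This is the server-side half of the toy derivation sketched earlier: the Header is raised to the blinding secret and run through the same KDF, so the derived key equals the device’s key only when the image hash matches the database entry in that slot.

```python
# Toy server-side key derivation (same illustrative group as the device sketch).
import hashlib

P = 2**521 - 1

def server_derive(header: int, server_secret: int) -> bytes:
    # If H(x) equals the hash behind the table entry the device used, this value
    # equals the device's "shared" value and the derived key matches; otherwise
    # the Outer Envelope simply fails to decrypt.
    shared = pow(header, server_secret, P)   # (H(x)^beta)^alpha == H(x)^(alpha*beta)
    return hashlib.sha256(shared.to_bytes(66, "big")).digest()
```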
When the server decrypts the Outer Envelope, it gains a share of the device secret and the still-encrypted Inner Envelope.
If the server has at least threshold+1 key shares, it can combine them and thus recover the device key. At this point, it can decrypt all of the Inner Envelopes it has access to. However, it should be noted that the server can only access those Inner Envelopes that matched a known CSAM hash, as otherwise it’s unable to decrypt the Outer Envelope. So even once the device key is recovered, other images remain inaccessible.
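For completeness, a minimal sketch of the recovery step, matching the toy deterministic-share sketch above: with at least threshold+1 distinct shares, Lagrange interpolation at x = 0 yields the device secret.

```python
# Toy Shamir recovery via Lagrange interpolation at x = 0 (same field as above).
PRIME = 2**127 - 1

def recover_secret(shares: list[tuple[int, int]]) -> int:
    # shares: [(x1, y1), ...] with THRESHOLD + 1 (or more) distinct x-coordinates.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME          # numerator of L_i(0)
                den = (den * (xi - xj)) % PRIME    # denominator of L_i(0)
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```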
While this design requires that at least threshold+1 matches exist before the device key can be recovered and the Inner Envelopes decrypted, it does result in Apple knowing who has matches, even though it can’t see what the matches are. Given the likelihood of false positives, this could result in users being falsely flagged as possessing CSAM.
To prevent this, Apple added an additional complication to the system: Synthetic Match Vouchers. For some images, instead of producing a real Safety Voucher, the device will randomly create a Synthetic Match Voucher, which is designed to match an entry in the Blinded Hash database.
The Header and the Outer Envelope encryption key are chosen such that they will result in a match to a randomly selected item in the blinded hash database. The Inner Envelope contains only null bytes (0x00) and is encrypted with a random key; the key share included in the Outer Envelope is also random, and thus doesn’t contribute to recovering the device key.
This process adds uncertainty and makes it impossible for Apple to determine which users have actual matches versus synthetic matches. Therefore, this feature is a significant privacy protection.
You may be wondering, as I did, how Apple efficiently recovers the device key once there are enough real matches. There is a detailed cryptographic analysis (credited to Dan Boneh and others) which provides the answer: a detectable hash function (see section 3.1). This document is a bit on the dense side, though it is well worth the time to read, as it provides insight not included elsewhere.
Once the threshold has been exceeded, the device key is recovered and the Inner Envelopes are decrypted. Apple has stated that they will perform a manual review of the exposed images, disable the account, and alert NCMEC and/or law enforcement if CSAM is discovered.
You may be wondering why Apple includes this manual step of reviewing images before they are reported; the answer is U.S. v. Ackerman. In that case, it was found that NCMEC is effectively a government actor due to the power that Congress has granted it. As a result, if NCMEC reviews a file, it is considered a 4th Amendment search; however, if Apple views the file and informs NCMEC of the content (conducting a private search that isn’t covered by the 4th Amendment), then NCMEC is free to view the file to confirm the accuracy of the report.
By manually reviewing the content prior to reporting, the search isn’t considered to be a violation of constitutional rights in the U.S., and thus can be used as evidence in court.
Scanning iCloud images is likely just the first step in deploying this system. It could easily be deployed for Messages, and even images that never leave the user’s device.
Based on how the system is designed, there doesn’t appear to be any need for the full image to be uploaded, only the Safety Voucher. Given this design choice, it’s logical to conclude that the intention is to move beyond iCloud into other areas.
Before moving on, I’d like to take a moment to comment on the documentation that Apple provided.
The technical summary is less detailed than I’d like, but it does provide quite a bit of helpful information. It’s not perfect, but better than nothing. The cryptographic analysis is excellent, and vital to get a better understanding of how the system actually works — a number of details listed above were only identified thanks to this document.
But there are other documents, such as the “analysis” from computer vision researcher David Forsyth; when I saw that it was from an expert in computer vision, I was hopeful that it would include a detailed technical analysis of NeuralHash; instead, it’s little more than the thumbs-up emoji expanded to as many words as possible. It’s almost worth reading to appreciate the uselessness of the document. Unfortunately, the other Technical Assessment documents are little better.
I’m disappointed that Apple didn’t provide a deeper analysis of critical portions of the system; however, they have offered more insight than most do. If it weren’t for the details they’ve provided, it would be impossible to have the robust conversation going on now. So while I have complaints about the documentation, I commend them for sharing as much as they did.
Many concerns have been raised; some are less likely to be an issue, and some represent genuine issues.
One of the most common is that the feature could target political dissidents, minorities, and others. I’ve discussed this some above already, though there’s more to be said. If Apple triggered a report automatically once the threshold was exceeded, I would be far more worried about this; instead, there’s a manual review process, making it far less likely that it could be used effectively (without witting cooperation from Apple). That said, in 2016 Facebook and others started using PhotoDNA to target terrorist content; while everyone would agree that terrorists should be stopped as well, the definition of terrorist varies, and in some places religious minorities or opposing political groups could be labeled terrorists. There is a clear precedent for using this technology beyond CSAM; when it’s deployed beyond the U.S., what else will Apple be asked to look for?
Consider a scenario: a nation-level police force provides Apple with additional hashes that are then added to the NeuralHash database; it then obtains a preservation order for any accounts that match those hashes, and later obtains a warrant for all information related to the user. This concern exposes the risk of law enforcement organizations leveraging laws and courts to bypass the policy controls that Apple has implemented, and using the detection method to target users even before the threshold has been reached and Apple’s manual review process is activated.
NeuralHash has unknown properties and could be exploited to create malicious images that result in false positives, or abused in other ways. Given that there’s no way to determine whether a given image matches one of the hashes in Apple’s NeuralHash list, it would be effectively impossible to craft such images. That said, if a flaw is discovered that leaks match status, you can bet people will go out of their way to troll others by setting them up with images that appear to be matches.
Scanning images uploaded to iCloud for known CSAM is unlikely to have a notable impact. In a memo to Apple employees (discussed further below), Marita Rodriguez, the Executive Director of Strategic Partnerships at NCMEC, said, “…I hope you take solace in knowing that because of you many thousands of sexually exploited victimized children will be rescued…”, which sounds great but is entirely unrealistic. This scanning system only looks for known CSAM that has been reported and added to the hash database; it targets those collecting and trading CSAM, not those producing new CSAM. While putting the criminals who traffic in this awful material in prison is a laudable goal, the impact is unlikely to resemble the goals NCMEC has expressed.
There are also countless varieties of the slippery slope argument. If we allow this, where does it end? These arguments sometimes have merit, but aren’t worth addressing in detail here.
It should come as no surprise that NCMEC has fully endorsed Apple’s solution, as they have been critical of end-to-end encryption and the resulting inability to perform content scanning. In 2019, they wrote:
We oppose privacy measures that fail to address how the internet is used to entice children into sexually exploitive situations and to traffic images and videos of children being raped and sexually abused. NCMEC calls on our tech partners, political leaders, and academic and policy experts to come together to find technological solutions that will enhance consumer privacy while prioritizing child safety.
NCMEC is an organization deeply passionate about its mission, and the comment “…enhance consumer privacy while prioritizing child safety” (emphasis added) speaks volumes about its views on privacy. In fact, there was a most interesting quote from them in a memo provided to Apple employees:
We know that the days to come will be filled with the screeching voices of the minority.
Marita Rodriguez, NCMEC
The privacy and security community has reacted with deep concern, and NCMEC’s response is to refer to us as “screeching voices of the minority” — this is not the way to embrace the community that can do the most to build reasonable solutions to the problem they are trying to solve. It is deeply offensive, and an affront to those who understand that these are complex issues with needs that are difficult to reconcile.
The fact that NCMEC hasn’t issued an apology and clarification is telling; they are doing little to work with privacy advocates to find solutions that meet these complex challenges, and instead attack and demean. This isn’t a productive approach, and NCMEC leadership should take this as an opportunity to reflect on how they interact with the world and look for ways to build healthy and productive relationships.
I would like to note that while this may be seen as a criticism of NCMEC and their approach to privacy, I fully support their mission, and I have worked in support of it. There are people currently in prison as a direct result of my work on this front.
When you buy something, is it really yours? You paid for it. You decide what to do with it. It’s yours, that simple. But what happens when the device no longer works for you, but is instead actively engaged in a law enforcement effort? That’s exactly what’s happening here: the next generation of Apple’s operating systems works not just at your discretion, but in the interest of the state, acting every time an image is uploaded to iCloud to further law enforcement.
In the past, the assumption was that unless malware was present, a device worked solely for, and under the control of, its owner. This is the first time, in the U.S. at least, that it will be routine for a device to instead be working for the state. So it’s your device, but it’s working for someone else.
This has created a visceral backlash against Apple’s new system, and it represents a tectonic shift in the world of computing. This fact must be remembered when looking at the system and the reactions, as it impacts both sides of the analysis.
We have an unquestionable moral obligation to protect children. Every person has a fundamental right to privacy. These statements are both true, and yet can be at odds. The fight against CSAM frequently brings these two head to head, with solutions to one harming the other.
To say that this is a complex problem is an understatement of truly titanic proportions. How do you stop someone from collecting or trading CSAM while at the same time not infringing on the right to privacy? Some solutions ignore privacy, and subject everyone to invasive practices. Some solutions place privacy above all else, and make countering CSAM nearly impossible. Some try to find some balance, which may or may not be right.
People have spent years of research, countless meetings and workshops, and endless discussions trying to solve this very problem. I’ve yet to see anyone respected claim they’ve found the correct answer.
The correct answer may not even exist; there may not be a proper balance. But awful people still exist, and they do awful things. There’s a moral obligation to protect children. People have a right to privacy. All of these will remain true, and we will continue looking for a solution that strikes the right balance.