Last week, Apple announced three initiatives to protect children from sexual abuse. At least two of them have privacy implications, which has led some, such as the Electronic Frontier Foundation (EFF), to oppose Apple's actions. Additionally, many folks raised good questions, which led Apple to release a Frequently Asked Questions document (PDF link) a few days later to provide more detailed information. This is a complicated issue, and reasonable minds can differ on whether Apple is doing the right thing. Here is what you should know.
Siri and Search
Let's start with the least controversial new initiative. Apple is updating Siri and Search to provide parents and children with information about unsafe situations related to CSAM. CSAM stands for Child Sexual Abuse Material. People used to simply call this "child pornography," but "CSAM" has been increasingly used in the last 20 years to emphasize that the key is not that this information is pornographic but instead that it is evidence of child sexual abuse. With this initiative, people will be able to ask Siri about how to report child exploitation, and Siri will point to helpful resources.
These improvements seem like an obviously good idea. They will be part of an update to Apple's operating systems later this year. Of course, if this were the only change, I wouldn't be writing a blog post about this topic. It gets more complicated.
Sexually explicit photos in Messages
The second initiative takes place in the Messages app and is based on technology that Apple developed for the Photos app. The Photos app already has the ability to look at the images on your device and try to understand what is in each image using artificial intelligence (AI). For example, if you take a picture of your dog, your iPhone will often understand that there is a dog in the picture. Use the Search feature in the Photos app to search for "dog" and the iPhone (or iPad or Mac) will show you pictures that the iPhone thinks contain a dog, even if you yourself never tagged the picture as containing a dog. To ensure privacy, your pictures are analyzed on your device — not on some Apple server — so the analysis happens without Apple ever seeing your photos. As this feature has improved over time, there is now a big and growing list of words that you can use to search in the Photos app.
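To make the on-device idea concrete, here is a minimal sketch using Apple's public Vision framework. This is not the model that the Photos app uses internally; it simply illustrates that this kind of classification can run entirely on the device, with no photo ever sent to a server.

```swift
import Vision

// A minimal sketch of on-device image classification using Apple's public
// Vision framework. This is not the private model the Photos app uses; it
// only illustrates that the labeling happens locally on the device.
func localLabels(for imageURL: URL) throws -> [String] {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(url: imageURL)
    try handler.perform([request])
    let observations = (request.results as? [VNClassificationObservation]) ?? []
    // Keep only labels the model is reasonably confident about.
    return observations
        .filter { $0.confidence > 0.8 }
        .map { $0.identifier }   // e.g. "dog"
}
```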
Later this year, Apple will add the ability for your iPhone (and iPad and Mac) to tell that there is sexually explicit content in a photograph. Apple has not defined what it means by "sexually explicit" content. For example, is it all nudity or something else? And of course, not all nudity is CSAM. It may not include a child at all. Or it may include a child in a context that is not abusive, such as a parent's picture of his or her child in a bathtub.
But whatever it is that this AI can search for when it looks for sexually explicit content, it only occurs in a very specific context. Unlike the "dog" example I noted above, this does not occur at all in the Photos app. It only occurs in the Messages app. And it doesn't occur all the time in the Messages app, but instead only if you turn it on. And even then, it can only be turned on in a very specific circumstance: when you have an account configured as a family in iCloud and when there are children — people under the age of 18 — on the account.
If you are a parent and you have an iCloud family account, then this Fall, you will be able to turn on a new communications safety feature in Messages for the children in your family. The system works in two different ways.
First, a parent can turn on this feature for a child account that is age 12 or younger. With this function turned on, if the Messages app on the child's device receives or tries to send a sexually explicit image, then the image will initially be blurred or otherwise obscured.
Before the child can view the message, the child will see a warning message. In the example provided by Apple, the message was written in terms that are appropriate for and understandable by a child. For example, it explains that the image shows "private body parts that you cover with bathing suits." Additionally, parents have the option to turn on parental notifications so that if the 12-or-under child decides to look at the image notwithstanding the warning, the parent will be alerted. My understanding is that the parent will also see the picture in question. The child is told that an alert will go to their parents.
If a child is a teenager — age 13 to age 17 — then the system works the same except that there is no option for parental notification. A parent can still turn on the feature so that the teen receives the warning, but if the teen decides to look at the picture anyway, the parent will not be alerted.
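Putting the two tiers together, here is a rough sketch of the decision logic as I understand it from Apple's descriptions. The types and names are my own, purely for illustration; Apple has not published its implementation.

```swift
// A rough sketch of the rules described above, as I understand them from
// Apple's descriptions. The types and names here are my own, not Apple's.
struct SafetySettings {
    var featureEnabled: Bool
    var parentalNotificationsEnabled: Bool
    var childAge: Int
}

enum MessageImageAction {
    case showNormally
    case blurAndWarn(notifyParentIfViewed: Bool)
}

func action(imageIsExplicit: Bool, settings: SafetySettings) -> MessageImageAction {
    // The feature only applies to explicit images, only when a parent has
    // turned it on, and only for children under 18 on a family account.
    guard imageIsExplicit, settings.featureEnabled, settings.childAge < 18 else {
        return .showNormally
    }
    // Parental notification is only an option for children age 12 or younger.
    let notifyParent = settings.childAge <= 12 && settings.parentalNotificationsEnabled
    return .blurAndWarn(notifyParentIfViewed: notifyParent)
}
```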
Why is there no parental alert feature for teens? Privacy. For example, the type of sexually explicit image that a teenager is viewing may reveal private information about the teenager, such as if the teenager is exploring their own sexual orientation. For some families, revealing that information to a parent may actually result in a negative reaction such as, in some cases, child abuse by the parent.
Because this AI all takes place in the Messages app on the device, Apple itself does not get access to the messages or images. Apple is never alerted — only the parent(s). As Apple says in its FAQ: "None of the communications, image evaluations, interventions, or notifications are available to Apple." Moreover, this feature does not change the end-to-end encryption in Messages.
CSAM detection
The third new initiative is CSAM detection. This feature has nothing to do with the AI feature that I just described, and it works in a completely different fashion.
To try to combat child abuse in the United States, in 1984, the federal government established and continues to fund a private, non-profit organization called the National Center for Missing and Exploited Children (NCMEC). NCMEC serves as the national clearinghouse, providing a coordinated, national response to problems relating to missing and exploited children. NCMEC is one of the few entities in the United States that has legal permission to store a collection of CSAM.
Using a sophisticated perceptual-hashing technology developed by Apple called NeuralHash, each image in NCMEC's CSAM database is analyzed to create a unique CSAM hash. In other words, each picture in NCMEC's CSAM database is associated with a single unique CSAM hash. NeuralHash is sophisticated enough that the hash remains the same even if the image is cropped or altered in minor ways using common techniques, such as changing a color image to black and white. Apple has built the same NeuralHash analysis into your iPhone, so your iPhone can analyze a picture and determine its hash. If the hash associated with an image on your iPhone matches a CSAM hash provided by NCMEC, then there is an incredibly high likelihood that the image on your iPhone is CSAM.
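NeuralHash itself is not public, but a toy example shows the general idea of a perceptual hash: unlike a cryptographic hash, small changes to the image flip only a few bits, so near-duplicates produce nearly identical hashes. The "average hash" below is my own illustration, far simpler than what Apple actually uses.

```swift
// A toy "average hash" over an 8x8 grayscale grid (64 pixels). This is NOT
// NeuralHash, which is far more sophisticated; it only illustrates the idea
// that a perceptual hash barely changes when the image barely changes.
func averageHash(_ pixels: [[Double]]) -> UInt64 {
    let flat = pixels.flatMap { $0 }              // expects 64 values total
    let mean = flat.reduce(0, +) / Double(flat.count)
    var hash: UInt64 = 0
    for (index, value) in flat.enumerated() where value > mean {
        hash |= UInt64(1) << UInt64(index)        // set bit for "brighter than average"
    }
    return hash
}

// Two hashes "match" if they differ in only a handful of bits.
func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount
}
```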
When Apple rolls out its CSAM detection feature this Fall, all iPhones will have the ability to analyze images stored in the Photos app and determine the hashes. Also, all iPhones will contain a list of the CSAM hashes provided by NCMEC. If you choose not to use iCloud to upload your images to Apple's server, then no CSAM detection takes place. But if you do choose to do so — such as to back up your iPhone, or to use iCloud photo sharing so that the same image appears on your iPad, Mac, etc. — then Apple will first determine whether each image matches a NCMEC CSAM hash. If yes, then a red flag goes up. And if a certain number of red flags go up — Apple hasn't revealed what that number is — then Apple gets a notification that you appear to be uploading CSAM images to iCloud. [UPDATE 8/13/2021: Joanna Stern of the Wall Street Journal interviewed Apple's software chief Craig Federighi, and he stated that the number is around 30.] Apple says that the risk of this notification being a false positive is only 1 in 1 trillion.
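The real system uses cryptographic techniques (Apple describes private set intersection and threshold secret sharing) so that Apple learns nothing about individual matches until the threshold is crossed, but the underlying counting idea is simple. Here is a deliberately simplified sketch of that idea; the threshold of 30 comes from the Federighi interview noted above.

```swift
// A deliberately simplified sketch of the thresholding idea. The real system
// uses private set intersection and threshold secret sharing so that Apple
// learns nothing about individual matches until the threshold is crossed;
// this only shows the "count matches, flag past a threshold" logic.
let reportingThreshold = 30   // roughly the number Federighi cited

func crossesReportingThreshold(uploadedHashes: [UInt64],
                               knownCSAMHashes: Set<UInt64>) -> Bool {
    let matchCount = uploadedHashes.filter { knownCSAMHashes.contains($0) }.count
    return matchCount >= reportingThreshold
}
```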
When this notification goes to Apple, Apple next performs a manual review. An Apple employee is provided with low-resolution versions of the images sought to be uploaded to iCloud so that those images can be compared with low-resolution versions of the NCMEC CSAM images. If that employee confirms that there is a positive match, then Apple sends a report to NCMEC and disables the user's account. The person is notified and can appeal to have their account reinstated. I presume that NCMEC, upon agreeing that there is a match, alerts the appropriate law enforcement authorities.
Unlike the Messages feature described above that can be turned on for those under the age of 18, the CSAM detection system never performs its own AI analysis of the content of the pictures on your iPhone. Thus, to use the example I gave above, if you take a picture of your child in the bathtub, that will not trigger this system. This system is triggered only by images that are already in the NCMEC database, and although the analysis occurs on your iPhone, it is based not on the content of an image as determined by AI but on the hash associated with the image.
The controversy
I've seen people both praise and criticize these new initiatives. On all sides of the issue, I've heard reasonable arguments.
Some folks believe that what Apple is doing doesn't go far enough. These new initiatives are focused on two apps: Photos (and only when iCloud uploading is turned on) and Messages (and only for children whose parents turn on the feature). Thus, Apple isn't doing anything about images that you view on the Internet using Safari, images that children send or receive using an app other than Messages, photos stored in an app other than Photos, etc. If Apple wanted to do more, they certainly could. The problem, of course, is that these additional efforts would raise legitimate privacy concerns.
Some folks believe that what Apple is doing goes too far. First, I have heard concerns about false positives. For the CSAM detection feature, it seems to me that Apple has done a lot to vastly reduce the risk of false positives. For the AI detection in Messages for children, nobody outside of Apple knows how well it works.
Second, I've heard concerns about teen privacy: arguments that a 17-year-old ought to be able to send or receive consensual nude images without receiving warnings, even if their parents want to turn on that feature.
The most common complaint that I've heard is the slippery slope argument. Once Apple has a system in place in the United States for using CSAM hashes provided by NCMEC, will they do the same in other countries, including more authoritarian countries? What if that country's database includes not only CSAM but other images that the government finds objectionable, such as the famous picture of the man standing in front of the tank in Tiananmen Square in 1989, an image censored by the government in China? Could a country opposed to the rights of those in the LGBTQ+ community exploit this technology? Could an oppressive regime force Apple to flag pictures of human rights activists?
Apple has a simple answer to that concern: it has refused similar efforts in the past, and it will continue to say no in the future. Here is what Apple says in the FAQ:
Could governments force Apple to add non-CSAM images to the hash list?
Apple will refuse any such demands. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system flags photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.
[UPDATE 8/13/2021: In another Wall Street Journal article by Joanna Stern and Tim Higgins, Federighi stated that the database of images that Apple is working with comes not just from NCMEC but also from other child-safety organizations, including two that are in "distinct jurisdictions," which I presume means countries outside of the United States. Federighi also pointed out that auditors will be able to verify that the database consists of only images provided by those entities.]
So ultimately it all comes down to trust. Do you trust Apple to live up to its promise? And if there is a country that makes it illegal or otherwise impossible for Apple to say no even when Apple wants to do so, will Apple leave that country so that it can keep its promise?
In an article for Macworld, Jason Snell provides an excellent overview and then concludes that he is concerned. "Even if Apple’s heart is in the right place, my confidence that its philosophy will be able to withstand the future desires of law enforcement agencies and authoritarian governments is not as high as I want it to be. We can all be against CSAM and admire the clever way Apple has tried to balance these two conflicting needs, while still being worried about what it means for the future."
I respect that opinion and the concerns of others who don't think that Apple can keep its promise, but based on the research and analysis I have done so far, I disagree. In fact, I don't think that this new CSAM detection feature is much of a step forward on a slippery slope. Any government already has the ability to create laws about what Apple and every other company operating in that country can do. For some laws, Apple and other companies have chosen to comply after performing a risk-benefit analysis. But for other laws and requirements, I believe that Apple and other companies would say no or would leave — and the risk of them leaving and the associated economic impact in that country would, hopefully, convince that government to back down. China, for example, has a very different view on human rights than democratic nations, but it sees a big economic benefit from Apple producing iPhones in China. In other countries, there are many fans of Apple products, and they may help cooler heads to prevail. I'm not saying that any of this is easy, but I just don't think that it is anything new. These thorny issues have existed and will continue to exist, regardless of Apple's new features. I don't consider this a reason for Apple to avoid doing something good about CSAM while implementing it in a way that respects privacy.
Additionally, providing information to NCMEC is nothing new. As the NCMEC website explains, electronic service providers in the United States — companies like Apple, Facebook, and Google — are not required by federal law to search for CSAM, but if they become aware of it on their systems, they are required to report it to NCMEC. See 18 USC § 2258A. Over 1,400 companies are currently registered with NCMEC to make these reports. According to NCMEC (PDF report), in the year 2020, Facebook made 20,307,216 of these reports, Google made 546,704, and Apple made only 265. I'm sure that Apple made so few reports because it hasn't tried to find CSAM in the past, so those 265 were probably cases in which U.S. law enforcement first reached out to Apple during an investigation. Facebook, on the other hand, is providing a huge number of images to NCMEC, but unlike Apple, it does not do so in a way that analyzes images on your own device instead of on its servers, which is the approach that protects the privacy of images that do not match a NCMEC hash.
Moreover, by implementing this system now, Apple will have a way in the future to expand the encryption of information stored on its servers without people being able to complain that Apple's encryption makes it impossible to tell if Apple is hosting CSAM. With this system, Apple has a way to check images before they are uploaded without Apple itself actually looking at the content in any of your images. Yes, it does mean that we have to trust Apple not to fall down a slippery slope, but that is nothing new. Any time you decide to involve another person, organization, or company in your life, you are making a decision based upon how much you trust them.
These are not easy issues. Apple and other technology companies are bound to make mistakes, and hopefully they will learn from those mistakes and evolve over time. As they do so, they will face criticism from folks saying they have gone too far and from folks saying they have not gone far enough. Every customer can decide whether they can live with the decisions that Apple and others have made — and if not, they can take their business elsewhere. Hopefully, this post helps you to make your own decisions when these features are implemented in a few months.