Apple's Proposed Phone-Scanning Child Safety Features 'Invasive, Ineffective, and Dangerous,' Say Cybersecurity Researchers in New Study

On Thursday, more than a dozen cybersecurity experts criticized Apple for relying on "dangerous technology" in its controversial plan to detect child sexual abuse images on iPhones (via The New York Times).

The researchers have issued a 46-page critique of Apple's plans to monitor phones for illegal material, calling the approach ineffective, dangerous, and an invitation to government surveillance.

The August announcement revealed that the planned features include client-side (i.e., on-device) scanning of users' iCloud Photos libraries for Child Sexual Abuse Material (CSAM), Communication Safety to warn children and their parents when receiving or sending sexually explicit photos, and expanded CSAM guidance in Siri and Search.

The researchers claim that documents from the European Union indicate its governing body is seeking a similar program to scan encrypted phones for evidence of child sexual abuse, as well as imagery tied to terrorism and organized crime.

The researchers stated that resisting attempts to spy on and influence law-abiding citizens should be a national security priority.

Ross Anderson, a professor of security engineering at the University of Cambridge and a member of the group, stated that "the expansion of surveillance powers of state really is crossing a red line."

Beyond the surveillance concerns, the researchers found that the technology is not effective at identifying images of child sexual abuse. Within days of Apple's announcement, they noted, people had already suggested ways to evade detection, such as editing images slightly so they no longer match known material.

Susan Landau, a professor of cybersecurity policy at Tufts University, said that scanning a private device for anything illegal is extremely dangerous, posing risks to national security, business, and privacy.
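To illustrate the evasion concern, here is a minimal, hypothetical sketch in Python. It uses a simple average hash rather than Apple's actual NeuralHash, and the file name is a placeholder; the point is only that a naive perceptual hash can change after a slight edit, so an exact-match lookup against a database of known hashes could miss the altered image.

```python
# Minimal sketch (NOT Apple's NeuralHash): an 8x8 average hash shows
# why a slight edit can defeat naive exact-match image detection.
from PIL import Image, ImageEnhance

def average_hash(img: Image.Image, size: int = 8) -> int:
    """Downscale to size x size grayscale, then set one bit per pixel
    that is brighter than the mean."""
    small = img.convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# "photo.jpg" is a placeholder path for illustration.
original = Image.open("photo.jpg")
tweaked = ImageEnhance.Brightness(original).enhance(1.05)  # subtle edit

h1 = average_hash(original)
h2 = average_hash(tweaked)

# Even a 5% brightness change can flip bits near the mean, so an
# exact-match lookup against a set of known hashes may miss the copy.
known_hashes = {h1}
print("hamming distance:", hamming(h1, h2))
print("exact match after edit:", h2 in known_hashes)
```

Real perceptual-hash systems typically tolerate a small Hamming distance rather than requiring an exact match, but loosening that threshold trades evasion resistance against a higher false-positive rate.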

The cybersecurity experts said they began their research before Apple's announcement and are publishing their findings now to warn the European Union about the dangers of the plan.

Apple's decision to include the technology in an upcoming update to iOS 15 and iPadOS 15 has been criticized by privacy advocates, security experts, cryptography specialists, academics, politicians, and even Apple's own employees.

Apple initially tried to dispel misunderstandings and reassure users by releasing detailed information, sharing FAQs and additional documents, and arranging interviews with company executives to allay concerns.

Those efforts did not achieve the desired effect, however, and in September Apple announced that it would delay the rollout of the features to allow the company to make "improvements" to the CSAM detection system. It remains unclear what those improvements would be or how they would address the concerns.

Apple has also stated that it would refuse demands from authoritarian governments to expand the image-detection system beyond child sexual abuse material flagged in recognized databases. However, it has not said that it would withdraw from a market rather than comply with a court order.