Reputable university researchers are sounding the alarm over Apple's plan to scan iPhone users' photo libraries for CSAM (child sexual abuse material), calling the approach "dangerous."
Jonathan Mayer, assistant professor of computer science at Princeton University, and Anunay Kulshrestha, a researcher at Princeton University's Center for Information Technology Policy, wrote an op-ed for The Washington Post sharing their experience building image detection technology.
Two years ago, the researchers began a project to identify CSAM in end-to-end encrypted online services. They note that, as experts in the field, they "know the importance of end-to-end encryption, which protects data against third-party access," yet are horrified that CSAM is "proliferating" on encrypted platforms.
Kulshrestha and Mayer said they were looking for a compromise: an online system that could detect CSAM while still protecting end-to-end encryption. Although experts in the field were skeptical that such a system could be built, the researchers did build it, and in the process discovered a serious problem.
We explored a middle ground where online services could detect harmful content while otherwise preserving end-to-end encryption. The idea was simple: if someone shared material that matched a list of known harmful content, the service would be alerted; if someone shared innocent content, the service would learn nothing. People couldn't access the database or learn whether content matched, since that information could reveal law enforcement methods and help criminals evade detection. Knowledgeable observers argued that a system like ours was far from feasible, but after many failed attempts we built a working prototype. Then we ran into a major problem.
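To make the matching idea concrete, below is a minimal Python sketch of checking shared content against a blocklist of known-bad fingerprints. It is purely illustrative: the hash value, function names, and plain SHA-256 lookup are invented stand-ins for the perceptual hashing and privacy-preserving cryptography that the researchers' prototype and Apple's design actually rely on, and nothing here reflects either implementation.

    import hashlib

    # Hypothetical blocklist of fingerprints of known harmful images.
    # A real deployment would use a perceptual-hash database maintained
    # by a clearinghouse, not plain SHA-256 digests like this one.
    KNOWN_HARMFUL_HASHES = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def fingerprint(image_bytes: bytes) -> str:
        """Return a stable fingerprint of the image content.

        A cryptographic hash is used here only for illustration; real
        systems use perceptual hashing so re-encoded or slightly edited
        copies of the same image still match.
        """
        return hashlib.sha256(image_bytes).hexdigest()

    def should_alert(image_bytes: bytes) -> bool:
        """Alert only when the shared content matches the blocklist.

        Non-matching content reveals nothing beyond "no match" -- the
        privacy property the researchers tried to preserve with
        cryptography rather than a plaintext lookup like this sketch.
        """
        return fingerprint(image_bytes) in KNOWN_HARMFUL_HASHES

    if __name__ == "__main__":
        sample = b"example image bytes"
        print("match" if should_alert(sample) else "no match")

The key design point the researchers describe is that the server should learn nothing about content that does not match, which is why real systems replace this simple set lookup with cryptographic private matching.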
Apple has faced a wave of questions since announcing the feature. Many have raised concerns that the system used to detect CSAM could, at the request of oppressive governments, be repurposed to detect other types of photos. Apple has strongly rejected that possibility, stating it would refuse any such requests from governments.
Still, concerns remain about the future implications of the technology used to detect CSAM. Kulshrestha and Mayer said they were "disturbed" by how governments could use such a system to detect content other than CSAM.
For example, a foreign government could compel the removal of disfavored political speech. That is not a hypothetical scenario: WeChat, the popular Chinese messaging app, already uses content matching to identify dissident material. India passed rules this year that could require pre-screening of content critical of government policy, and Russia recently penalized Google, Facebook, and Twitter for failing to remove pro-democracy protest material. We also noticed other flaws in the content-matching process: malicious users could exploit it to subject innocent users to scrutiny. We were so concerned that we took an unusual step for the computer science literature: we warned against our own system design and encouraged further research into how to minimize its serious side effects.
Apple continues to address users' concerns by publishing additional documents and a FAQ page. The company maintains that on-device CSAM detection is consistent with its long-standing privacy values.