
According to a report from The New York Times, a concerned father says that after he took photos of an infection on his toddler's groin, Google flagged the images as child sexual abuse material (CSAM). The company closed his accounts and filed a report with the National Center for Missing and Exploited Children (NCMEC), spurring a police investigation and highlighting the difficulty of telling the difference between potential abuse and an innocent photo once it becomes part of a user's digital library.

Concerns about the consequences of blurring the lines for what should be considered private came to the forefront in 2021, when Apple announced its Child Safety plan. As part of the plan, Apple would have scanned images on Apple devices before they were uploaded to iCloud and then matched them against the NCMEC's database of known CSAM hashes. If enough matches were found, a human moderator would have reviewed the content and locked the user's account if it contained CSAM.

The accounts were disabled due to content that “might be illegal”

The Electronic Frontier Foundation (EFF), a digital rights nonprofit, criticized Apple's plan, saying it could open a backdoor to users' private lives and that it represented a decrease in privacy for all iCloud Photos users.

Apple eventually put the stored image scanning part of its plan on hold but went ahead with an optional feature for child accounts included in a family sharing plan. If parents opt in, the Messages app on a child's account analyzes image attachments and determines whether a photo contains nudity while maintaining the end-to-end encryption of the messages. If it detects nudity, it blurs the image, displays a warning for the child, and presents them with resources intended to help with safety online.


The main case highlighted by The New York Times took place in February 2021. A father identified as Mark noticed swelling in his child's genital region and, at the request of a nurse, sent images of the issue to a doctor ahead of a video consultation. The doctor prescribed antibiotics, which cured the infection.

According to the NYT, Mark received a notification from Google just two days after taking the photos, stating that his accounts had been locked due to harmful content that might be illegal.

Like many internet companies, including Facebook and Reddit, Google uses Microsoft's PhotoDNA hash matching tool to scan uploaded images for known CSAM. In 2012, that scanning led to the arrest of a man who was a registered sex offender and used Gmail to send child sexual abuse imagery.

In 2018, Google launched its Content Safety API, an AI toolkit that can proactively identify never-before-seen CSAM imagery so it can be reviewed and, if confirmed, removed and reported as quickly as possible. Google uses the tool for its own services and, along with CSAI Match, a video-targeting hash matching solution developed by YouTube engineers, offers it for use by others as well.

Google describes how it fights abuse on its own platforms and services:

We identify and report CSAM with trained specialist teams and cutting-edge technology, including machine learning classifiers and hash-matching technology, which creates a “hash”, or unique digital fingerprint, for an image or a video so it can be compared with hashes of known CSAM. When we find CSAM, we report it to the National Center for Missing and Exploited Children (NCMEC), which liaises with law enforcement agencies around the world.
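To make that workflow concrete, here is a minimal, purely illustrative Python sketch of hash matching: compute a fingerprint for each uploaded image and check it against a set of hashes of known material, escalating any match for human review. Production systems such as PhotoDNA and CSAI Match rely on proprietary perceptual hashes that survive resizing and re-encoding; the plain cryptographic hash, the KNOWN_HASHES set, and the uploads directory below are assumptions invented for the example, not Google's actual implementation.

```python
# Illustrative sketch only: real scanners use proprietary perceptual hashes
# (e.g., PhotoDNA) that tolerate resizing and re-encoding. A plain
# cryptographic hash is used here just to show the matching workflow.
import hashlib
from pathlib import Path

# Hypothetical database of hex digests for known, already-verified material.
KNOWN_HASHES: set[str] = {
    "3f786850e387550fdab836ed7e6dc881de23001b",  # placeholder entry
}

def fingerprint(image_path: Path) -> str:
    """Return a digest of the image bytes, standing in for a perceptual hash."""
    return hashlib.sha1(image_path.read_bytes()).hexdigest()

def matches_known_material(image_path: Path) -> bool:
    """True only if the image's fingerprint appears in the known-hash set."""
    return fingerprint(image_path) in KNOWN_HASHES

if __name__ == "__main__":
    # Scan a hypothetical upload directory and queue any matches for review.
    for path in Path("uploads").glob("*.jpg"):
        if matches_known_material(path):
            print(f"{path}: matches a known hash, queued for human review")
```

The machine learning classifiers Google mentions, which flag never-before-seen imagery rather than known matches, are a separate component and are not sketched here.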

Google says it only scans a user's personal images when they take "affirmative action," which can apparently include backing their pictures up to Google Photos. According to the Times, federal law requires Google to report potential offenders to the NCMEC's CyberTipline. The NYT also reports that Mark's son is on a list of 4,260 potential victims of CSAM.

Mark lost access to his emails, contacts, photos, and even his phone number, since he used Google Fi's mobile service. He tried to appeal the decision but was turned down. The San Francisco Police Department opened an investigation into Mark in December 2021 and obtained all the information he had stored with Google. The investigator on the case found that the incident did not meet the elements of a crime.

Google spokesperson Christa Muldoon said in an emailed statement that child sexual abuse material is "abhorrent" and that the company is "committed to preventing it on our platforms." Muldoon added that Google uses a combination of technology and artificial intelligence to identify and remove CSAM from its platforms, and that its team of child safety experts reviews flagged content to help ensure it can identify instances where users may be seeking medical advice.

Critics argue that the practice of scanning a user's photos unnecessarily intrudes on their privacy. Jon Callas, a director of technology projects at the EFF, told the NYT that this is precisely the nightmare privacy advocates are concerned about: Google will scan your family album, and then you'll get into trouble.