WhatsApp won't use Apple's child abuse image scanner

Apple may have a plan and a security feature in the works to stop child sex abuse, but that doesn't necessarily mean everyone is on board.

WhatsApp boss Will Cathcart joined Apple's critics on Friday, stating clearly that the Facebook-owned messaging app will not adopt the new feature once it launches. Cathcart went on to lay out his concerns about the machine-learning-driven system in an extensive thread.

"This is an Apple-built and operated surveillance system that could very easily be used to scan private content for anything they decide to control," Cathcart wrote in part. "Countries where iPhones are sold will have different standards for what is acceptable."

WhatsApp's position on the feature is clear enough, but Cathcart's thread concentrates mostly on hypothetical scenarios that could lead to problems with it. He wants to know how and whether the system will be used in China and by spyware companies, what will happen if it is exploited, and whether it can really be error-proof.

It is a thread that appeals to emotions, and it's not very helpful for anyone trying to understand why Apple's announcement raised eyebrows in the first place. Cathcart recites some of the most controversial talking points, but the thread is far more provocative than informative.

As Mashable reported Thursday, one piece of the forthcoming security update relies on a proprietary technology called NeuralHash. It scans each image file and generates a hash, a kind of signature, then checks that hash against a database of known Child Sexual Abuse Material (CSAM). All of this happens before a photo is stored in iCloud Photos, and Apple can't do or look at anything unless the hash check sets off alarms.

The hash-check approach is fallible, of course. For one, it won't catch CSAM files that haven't already been catalogued in a database. Matthew Green, a cybersecurity expert and Johns Hopkins University professor, also pointed to the possibility of someone weaponizing a CSAM file hash against an image file that is not CSAM.

The security update includes a second component as well. In addition to the NeuralHash-powered hash checks, Apple will introduce a parental control feature that scans images sent to child accounts via iMessage for explicit material. Parents and guardians are notified when Apple's content alarm goes off.

Shortly after Apple's announcement, the Electronic Frontier Foundation (EFF) released a statement critical of the update. This evidence-supported critique of the plan offers a much better understanding of the issues Cathcart only hints at in his thread.

It's possible to have a fair discussion about the merits of Apple's plan. WhatsApp has every right to object to the feature and to refuse to use it. But you, the user, may simply want to understand the issue better than a Facebook executive's thread allows before forming an opinion.

Apple's explanation of the feature is a good place to start, as are some of the supporting links in the EFF's response. You don't have to agree with Green or Cathcart, but you will come away with a much better picture if your reading goes beyond the character limits of Twitter.
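
To make the hash-check idea a little more concrete, here is a minimal, hypothetical sketch of a client-side signature check against a database of known material. It is not Apple's NeuralHash: the digest function, the signature set, and the function names are illustrative assumptions, and an ordinary cryptographic hash stands in for the perceptual hash a real system would use.

    import hashlib
    from pathlib import Path

    # Stand-in set of known signatures. In Apple's system this would hold
    # NeuralHash values for catalogued CSAM; here it is a single SHA-256
    # digest (of an empty file), purely for illustration.
    KNOWN_SIGNATURES = {
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def signature_for(photo: Path) -> str:
        # Hash the file's raw bytes. A real perceptual hash would instead be
        # derived from the image's visual features, so resized or re-encoded
        # copies of the same picture still match.
        return hashlib.sha256(photo.read_bytes()).hexdigest()

    def flag_before_upload(photo: Path) -> bool:
        # The check runs on the device before the photo would go to the
        # cloud library; only a match trips the alarm.
        return signature_for(photo) in KNOWN_SIGNATURES

    if __name__ == "__main__":
        # Tiny demo: an empty file matches the digest listed above and is
        # flagged; any other content passes.
        sample = Path("sample.bin")
        sample.write_bytes(b"")
        print(flag_before_upload(sample))  # True

The sketch also hints at why Green's concern matters: a cryptographic digest only matches byte-identical files, so a workable system needs a perceptual hash that tolerates resizing and re-encoding, and perceptual hashes are exactly the kind of function that can, in principle, be coaxed into colliding with an unrelated image.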