Clearview AI stirred controversy by scraping the internet for photos and applying facial recognition to give police and other agencies unprecedented access to our lives. Now the company's CEO wants to use artificial intelligence to make its surveillance tool even more powerful.
That could also make it riskier and more error-prone.
Clearview has gathered billions of photos from websites including Instagram, Twitter, and Facebook, and it uses AI to identify the people who appear in them. Police and government agents have used the company's face database to identify suspects in photos and tie them to their online profiles.
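Clearview has not published details of its system, but face search tools of this kind generally work by converting each face into a numerical embedding and then finding the closest embeddings in an index built from scraped photos. The Python sketch below illustrates that idea; the encode_face function is a hypothetical placeholder for a real face-encoder model, and random arrays stand in for scraped images.

```python
import numpy as np

# Hypothetical stand-in for a face-encoder model (Clearview has not published
# its architecture). A real encoder maps a cropped face to an embedding vector
# such that photos of the same person land close together.
def encode_face(image: np.ndarray, dim: int = 512) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    vec = rng.normal(size=dim)           # placeholder embedding
    return vec / np.linalg.norm(vec)     # unit-normalize for cosine similarity

def build_index(photos: list[np.ndarray]) -> np.ndarray:
    """Encode every scraped photo once and stack the embeddings into a matrix."""
    return np.stack([encode_face(p) for p in photos])

def search(index: np.ndarray, probe: np.ndarray, top_k: int = 5):
    """Rank database photos by cosine similarity to the probe face."""
    scores = index @ encode_face(probe)       # dot products of unit vectors
    best = np.argsort(scores)[::-1][:top_k]   # highest similarity first
    return [(int(i), float(scores[i])) for i in best]

# Toy usage with random "photos"; a real index would hold billions of entries.
photos = [np.random.randint(0, 255, (112, 112, 3), dtype=np.uint8) for _ in range(1000)]
index = build_index(photos)
print(search(index, photos[42]))  # the probe's own photo should rank first
```

At Clearview's scale the index would hold billions of embeddings, so a production system would likely rely on approximate nearest-neighbor search rather than the brute-force comparison shown here.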
Hoan Ton-That, Clearview's cofounder and CEO, tells WIRED that the company has now collected more than 10 billion images from across the web, more than three times as many as previously reported.
Ton-That says the larger pool of photos makes it more likely that users, most often law enforcement, will find a match when searching for a person. He also claims the larger data set makes the company's tool more accurate.
Clearview combined web crawling, advances in machine learning that have improved facial recognition, and a disregard for personal privacy to create a powerful tool.
Ton-That demonstrated the technology by photographing the reporter with a smartphone app. The app returned dozens of images from various US and international websites, each showing the correct person in photos taken over more than a decade. It is easy to see the tool's appeal, and just as easy to see how it could be misused.
Clearview's actions have sparked public outrage and a wider debate about privacy expectations in an era of smartphones and AI. Critics say the company is violating people's privacy. The ACLU sued Clearview in Illinois under a law that restricts the collection of biometric data; the company also faces class-action lawsuits in New York City and California, and Facebook and Twitter have demanded that it stop scraping their sites.
Ton-That has not been discouraged by the opposition. He believes most people support facial recognition as a way to solve crime, and he says it is a good thing that those who are concerned about it are vocal. "I believe we can eventually address more of their concerns," he says.
Clearview's latest technologies could spark more debate. Ton-That says the company is working on new methods that police could use to locate a person. The first takes a blurred image and sharpens it, using machine learning to imagine what a clearer picture would look like. The second attempts to envision the covered part of a person's face, using machine learning models that fill in missing details based on statistical patterns found in other images.
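Clearview has not said which models it uses, but the second technique resembles image inpainting, in which a network learns from many example images how to predict pixels that are hidden. The toy PyTorch sketch below shows that training setup under that assumption; the tiny architecture, the random tensors standing in for face crops, and the fixed lower-half mask are illustrative placeholders only.

```python
import torch
import torch.nn as nn

# Minimal inpainting sketch: train a small convolutional network to reconstruct
# pixels hidden by a mask, so the missing region is guessed from statistical
# patterns learned on other images. Everything here is a toy placeholder.
class Inpainter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # 3 RGB channels + 1 mask channel
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),              # predict the full RGB image
        )

    def forward(self, masked_img, mask):
        return self.net(torch.cat([masked_img, mask], dim=1))

model = Inpainter()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    imgs = torch.rand(8, 3, 64, 64)       # stand-in for a batch of face crops
    mask = torch.ones(8, 1, 64, 64)
    mask[:, :, 32:, :] = 0.0              # hide the lower half, as a face mask would
    masked = imgs * mask                  # zero out the hidden region

    pred = model(masked, mask)
    loss = ((pred - imgs) ** 2 * (1 - mask)).mean()  # penalize errors only on hidden pixels
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Whatever such a model produces for the hidden region is a statistical guess based on its training data, not a recovery of the real face, which is central to the concerns that follow.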
These capabilities could make Clearview's technology more appealing, but also more problematic. It is not clear how accurate the new methods are, and experts warn that they could increase the likelihood of someone being wrongly identified and could exacerbate biases inherent in the system.
"Uncontrolled data sets and training processes can lead to unintended biases," warns Aleksander Madry, a professor at MIT and an expert in machine learning. "I would expect inaccurate results," he says, and people with certain characteristics might, for example, be more likely to be wrongly identified if the approach is not handled with care.
Madry believes that even if the technology works as promised, unmasking people is still problematic. People may have covered their faces to take part in peaceful protests, he notes, or had their images blurred to protect their privacy.
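One way to check for the kind of skew Madry describes is to measure the false match rate separately for different demographic groups. The short sketch below does so on synthetic similarity scores; the groups, score distributions, and threshold are invented purely for illustration.

```python
import numpy as np

# Illustrative bias audit: compute the false match rate for two hypothetical
# groups. All numbers are synthetic; a real audit would use similarity scores
# from labeled pairs of genuinely different people.
rng = np.random.default_rng(0)
threshold = 0.6  # similarity above this counts as a "match"

# Group B is simulated with a slightly higher mean score to show how skewed
# training data can inflate false matches for one group.
group_a = rng.normal(loc=0.40, scale=0.12, size=10_000)
group_b = rng.normal(loc=0.50, scale=0.12, size=10_000)

for name, scores in [("group A", group_a), ("group B", group_b)]:
    fmr = float(np.mean(scores > threshold))  # share of non-matching pairs wrongly accepted
    print(f"{name}: false match rate = {fmr:.3%}")
```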
Ton-That claims Clearview's own tests show the results are more accurate. He advises that enhanced images be flagged as such and that extra care be taken when evaluating any results they produce.