Facebook AI Research says it has created a machine learning system for de-identifying individuals in video. Startups like D-ID, along with a number of earlier research efforts, have built de-identification technology for still images, but this is the first to work on video. In initial tests, the method was able to thwart state-of-the-art facial recognition systems.

The system, which modifies video automatically, doesn't need to be retrained for each video. It maps a slightly distorted version of a face onto the person in the footage, making it difficult for facial recognition technology to identify them.

"Face recognition can lead to loss of privacy and face replacement technology may be misused to create misleading videos," a paper explaining the approach reads. "Recent world events concerning the advances in, and abuse of face recognition technology invoke the need to understand methods that successfully deal with de-identification. Our contribution is the only one suitable for video, including live video, and presents quality that far surpasses the literature methods."

Facebook's approach pairs an adversarial autoencoder with a classifier network. As part of training the network, researchers tried to fool facial recognition systems, Facebook AI Research engineer and Tel Aviv University professor Lior Wolf told VentureBeat in a phone interview.
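A minimal sketch of that training idea in PyTorch might look like the following. Everything here (the toy autoencoder, the stand-in recognition network, and the loss weighting) is an illustrative assumption rather than Facebook's actual implementation: the autoencoder learns to reconstruct the face while pushing its output's identity embedding away from the original's in a frozen, pretrained recognition network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy autoencoder over 3 x 128 x 128 face crops (illustrative only).
autoencoder = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
)

# Stand-in for a pretrained, frozen face recognition network; in
# practice this would be a real model with loaded weights.
face_recognizer = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 128),
)
for p in face_recognizer.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-4)

def train_step(faces):
    """One update on a batch of face crops with values in [0, 1]."""
    out = autoencoder(faces)
    # Keep the output visually close to the input...
    recon_loss = F.l1_loss(out, faces)
    # ...while pushing its identity embedding away from the original's,
    # i.e. "making life harder" for the recognition network.
    id_sim = F.cosine_similarity(
        face_recognizer(faces), face_recognizer(out), dim=1
    ).mean()
    loss = recon_loss + 0.1 * id_sim  # weighting is an arbitrary choice
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example: one step on a random batch of four face crops
print(train_step(torch.rand(4, 3, 128, 128)))
```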

"So the autoencoder is such that it tries to make life harder for the facial recognition network, and it is actually a general technique that can also be used if you want to generate a way to mask somebody's, say, voice or online behavior or any other type of identifiable information that you want to remove," he said.

Like face-swapping deepfake software, the AI uses an encoder-decoder architecture to generate both a mask and an image. During training, a person's face is distorted and then fed into the network, which learns to produce distorted and undistorted versions of the face that can be embedded back into video.
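As a rough illustration of that mask-and-image output (the layer sizes and blending scheme below are assumptions, not the paper's architecture), the decoder can emit four channels, three for a generated face and one for a soft mask, with the mask determining how much of each pixel comes from the generated face versus the original frame:

```python
import torch
import torch.nn as nn

class MaskedFaceGenerator(nn.Module):
    """Toy encoder-decoder that emits a generated face plus a blend mask."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder emits 4 channels: an RGB face (3) and a soft mask (1).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 4, 4, stride=2, padding=1),
        )

    def forward(self, frame):
        h = self.decoder(self.encoder(frame))
        face = torch.sigmoid(h[:, :3])  # generated face, values in [0, 1]
        mask = torch.sigmoid(h[:, 3:])  # per-pixel blend weight in [0, 1]
        # Composite: where the mask is high, use the generated face;
        # elsewhere, keep the original frame's pixels.
        return mask * face + (1 - mask) * frame

gen = MaskedFaceGenerator()
frame = torch.rand(1, 3, 128, 128)   # one face crop from a video frame
modified = gen(frame)                # same shape, identity subtly altered
```

In a setup like this, blending through a learned mask lets the network alter only the identity-bearing regions of the face while leaving the rest of the frame, and thus the video, largely untouched.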

Facebook has no plans to apply the tech to any part of the Facebook family of apps at this time, a company spokesperson told VentureBeat, but such methods could enable public speech that remains recognizable to people but not to AI systems.

Anonymized faces in videos could also be used for privacy-conscious training of AI systems. In May, Google used Mannequin Challenge videos to train AI systems that estimate depth in video. Projects from UC Berkeley researchers that train AI agents to dance like people or do backflips have also used YouTube videos as training data.

The work will be presented at the International Conference on Computer Vision (ICCV) being held next week in Seoul, South Korea.

The news follows an announcement earlier this week by Facebook CTO Mike Schroepfer that a preview data set for the Deepfake Detection Challenge is now available and that Amazon's AWS has joined the initiative, which was launched last month by Facebook and Microsoft. The challenge was created to improve the robustness of deepfake detection systems.

Beyond the altruistic applications mentioned above, Facebook's desire to lead in this area may stem from controversy over its platforms being used to spread misinformation and over its own use of facial recognition technology.

Facebook made facial recognition a default on its platform earlier this year, and is currently battling a $35 billion facial recognition lawsuit. This week the social network also launched a News app for some users in the United States.
