Google has banned the training of artificial intelligence systems that can be used to generate deepfakes on its Colaboratory (Colab) platform. The updated terms of use add deepfake-related work to the list of disallowed projects.
Colab spun out of an internal Google Research project in late 2017. It is designed to allow anyone to write and execute arbitrary Python code through a web browser. Both free and paying Colab users have access to hardware acceleration, including GPUs and Google's custom-designed, AI-accelerating tensor processing units (TPUs).
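Concretely, a Colab notebook is just cells of Python executed on a hosted runtime. The snippet below is a minimal sketch of how a user might check which accelerator their session has attached; it assumes the standard Colab stack with PyTorch preinstalled and the COLAB_TPU_ADDR environment variable that classic TPU runtimes expose, conventions of the environment rather than an official Colab API.

```python
# Minimal sketch: identify the accelerator attached to a Colab runtime.
# Assumes the standard Colab image (PyTorch preinstalled) and the
# COLAB_TPU_ADDR variable that classic TPU runtimes set; these are
# conventions of the environment, not an official Colab API.
import os

import torch

if torch.cuda.is_available():
    # GPU runtimes expose a CUDA device to frameworks like PyTorch.
    print("GPU runtime:", torch.cuda.get_device_name(0))
elif "COLAB_TPU_ADDR" in os.environ:
    # Classic TPU runtimes advertise the TPU's address via an env var.
    print("TPU runtime at", os.environ["COLAB_TPU_ADDR"])
else:
    print("CPU-only runtime")
```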
Colab has become the de facto platform for demos within the AI research community; it's not uncommon for researchers to include links to Colab pages alongside their code repositories. But Google has historically not been very restrictive about Colab content, potentially opening the door for actors who wish to use the service for less scrupulous purposes.
Users of DeepFaceLab, an open source deepfake generator, became aware of the terms-of-use change last week, when some received an error message after attempting to run the tool in Colab. The warning told them they might be executing code that is disallowed, that this could restrict their ability to use Colab in the future, and pointed them to the prohibited actions specified in the FAQ.
Not all deepfake code triggers the warning, however. FaceSwap, one of the more popular deepfake Colab projects, remains fully functional; this reporter was able to run it without issue. The onus, then, may fall on the Colab community to report code that runs afoul of the new rule.
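Google hasn't said how the check works, but the fact that DeepFaceLab trips it while FaceSwap doesn't is consistent with some form of pattern matching on what's being executed. The sketch below is purely hypothetical, with made-up names (`DISALLOWED`, `check_cell`), just to illustrate why a simple filter of that kind would catch one project and miss another.

```python
# Purely hypothetical sketch of pattern-based gating of notebook code.
# Google has not disclosed how Colab flags disallowed code; DISALLOWED
# and check_cell are invented names for illustration only.
DISALLOWED = ("deepfacelab", "deepfake")

def check_cell(source: str) -> str | None:
    """Return a warning if the cell matches a disallowed pattern."""
    lowered = source.lower()
    for pattern in DISALLOWED:
        if pattern in lowered:
            return (
                "Warning: you may be executing code that is disallowed, "
                "and this may restrict your ability to use Colab."
            )
    return None  # cells with no match run unflagged

print(check_cell("!git clone https://github.com/iperov/DeepFaceLab"))
print(check_cell("import faceswap"))  # no match -> None, as with FaceSwap
```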
In a statement, Google said it regularly monitors avenues for abuse in Colab that run counter to its AI principles, while balancing that against its mission to give users access to valuable resources. Deepfakes were added to the list of activities disallowed from Colab runtimes last month, the company said, and it uses automated systems to detect and deter abuse.
Archived snapshots on Archive.org show that the Colab terms were updated in May. The existing restrictions on activities like running denial-of-service attacks and password cracking were left unchanged.
One of the most common forms of deepfake is a video in which a person's face has been convincingly pasted onto another person's. Unlike the crude face-swaps of the past, AI-generated deepfakes can match a person's body movements, microexpressions and skin tones better than Hollywood-produced fakes in some cases.
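For contrast, the crude "paste" approach that preceded AI methods can be sketched in a few lines of OpenCV: detect a face in each image, resize one onto the other, and blend. The file paths below are placeholders; real deepfake tools like DeepFaceLab instead train neural networks to reconstruct and swap faces, which is what produces the convincing movement and skin-tone matching.

```python
# Crude, non-AI face paste for contrast: Haar-cascade detection plus
# seamless cloning. Paths are placeholders; this is a toy baseline,
# not how neural deepfake tools work.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def first_face(image):
    """Return (x, y, w, h) of the first face the cascade finds."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0]  # raises IndexError if no face is detected

src = cv2.imread("source.jpg")  # face to paste (placeholder path)
dst = cv2.imread("target.jpg")  # image to paste onto (placeholder path)

sx, sy, sw, sh = first_face(src)
dx, dy, dw, dh = first_face(dst)

# Resize the source face to fit the target face's bounding box...
face = cv2.resize(src[sy:sy + sh, sx:sx + sw], (dw, dh))

# ...then blend with seamless cloning, which roughly matches skin tone
# but cannot track expressions or movement the way learned models can.
mask = np.full(face.shape[:2], 255, dtype=np.uint8)
center = (dx + dw // 2, dy + dh // 2)
out = cv2.seamlessClone(face, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("pasted.jpg", out)
```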
As many viral videos have shown, deepfakes can be harmless and even entertaining. But they are also being used by hackers to scam social media users, and they have appeared in political propaganda, for example to create videos of Ukrainian President Volodymyr Zelenskyy giving a speech about the war that he never actually gave.
According to one source, the number of deepfakes online grew from roughly 14,000 to 145,000 between 2019 and 2021. Forrester Research estimated that deepfake fraud scams would cost $250 million by the end of 2020.
When it comes to deepfakes specifically, the most relevant issue is an ethical one: dual use. Chlorine, for example, is useful for cleaning water, but it has also been used as a chemical weapon. Society dealt with that by first assessing how harmful the technology is and then agreeing, in the Geneva Protocol, not to use chemical weapons on each other. There are no industry-wide consistent ethical practices regarding machine learning and artificial intelligence, but it makes sense for Google to come up with its own set of conventions regulating access to and the ability to create deepfakes.
Os Keyes, an adjunct professor at Seattle University, was in favor of the ban on deepfake projects, but noted that more needs to be done to prevent their creation and spread. The way the ban has been implemented, Keyes said, highlights the poverty of relying on companies to self-police.
Those who benefited from Colab's laissez-faire approach to governance might not agree. OpenAI initially refused to open source GPT-2, its language-generating model, out of fear that it would be misused. This motivated groups like EleutherAI to use tools including Colab to develop and release their own language-generating models.
The commoditization of artificial intelligence models is part of a broader trend of falling production costs, according to one EleutherAI member. In his view, the benefit is that low-resource users can gain access to, better study, and perform their own safety-focused research on the models and tools that are made widely available.
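That access is concrete: EleutherAI's models are published openly, and loading one in a Colab notebook takes a few lines with the Hugging Face transformers library. Below is a minimal sketch, assuming the smallest GPT-Neo checkpoint name as published on the Hugging Face Hub.

```python
# Minimal sketch of the open access described above: load one of
# EleutherAI's released language models via Hugging Face transformers.
# "EleutherAI/gpt-neo-125m" is the smallest GPT-Neo checkpoint; larger
# ones need correspondingly more memory.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-125m")
result = generator(
    "Open access to machine learning models lets researchers",
    max_new_tokens=30,
)
print(result[0]["generated_text"])
```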
Deepfakes have the potential to run counter to Google's AI principles, the company said, adding that it aspires to be able to detect and deter abusive deepfake patterns and will alter its policies as its detection methods progress.