Stable Diffusion users are angry about an update to the software that removes its ability to generate images in the style of certain artists.
Stable Diffusion Version 2 was announced early this morning European time. The update improves features like upscaling and in-painting and re-engineers key components of the model. But as a result of those changes, Stable Diffusion can no longer generate certain types of images, a restriction that has attracted both controversy and criticism. These include nude and pornographic output, pictures of celebrities, and images that mimic the artwork of specific artists.
Users were quick to react. One commented that the model had been "nerfed," while another called the update an unpleasant surprise.
Users note that asking Version 2 of Stable Diffusion to generate images in the style of Greg Rutkowski, a digital artist whose name has become shorthand for prompting high-quality images, no longer creates artwork that closely resembles his own. One user's response was simply to ask what had been done to Greg.
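A rough way to see the difference is to run the same artist-style prompt through both releases. Below is a minimal sketch using Hugging Face's diffusers library, assuming the publicly hosted v1.5 and v2 checkpoints and a CUDA GPU:

```python
# Minimal sketch: run the same style prompt through the v1.5 and v2
# checkpoints to compare how each handles an artist's name.
# Assumes the public Hugging Face model IDs and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

prompt = "a fantasy castle at dawn in the style of Greg Rutkowski"
seed = 42  # fixed seed so each run is reproducible

for model_id, outfile in [
    ("runwayml/stable-diffusion-v1-5", "castle_v1.png"),
    ("stabilityai/stable-diffusion-2", "castle_v2.png"),
]:
    pipe = StableDiffusionPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).to("cuda")
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(outfile)
```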
The changes to Stable Diffusion are notable, as the software is hugely influential and helps set norms in the fast-moving generative AI scene. The model is open source, and developers can integrate it into their products for free. That openness also means there are fewer constraints on how the software is used, which has drawn extra criticism. Many artists are annoyed that image-generating models were trained on their artwork without their permission and can now reproduce their styles, and whether this type of copying is legal remains an open question. Training AI models on copyrighted data is likely legal, but some use cases could be challenged in court.
Users of Stable Diffusion theorize that the changes to the model were made to head off potential legal challenges. Asked in a private chat whether this was the case, Stability AI founder Emad Mostaque did not reply. Some users have speculated that artists' images were removed from the training data, but it appears instead that changes to how the software encodes and retrieves data have reduced the model's ability to copy artists.
According to Mostaque, there has been no specific filtering of artists. He expanded on the technical underpinnings of the changes in a message posted to the software's Discord.
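The underpinning in question is visible in the released checkpoints themselves: Version 2 replaces the OpenAI-trained CLIP text encoder used by earlier releases with a LAION-trained OpenCLIP encoder, which embeds prompt text, artists' names included, differently. As a minimal sketch, assuming the public Hugging Face checkpoints, the two encoders can be compared directly:

```python
# Sketch: inspect the text encoder each release ships with. v1.5 uses
# OpenAI's CLIP ViT-L/14; v2 uses a LAION-trained OpenCLIP encoder,
# so the two configs differ in width and depth. Public checkpoints assumed.
from transformers import CLIPTextModel

for model_id in (
    "runwayml/stable-diffusion-v1-5",
    "stabilityai/stable-diffusion-2",
):
    encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
    cfg = encoder.config
    print(model_id, "hidden_size:", cfg.hidden_size,
          "layers:", cfg.num_hidden_layers)
```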
Nude and pornographic images, meanwhile, have been removed from Stable Diffusion's training data at a time when AI image generators are being used to produce NSFW output, including child abuse imagery and nonconsensual pornography of real people.
Discussing the changes to Stable Diffusion Version 2 in the software's official Discord, Mostaque pointed to this latter use case as the reason for the removal.
One user objected that removing this content was against the spirit of the open source community, arguing that the choice of whether or not to generate nudity should rest with the end user rather than be baked into the model. Notably, though, Stable Diffusion's open source nature means nude training data can easily be added back into third-party releases.
Not every reaction has been negative. Many users have praised the software's new depth-guided generation, which can produce new images that match the depth and layout of an existing one. Others noted that while the changes make it harder to quickly produce high-quality images in particular styles, the community will likely add that functionality back in future versions. And according to one user, Version 2 is better at interpreting prompts and producing coherent photographic images.
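The depth-guided feature ships as a separate model. As a minimal sketch, assuming diffusers' StableDiffusionDepth2ImgPipeline, the public stable-diffusion-2-depth checkpoint, and a placeholder input file, depth-preserving generation looks roughly like this:

```python
# Sketch of depth-guided generation: the pipeline infers a depth map
# from the input image and generates a new image that preserves its
# spatial layout. "room.png" is a placeholder for an image of your own.
import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("room.png").convert("RGB")
image = pipe(
    prompt="a cozy wood-panelled library, soft morning light",
    image=init_image,
    strength=0.7,  # how far the output may depart from the input
).images[0]
image.save("library.png")
```

Because the constraint is the inferred depth map rather than the source pixels, the output keeps the original composition while the style and content can change freely.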
Mostaque compared the new model to a pizza base: anyone can add the ingredients they like, in the form of extra training data. A good model, he said, should be usable by everyone.
Going forward, Stable Diffusion will use training datasets that allow artists to opt in or opt out, a feature many artists have requested and one that could help mitigate some of the criticism. Mostaque said the company is trying to be as transparent as possible as it improves the base models.
A public demo of Stable Diffusion Version 2 is available online.