One of the best-known homes for artists on the internet is DeviantArt, and it is grappling with how to handle artificial intelligence. Last week, DeviantArt launched DreamUp, a tool that lets anyone generate pictures from text. It is part of a larger DeviantArt effort to give artists more control.

DreamUp is built on Stable Diffusion, an open source image-generation model. Signing up for DeviantArt gets you five prompts for free, and the site's Core subscription plans include between 50 and 300 per month. What sets DreamUp apart is that it is designed to detect when you are trying to imitate another artist, and it is supposed to stop you.

Artificial intelligence can't be avoided, says Liat Karpel Gurwicz, and the technology is only going to get stronger over time. DeviantArt's position, she says, is that people should be respectful of creators, transparent about what they are doing, and have the creator's permission.


According to Gurwicz and Levy, DeviantArt isn't doing any training of its own for DreamUp; at the point DeviantArt adopted it, the tool had been trained on whatever data Stability AI had already collected. That means DeviantArt can't remove your art from the Stability dataset if it was used to train DreamUp's model. What it can do is ban the use of certain artists' names in prompts, along with their aliases and the names of their individual creations. To request this opt-out, artists need to fill out a form.

Stable Diffusion was trained on huge numbers of web images, and the majority of their creators never agreed to be included. Appending a phrase like "in the style of" plus an artist's name to a prompt is an easy way to reproduce that artist's style. It's become a sore point for contemporary artists and illustrators who don't want automated tools copying their distinctive looks.

Similar problems have cropped up on other platforms. Questions about consent have led some sites to ban AI-generated work outright, while one stock images platform has struck a kind of compromise on the issue through a partnership with the AI company Bria.

DeviantArt has no plans to follow suit. The company says it embraces all types of creativity and doesn't believe in censoring any type of art.

DreamUp attempts to mitigate these problems by limiting intentional copying without permission. Virtually no large models or datasets were trained entirely with creators' permission; that's true of Stable Diffusion, and it's likely true of other models as well.

Levy says the company knew that whatever model it started working with would come with baggage. What DeviantArt can do with DreamUp, he says, is prevent people from taking advantage of it.

DeviantArt also pushes users to credit artists whose styles they copy. If you post a DreamUp image on DeviantArt, the interface asks whether you are working in the style of a specific artist and, if so, for that artist's name. If someone flags a DreamUp work as improper, DeviantArt can see what prompt the creator used and make a judgment call. Works can be taken down if they don't include credit or if they dodge the name blocking with tactics like deliberately misspelling an artist's name.

The approach seems pragmatic. It doesn't address the more abstract grievance of artists' work being used to train a system in the first place, but it does block the most obvious problem that issue creates.


There are a number of practical limits, though. DreamUp lets artists submit requests to have their names blocked, but the system is aimed at giving control to artists on the platform, rather than to non-DeviantArt artists who object to the use of artificial intelligence in general. And the blocking only works on DeviantArt: anyone who wants to copy a style can simply switch to another Stable Diffusion implementation.

DeviantArt has addressed the underlying training question with a separate tool, rolled out alongside DreamUp: an optional flag artists can set to indicate whether they want their work included in AI training datasets. The "noai" flag is meant to create certainty in a murky landscape where artists' work is usually treated as fair game. And because the tool is open-sourced, other art platforms can use it too.
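To make the idea concrete, here is a minimal sketch of how a scraper could honor such a flag before collecting a page's images. It assumes the flag is exposed as a robots-style HTML meta directive (`noai` / `noimageai`); the actual markup DeviantArt and other platforms emit may differ, and the helper names here are hypothetical.

```python
# Hypothetical sketch: check a page for "noai"-style directives before
# scraping. The meta-tag format is an assumption about how the flag is
# exposed, not DeviantArt's confirmed implementation.
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects directives from <meta name="robots" content="..."> tags."""

    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if (attrs.get("name") or "").lower() == "robots":
            for token in (attrs.get("content") or "").split(","):
                self.directives.add(token.strip().lower())


def allows_ai_training(page_html: str) -> bool:
    """Return False if the page opts out of AI training or image use."""
    parser = RobotsMetaParser()
    parser.feed(page_html)
    return not ({"noai", "noimageai"} & parser.directives)


opted_out = '<html><head><meta name="robots" content="noai, noimageai"></head></html>'
no_flag = "<html><head><title>gallery</title></head></html>"
print(allows_ai_training(opted_out))  # False: the artist has opted out
print(allows_ai_training(no_flag))    # True: no directive present
```

The burden of the design, as the article notes, is that nothing forces a scraper to run a check like this; the flag only works if data collectors choose to look for it.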

Since DeviantArt isn't doing any training of its own, respecting the flag falls to third parties. Technically, doing so is required to comply with DeviantArt's terms of service, but in practice it seems to be more of an aspiration. Levy says artists will signal to those platforms whether or not they've given their consent, and it's on those companies to decide whether they want to look for that content. When I spoke with DeviantArt last week, it didn't have agreements with any companies to respect the flag going forward, let alone to retroactively remove images based on it.

The flag's rollout initially made artists feel their consent was being assumed: the opt-out system treated their work as available for AI training unless they set the flag to say otherwise. The decision probably had little practical effect, since scrapers were already treating the work as fair game, but some users found it offensive. The artist IanFay called the move extremely scummy on his account, and videos criticizing the decision circulated. It's a problem, critics argued, that is going to affect all artists.

DeviantArt has previously offered tools to protect artists from other technology many of them dislike, including a program to detect and remove art that was used without permission.

DeviantArt has since tried to address the criticism of its new tools. The "noai" flag is now set by default, so artists have to explicitly signal their agreement before their images can be scraped. The terms of service have also been updated to explicitly order third-party services to respect artists' flags.

Still, smaller platforms can only do so much. There's little legal guidance around generative art, and the agenda is being set by a handful of companies, including OpenAI and Stability AI. There's no easy way for a site like DeviantArt to navigate the system without touching the third rail, and it's not something DeviantArt can fix on its own. Until proper regulation is in place, it falls to these models and platforms to think about, ethically, what's right and what's fair.

DeviantArt is trying to encourage that line of thinking, but it is still working out some major issues of its own.