According to a person familiar with the matter, Stability AI, the company behind Stable Diffusion, has raised new funding at a $1 billion valuation. The round is a significant validation of the company's approach to artificial intelligence: releasing open-source models that anyone can use without oversight.

In a press statement, Stability AI said it raised $101 million in a round led by Coatue, Lightspeed Venture Partners, and O'Shaughnessy Ventures, and that it will use the money to accelerate the development of open AI models.

Anyone can build on Stability AI's code.

Stable Diffusion is one of the leading examples of text-to-image AI. Stability AI's software is open source, meaning anyone can use the company's code to power their own offerings.

Stability AI offers a commercial version of the model, called DreamStudio, and plans to make money by developing the underlying infrastructure and customizing versions of the software for corporate clients. The company, based in London, has around 100 employees worldwide and says it will grow to 300 staff over the next year.

Sri Viswanath, who is joining Stability AI's board as part of the deal, said the open-source approach is what sets the company apart from its rivals: by giving the broader public the tools to create and innovate, he said, open source will act as a catalyst for the growth of artificial intelligence.

The software's open-source nature also makes it easy for users to generate potentially harmful images. Other developers, such as OpenAI, have taken a more cautious approach to the technology; Stability AI's ideology is decidedly more libertarian.

In September, the company's founder, Emad Mostaque, said it is people's responsibility to be ethical, moral, and legal in how they operate the technology. Some people will create harmful content with it, he acknowledged, but he predicted this would account for only a small percentage of its use.

There are also open legal questions inherent to text-to-image models. These systems are trained on data scraped from the web, including copyrighted content, and some people whose work has been used without their consent are considering legal action. As companies like Stability AI demonstrate their ability to turn others' work into profit, these issues will likely become more acute.