Large models such as DALL-E 2 and Imagen are trained on vast numbers of documents and images scraped from the web, and they absorb the worst aspects of that data as well as the best. OpenAI and Google acknowledge this.

The section on societal impact sits on the Imagen website, past the dragon fruit wearing a karate belt and the small cactus wearing a hat and sunglasses. There, Google acknowledges that Imagen inherits the social biases and limitations of large language models, and that there is a risk it has encoded harmful stereotypes and representations, which is why the company decided not to release the model for public use.

It is the same kind of acknowledgement that OpenAI made when it revealed GPT-3. These firms know that their models can produce harmful content; they just don't know how to fix it.