As with all artificial intelligence, it's difficult to stamp out the ugly, prejudiced edge cases.

The DALL-E Mini image generator carries a warning that its results may "reinforce or amplify societal biases" because the model was trained on data from the internet.

We decided to try it out. In a series of prompts Futurism ran through the generator, DALL-E Mini often produced stereotypical or outright racist imagery.

We won't reproduce specific examples here, but the results of prompts using slurs and white supremacist terminology were frightening: the generator cooked up images of burning crosses and Ku Klux Klan rallies. Prompts in the form "racist caricature of _____" reliably reinforced ugly stereotypes. And even when prompted with nothing but a reporter's name, the AI made assumptions about his identity.

Plenty of other results were simply bizarre.

The generator also produced, for reasons that weren't clear, a series of painting-like images of Black faces.

Other problematic results reflect the real-world inequalities captured in the model's training data.

Dr. Tyler Berzin of Harvard Medical School noted that the prompt "a gastroenterologist" appears to produce only white male doctors in the app.

We got nearly the same results. And the prompt "a nurse"? All women.

Other biases showed up as well: the prompts "smart girl" and "good person" returned images of light-skinned faces.

It all underscores the tension at the center of machine learning technology.

Researchers have figured out how to train neural networks on enormous piles of data to produce incredible results, including OpenAI's DALL-E 2, whose capabilities blow DALL-E Mini's out of the water.

But hidden biases in that training data mean the output, however technologically impressive, reproduces the worst prejudices of the human population.

The results of making artificial intelligence in our own image can be ugly. It's an incredibly difficult problem to solve, because even the smartest minds in machine learning research often don't fully understand how the most advanced algorithms actually work.

It's possible that a project like DALL-E Mini could be tweaked to curb its most offensive output, or that those offensive results will simply drive users away.

It's almost certain that we'll see many more impressive, fun, and impactful uses of machine learning that nonetheless embody the worst of society.
