There has been a lot of excitement around artificial intelligence lately. If you spend much time online, you have probably bumped into images generated by AI systems like DALL-E 2 or Stable Diffusion, or jokes, essays, and other text written by the latest incarnation of OpenAI's large language model GPT-3.

It may be obvious when a picture has been created by an AI, but text is a different matter: the output of large language models can easily pass for something a human wrote. It sounds right, yet it may be riddled with falsehoods.

That's harmless if it's just a bit of fun, but the consequences can be serious if, say, the text is used to give health advice. AI could also make it easy to produce reams of misinformation, abuse, and spam, distorting the information we consume. That is especially worrying around elections.

So how will we know whether what we read on the internet was written by a human or a machine? I just published a story about the tools we currently have to spot AI-generated text. The short answer: today's detection tool kit is not good enough.
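
For a sense of how many of these detectors work under the hood, here is a minimal sketch of one common heuristic: machine-written text tends to be statistically predictable to a language model, so unusually low perplexity can be a weak signal that it wasn't written by a person. This illustrates the general idea only, not any particular tool's method; it assumes the Hugging Face transformers library with GPT-2, and the cutoff value is an arbitrary, hypothetical choice.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how predictable `text` is under GPT-2 (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the
        # average next-token cross-entropy loss over the sequence.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

def looks_machine_written(text: str, threshold: float = 50.0) -> bool:
    # Hypothetical cutoff. Real detectors calibrate against reference
    # corpora and still misfire often, which is part of why this kind of
    # heuristic alone is unreliable.
    return perplexity(text) < threshold
```

A signal this crude is easy to defeat (paraphrase the output, raise the sampling temperature) and easy to trip by accident (formulaic human writing also scores as predictable), which is one reason the current detection tool kit falls short.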

We could be watching, in real time, the birth of a snowball of bullshit.

Large language models are trained on data sets built by scraping the internet, and the outputs of the finished models then spread back across that same internet. Tech companies end up feeding AI-written text into bigger, more convincing models, which people use to generate even more nonsense, which gets scraped again, and again.
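
To make that feedback loop concrete, here is a deliberately toy simulation of it. Every quantity is an assumption picked for illustration (humans publish at a steady rate; model output scales with the size of the corpus the models were trained on); it shows only the shape of the snowball, not real proportions.

```python
def corpus_over_generations(generations: int,
                            human_pages_per_gen: float = 1.0,
                            ai_output_multiplier: float = 0.5) -> list[float]:
    """Fraction of the scraped corpus that is machine-generated, per generation."""
    human, synthetic = 1.0, 0.0  # start from a normalized, all-human corpus
    shares = []
    for _ in range(generations):
        # Humans keep publishing at a steady rate, while model output scales
        # with the size of the corpus the previous models were trained on.
        synthetic += ai_output_multiplier * (human + synthetic)
        human += human_pages_per_gen
        shares.append(synthetic / (human + synthetic))
    return shares

# The machine-made share compounds with every scrape-and-train cycle:
print([round(s, 2) for s in corpus_over_generations(6)])
# [0.2, 0.37, 0.51, 0.62, 0.71, 0.78]
```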

This problem of AI feeding on its own output and polluting its training data extends to images, too. Mike Cook, an AI researcher at King's College London, told Will Douglas Heaven that the internet is now contaminated with images made by AI.

Any model trained from now on, he said, will include the images we made in 2022.