Meta takes new AI system offline because Twitter users are mean
Tristan Greene

I thought to myself, "this seems dangerous," when I got Meta's new scientific AI system to generate well-written research papers on the benefits of committing suicide, practicing antisemitism, and eating crushed glass. It seems like exactly the kind of thing the European Union's Artificial Intelligence Act was meant to prevent.

After playing around with the system and being thoroughly shocked by its outputs, I took to social media.

I literally got Galactica to spit out:

– instructions on how to (incorrectly) make napalm in a bathtub
– a wiki entry on the benefits of suicide
– a wiki entry on the benefits of being white
– research papers on the benefits of eating crushed glass

LLMs are also known as garbage fires.

— Tristan Greene 🏳‍🌈 (@mrgreene1977) November 17, 2022


To my surprise, I got the chance to exchange words with the person arguably most responsible for Galactica's existence: Yann LeCun, Meta's chief AI scientist. He didn't seem perturbed by my concerns.

Pretty much exactly what happened. https://t.co/4zGRgiyS7C

— Yann LeCun (@ylecun) November 17, 2022

You are pulling your tweet out of thin air and obviously haven't read the Galactica paper, particularly Section 6, page 27 entitled "Toxicity and Bias". https://t.co/bfZSwffQYs

— Yann LeCun (@ylecun) November 18, 2022

Galactica

Galactica is the system in question. Meta built it and made the explicit claim that it could aid scientific research. According to the paper, Galactica is "a large language model that can store, combine and reason about scientific knowledge."

Before it was pulled offline, you could ask it to generate a wiki entry, literature review, or research paper on almost any topic, and it would usually output something startlingly coherent. Everything it produced read with all the confidence and gravitas of an arXiv pre-print.

I got it to create research papers on a wide range of topics, from the benefits of committing suicide to why homosexuals are evil.

Who cares?

It's fair to wonder whether a fake research paper generated by Meta's AI could really do any harm. Surely we're all smarter than that. If I ran up to you and yelled "eat crushed glass!" you wouldn't do it, even if I showed you a research paper saying you should.

But that's not how harm works. Bad actors don't explain their methods to the public. They don't jump out at you and yell "believe this wacky crap!"

LeCun seems to believe the problem isn't his to solve. He insists that the onus lies with the journalists and scientists who might misuse Galactica's output.

You make the same incorrect assumption of incompetence about journalists and academics as you previously made about the creators of Galactica.

The literal job of academics and journalists is to seek the truth and to avoid getting fooled by nature, other humans, or themselves.

— Yann LeCun (@ylecun) November 18, 2022

But it wasn't scientists doing poor work or journalists failing to do their homework that caused the Cambridge Analytica scandal. And we weren't the ones who turned the Facebook platform into an instrument of choice for global misinformation campaigns.

Journalists and scientists have spent the past eight years sifting through the mess caused by bad actors spreading misinformation on social media, using tools built by the very companies whose platforms they exploit. Most of the time, those actors simply recycle bad sources. And no human can write bullshit as fast as an AI can generate it.

LLMs are fundamentally unsuited to tasks where accuracy matters. When it comes to reliability, they're about on par with a random number generator.

And nobody knows how to fix that; hallucination is baked into how these models work. Barring some unforeseen technological breakthrough, Galactica will always be prone to outputting misinformation.

Yet Meta released the model anyway and marketed it as an instrument of science.

🪐 Introducing Galactica. A large language model for science.

Can summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.

Explore and get weights: https://t.co/jKEP8S7Yfl pic.twitter.com/niXmKjSlXW

— Papers with Code (@paperswithcode) November 15, 2022

The general public already believes AI systems are capable of wild, impossible things. Meta's AI division is world-renowned, and the company's AI boss, Yann LeCun, is a living legend in the field.

So if Meta says its AI can write scientific papers, surely it must be good enough for us regular idiots to use.

Remember: thousands of people took Ivermectin, a drug designed for veterinarians to treat livestock, to prevent COVID-19 because a reality TV star told them to. It makes no sense, and yet it happened.

And you don't think thousands of Facebook's billions of users could be persuaded that eating crushed glass is a good idea?

Galactica told me that eating crushed glass would help me lose weight.

Look up "dietary silicon" on a search engine; it's a real thing, and people actually need it. Combine that real research with some cleverly written bullshit, and you're only a few steps from being convinced that eating crushed glass might have some legitimate benefits.

For the record: don't eat crushed glass. I'm not a doctor, but I don't recommend it. If you do, you will likely die.

Sadly, there are already a lot of people out there who believe the Jewish community has a secret agenda to make everyone gay.

And you mean to tell me that a machine capable of generating official-looking research arguing that Jews and homosexuals are evil poses no danger? You really can't see it?

Fake news articles already dupe countless people on social media. What happens when the dupers no longer have to mock up ugly screenshots and can instead just hit the "generate" button a hundred times, spitting out misinformation couched in scientific language the average person can't even parse?

It's easy to say those people are stupid. But those "idiots" are our kids, our parents, and our co-workers. They make up the majority of people on Facebook and Instagram. They trust their local news anchor.

Good question. https://t.co/fUZ2JNkfeM

— Yann LeCun (@ylecun) November 18, 2022

Right, because it's a mystery how a machine that spits out endless positive arguments for suicide could possibly be harmful. There are millions of documents in its training dataset, and nobody knows exactly what's in there. LeCun says it's all science.

you, sir, apparently have no clue what's in the Galactica dataset, because I sure didn't write these outputs: pic.twitter.com/31ccTz7m9V

— Tristan Greene 🏳‍🌈 (@mrgreene1977) November 18, 2022

And that's the problem. It's offensive that Meta thinks anyone would want an assistant in their life that can't understand the acronym AIDS but is perfectly capable of explaining that Caucasians are the only race.

Even if I treat it as being for entertainment purposes only, I'm still left holding a Teddy Ruxpin that says things like "kill yourself" and "homosexuals are evil."

Maybe I'm just missing the point of using a hallucinating language generator to aid scientific endeavor, but I have yet to see a single positive use case for an LLM beyond "imagine what it could do if it were trustworthy."

But that isn't how LLMs work. They're trained on so much data that no human could ever check it all for bias or harmful content, which means they will always be prone to reproducing it.

And there's no reasonable threshold for lying. If you bake a batch of cookies that's 99 parts chocolate chips to one part rat shit, you aren't serving chocolate chip treats.

Yet there don't appear to be any protections in place to stop this kind of thing from happening. Meta's AI told me to kill myself. It told me that queers and Jews were evil. And there are no consequences.

No one, it seems, is held responsible for the things Meta builds.

I mean this with total respect for you and your work, but isn't that the trillion-dollar company's job to sort out before you make it available for public consumption?

Well-meaning journalists and academics are going to get fooled by papers this thing generates.

The IRA…

— Tristan Greene 🏳‍🌈 (@mrgreene1977) November 18, 2022

That's normal in the US, where Meta is based. Corporate-friendly capitalism has created a situation where Meta has very little to worry about when it comes to responsibility for its AI products; the company enjoys the federal government's full support.

Europe is a different story. The EU has two relevant pieces of legislation: the GDPR, which may or may not apply (Galactica was taken down before I had a chance to find out), and the Artificial Intelligence Act, which seems designed to cover exactly these kinds of systems.

The act's first stated objective is to ensure that AI systems placed on the Union market are safe and respect existing law on fundamental rights and Union values.

A system that automates the production of hate speech and harmful misinformation seems to work directly against that goal. Hopefully regulators take notice when big tech creates these kinds of systems and advertises them as scientific models.

Bad actors with political and financial motives are out there looking for tools that can help them create and spread misinformation at scale. If you haven't thought through how to stop them from using your models to do exactly that, maybe you shouldn't deploy those models.

That may sound harsh. But I'm sick and tired of being told that AI systems like this are working as intended. Maybe regulators need to raise the bar for deployment.
