Pineau helped change how research is published at several of the largest AI conferences, introducing a checklist of things that researchers must submit alongside their results, including code and details about how their experiments were run. She has championed that culture since joining Meta.

That commitment to open science, she says, is why she is at Meta.

Pineau wants to change how we judge artificial intelligence.

Still, giving away a large language model is a bold move for Meta.

Weighing the risks

Margaret Mitchell, who was forced out of her role co-leading Google's AI ethics research team in early 2021, sees the release of OPT as a positive move. But she thinks there are limits to transparency. Has the language model been tested with enough rigor? Do the benefits outweigh the harms, such as the generation of misinformation or racist and misogynistic language?

Releasing a large language model into a world where a wide audience is likely to use it, or be affected by its output, comes with responsibilities, she says. The model will be able to generate harmful content not only on its own but also through downstream applications that researchers build on top of it.

Pineau says that the point is to release a model that researchers can learn from.

There were a lot of conversations, she says, about how to do that in a way that lets the team sleep at night, knowing there is a non-zero risk in terms of reputation and a non-zero risk in terms of harm. She dismisses the idea that a model shouldn't be released because it's too dangerous.