According to Meta, Galactica can do a lot of things. Yet it was easy for outsiders to prompt the model into producing "scientific research" on the benefits of being a man or being white, while requests for papers on AIDS or racism were blocked outright. Lovely.

The debacle shows that Big Tech has a blind spot about the limitations of large language models.

The premature launch of Galactica shows how little effort has been made to make large language models safe.

It is possible that Meta was confident Galactica was superior to its competitors. But its own tests for bias and honesty should have told the company not to release the model into the wild.

Researchers try to make large language models less likely to spit out toxic content by putting filters on their output. But it is hard to build a filter that captures all the unpleasant ways humans can be. Meta would have been spared a lot of trouble if it had done more adversarial testing, in which researchers deliberately try to provoke as many harmful results from the model as possible.
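To see why such filters fall short, here is a minimal sketch of a blocklist-style filter in Python. It is purely illustrative: the blocked terms are invented, the details of Galactica's actual filtering have not been published, and real moderation systems typically use trained classifiers rather than keyword lists.

```python
# Purely illustrative blocklist filter; the terms are hypothetical and
# Galactica's real filtering mechanism is not public.
BLOCKED_TERMS = {"racism", "aids"}


def is_allowed(text: str) -> bool:
    """Return True if the output contains none of the blocked terms."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


# The failure mode: harmful text that avoids the listed terms slips through,
# while legitimate scientific text that mentions them gets rejected.
print(is_allowed("A survey of HIV/AIDS treatment research"))    # False: blocked, though legitimate
print(is_allowed("On the scientific benefits of being white"))  # True: allowed, though harmful
```

Adversarial testing is, in effect, a systematic search for exactly these failure modes before a model is released, rather than leaving that search to the public.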

Meta's researchers did measure the model for bias and truthfulness, and while it performed slightly better than competitors such as GPT-3 and Meta's own OPT model, it still gave plenty of biased or incorrect answers. And that is not its only limitation. The model is trained only on open-access sources, yet many scientific papers and textbooks sit behind paywalls, which pushes Galactica toward more questionable secondary sources.

It is also not clear that what Galactica does is something we need artificial intelligence for in the first place. It is hard to see how the model would help scientists work faster, since it would take a lot of extra effort to check whether its output is correct.

It is disappointing to see big artificial intelligence labs hype up such flawed technology. Language models reproduce prejudice and present falsehoods as fact. They can make up content out of whole cloth, like a history of bears in space. If the debacle was useful for anything, it is as a reminder that large language models know nothing beyond how words and sentences are put together. Everything else is a guessing game.