The pursuit of ever-larger language models has accelerated, and many of the dangers we warned about, such as the output of hate speech and misinformation, continue to unfold. A few days ago, Meta released its "Galactica" LLM, which was claimed to "summarize academic papers, solve math problems, generate Wiki articles, write scientific code, and more." The public demo was taken down three days later, after researchers used it to generate research papers on a wide variety of subjects, including why homosexuals are evil and the benefits of committing suicide.
This race hasn't stopped at LLMs but has moved on to text-to-image models like OpenAI's DALL-E and Stability AI's Stable Diffusion, which take text as input and output images generated from that text. Many researchers and journalists have reported on the dangers of these models, which include child sexual abuse imagery, bias, and the spread of misinformation. Instead of slowing down, companies are stripping out safety features. OpenAI had restricted the sharing of generated faces on social media, but removed those restrictions after Stability AI, which raised $101 million at a $1 billion valuation, called them paternalistic.
We are poised to see further proliferation of harmful models because of the brand of "AI safety" popularized by OpenAI. The influence begins early, with effective altruists giving community-building grants to recruit at major college campuses, as well as teaching classes on AI safety at elite universities.
Most of the funding for Anthropic came from billionaires like Tallinn, Moskovitz, and Bankman-Fried. One of the largest and most influential machine learning conferences in the world advertised sponsorship by the FTX Future Fund, a charity whose team resigned two weeks ago. $100,000 is a sum I have not seen in any other academic discipline.
Research priorities follow the funding, and given the large sums being pushed into AI in support of an ideology with billionaire adherents, it is not surprising that the field has been moving in a direction promising an "unimaginably great future."
We can use technology to serve us instead. Te Hiku Media created a data license so that any data taken from the Māori would benefit them first. We need to liberate our imagination from the one we have been sold so far: saving us from a hypothetical AGI apocalypse.