Powerful generative artificial intelligence systems can produce images and text on demand.

Regulators are moving: Canada, the UK, the US, and China have each developed their own approaches to regulating high-impact artificial intelligence. Yet general-purpose artificial intelligence seems to be more of an afterthought than the core focus. When Europe's new regulatory rules were first proposed, general-purpose foundation models were not mentioned at all. Since then, our understanding of where artificial intelligence is headed has changed quickly, and exempting today's models would turn the rules into paper tigers, incapable of protecting fundamental rights.

The paradigm shift in artificial intelligence is tangible. A handful of models, such as GPT-3, DALL-E, Stable Diffusion, and AlphaCode, are emerging as the foundation for almost all artificial intelligence systems. These foundation models feed a wide range of downstream applications, spanning marketing, sales, customer service, software development, design, gaming, and education.

Foundation models can power novel applications and business models, but they can also be misused to spread misinformation, write malicious software, and plagiarize copyrighted content. These models have been shown to encode biases, can present false information in a convincing manner, and could be used to radicalize people into extremism. If the flaws in these models are not deliberately governed, they will propagate into the many applications built on top of them, causing widespread problems.

One of the key drivers of the erosion of accountability is the challenge of assigning moral responsibility for outcomes caused by multiple actors. Accountability in the new artificial intelligence supply chains requires end-to-end transparency, including a feedback loop between the three levels of the supply chain: the developers of foundation models, the providers of downstream applications, and the end users they affect.

The developers of these models have themselves acknowledged the importance of transparency. DeepMind suggests that the harms of large language models should be addressed in collaboration with a wide range of stakeholders. Stanford University researchers have called for methodologies for the standardized measurement and benchmarking of foundation models. These models are too powerful to operate unassessed; without that information, how can high-risk downstream applications be meaningfully evaluated?