Proposed EU rules could limit the kind of research that produces cutting-edge AI tools like GPT-3, experts warn in a new analysis.

The bloc's regulation of open source artificial intelligence would create legal liability for general purpose AI systems and undermine their development, according to a piece published by the Brookings Institution think tank. Under the proposed rules, open source developers would have to follow guidelines for risk management, data governance, technical documentation and transparency, and meet standards of accuracy and cybersecurity.

The author asserts that if a company were to deploy an open source AI system that led to some disastrous outcome, it could try to deflect responsibility by shifting blame onto the open source developers who built it.

This could further concentrate power over the future of AI in large technology companies, the piece's author, Alex Engler, wrote, and prevent research that is critical to the public's understanding of the technology. In the end, he argues, the EU's attempt to regulate open source could create a convoluted set of requirements that endangers open source contributors.

The European Commission, the EU's politically independent executive arm, released the text of the Artificial Intelligence Act, which aims to promote "trustworthy" AI deployment in the EU, and is soliciting input from industry ahead of a vote this fall. According to some experts, the Act as written would impose onerous requirements on open efforts to develop AI systems.

The legislation does carve out exceptions for some categories of open source AI, like those used exclusively for research and with controls to prevent misuse. But it would be hard, if not impossible, to prevent these projects from making their way into commercial systems, where they could be abused by malicious actors.

Stable Diffusion, for example, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.

Oren Etzioni, founding CEO of the Allen Institute for Artificial Intelligence, agrees that the current draft of the Act is problematic. The burdens introduced by the rules could have a chilling effect on the development of open text-generating systems, which he believes are enabling developers to catch up to Big Tech companies.

The road to regulation hell is paved with the EU's good intentions, Etzioni argues. Open source developers should not be subject to the same burden as those building commercial software, in his view; it should always be possible to provide free software "as is." A lone developer who can't afford to comply with EU regulations may simply be forced not to distribute their software at all, which would have a chilling effect on academic progress.

Etzioni believes EU regulators should focus on specific applications of AI instead of trying to regulate the technology wholesale. There is too much uncertainty and rapid change in AI, he said, for the regulatory process to be effective; applications of AI, not the underlying technology, should be the subject of regulation.

Not every practitioner believes the Act needs further amending, though. Mike Cook, a researcher with the Knives and Paintbrushes collective, thinks it is "perfectly fine" to regulate open source AI a little more heavily than necessary. Setting any standard, he believes, can be a way to show leadership globally.

Cook said he doesn't put much stock in the view of those who want to do away with all regulation and have free rein. It's okay to legislate in the name of a better world, he argued, rather than worrying about whether your neighbour is going to regulate less than you and make more money from it.

Among the uses banned outright under the EU's risk-based approach is China-style state social credit scoring. The approach regulates broad categories of AI rather than individual product types; targeting product types instead might require thousands of separate regulations, one for each product.

Providers of systems like open source large language models might still be held liable under the Artificial Intelligence Act, according to an analysis by Lilian Edwards, a law professor at Newcastle University, though she notes that the Act's language puts the onus on downstream deployers to manage a system's uses and impacts.

The way downstream users deploy and adapt an AI system, Edwards writes, could be as significant as how it was originally built. The Act doesn't take sufficient notice of this, she argues, and fails to regulate the many actors who get involved at various points in the AI supply chain.

At Hugging Face, CEO Clément Delangue, counsel Carlos Muñoz Ferrandis and policy expert Irene Solaiman say they welcome regulations that protect consumer safeguards but are wary of the Artificial Intelligence Act as drafted. It isn't clear, for instance, whether the legislation would apply to the pre-trained machine learning models at the heart of AI-powered software or only to the software itself.

This lack of clarity, coupled with the non-observance of ongoing community governance initiatives such as open and responsible AI licenses, might hinder upstream innovation at the very top of the AI value chain, the three argue. Placing overly heavy burdens on openly released features at the top of the AI innovation stream risks hindering incremental innovation, product differentiation and dynamic competition.

Regardless of the Act's final language, Hugging Face advocates for improved AI governance tools, like "responsible" licenses and model cards that include information such as the intended use of an AI system and how it works. According to Delangue, Muñoz Ferrandis and Solaiman, responsible licensing is becoming a common practice for major AI releases.
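For a sense of what such a model card looks like in practice, here is a minimal sketch using the huggingface_hub library's ModelCard utilities; the model ID, description and license choice are hypothetical placeholders, and the card is rendered from the library's default template.

```python
# A minimal sketch of generating a model card with huggingface_hub;
# the model ID, description and license here are illustrative only.
from huggingface_hub import ModelCard, ModelCardData

# Machine-readable metadata that tools and the Hugging Face Hub can parse.
card_data = ModelCardData(
    language="en",
    license="openrail",  # one of the "open and responsible AI" licenses
    tags=["text-generation"],
)

# Fill the library's default template with human-readable fields,
# including a description of the model's intended use.
card = ModelCard.from_template(
    card_data,
    model_id="my-org/demo-text-generator",  # hypothetical model ID
    model_description="A small text generator intended for research use only.",
)

card.save("README.md")  # model cards live in a repo's README.md
```

The resulting README.md pairs structured metadata, including the license, with prose documentation, which is the combination Hugging Face points to as a governance mechanism.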

Open innovation and responsible innovation, they said, are not mutually exclusive ends but complementary ones. The intersection of the two should be a core target for ongoing regulatory efforts, as it is right now for the AI community.

That may well be doable. Given the many moving parts involved in EU rulemaking, not to mention the stakeholders affected by it, it will likely be years before AI regulation in the bloc takes shape.