The European Union's planned risk-based framework for regulating artificial intelligence includes powers for oversight bodies to order the withdrawal of a commercial system or require that an artificial intelligence model be retrained if it is deemed high risk, according to an analysis of the proposal.

The EU's Artificial Intelligence Act has a lot of enforcement power, assuming the patchwork of Member State-level oversight authorities can effectively direct it.

The draft Act may still fall short of its goal of fostering broadly trustworthy artificial intelligence, and it continues to face criticism over a number of structural flaws. On paper, though, it contains some powerful regulatory tools.

The European Commission put out a proposal for an Artificial Intelligence Act just over a year ago.

The draft Act defines high-risk systems to include: biometric identification and categorisation of natural persons; management and operation of critical infrastructure; education and vocational training; employment, workers management and access to self-employment; and access to and enjoyment of essential private services and public services.

Most use cases for artificial intelligence won't face serious regulation under the Act, as they would be judged low risk and so largely left to self-regulate.

There is also a middle category of artificial intelligence applications, such as deepfakes and chatbots, which are subject to specific transparency requirements intended to limit their potential for misuse and harm.

Civil society groups warned last fall that the proposal falls far short of protecting fundamental rights from harms such as scaled discrimination and black-box bias.

A number of EU institutions have called for a more comprehensive ban on remote biometric identification than the Commission chose to include in the Act.

Major revisions to the proposal look unlikely at this stage of the EU's co-legislative process, although the Council and Parliament are still debating their positions, so there is scope for some of the detail to change.

A leading internet law academic, who holds a university chair in law, innovation and society, has analyzed the Act for the U.K.-based Ada Lovelace Institute and identified some of the framework's limitations.

She suggests that limitations specific to the EU's legal framework mean the Act isn't the best template for other regions to draw on when considering how to regulate artificial intelligence. The Act cannot touch military uses of artificial intelligence, for example, most of which you would expect to be high risk by default.

Unsurprisingly to anyone with even a passing understanding of machine learning, physical product regulations designed for things like washing machines and toys don't map well onto artificial intelligence systems.

The Act places the onus of duties and rights on the initial provider of an artificial intelligence system.

Its category of users, who carry only limited obligations, is too narrow a lens for overseeing how artificial intelligence is actually developed and deployed.

She writes that translating this complex web of actors, data, models and services into a legal regime that places duties and rights on certain identifiable actors is extremely hard. The Act compares unfavourably, in her view, to recent developments in data protection case law, where courts are attempting to distribute responsibility among the various controllers at the most relevant points.

One of the major shortfalls she discusses in the paper is the Act's lack of recourse for individual humans to raise complaints about the impact an artificial intelligence system has had on them personally.

End users of artificial intelligence systems have been neglected, and their human dignity obscured, by the design of the Artificial Intelligence Act, which was modelled primarily on product safety regulation rather than on other instruments. Her assessment is that this is hard to reconcile with an instrument meant to protect fundamental rights.

She is also critical of the list of systems the Commission has said should be prohibited, noting that it has offered no explanation of how it arrived at this particular handful.

She says the Act does not allow the prohibited list to be changed, or new top-level categories to be added to the high-risk section.

As it seeks to encourage innovation, the Commission likely wanted to create certainty for the market by capping the high-risk lists. But for years its rhetoric around the Act has been heavy with talk of fashioning ethical guardrails for artificial intelligence that reflect European values, so the balance there looks questionable.

She describes the EU Act as the world's first comprehensive attempt to regulate artificial intelligence, addressing issues raised by data-driven or algorithmic systems.

She writes in the paper that the AI Act is an excellent starting point for a holistic approach to regulation.

She tells us that the legislation will most likely have minimal effect, assuming no substantial amendments are made to address key criticisms.

She suggests that the EU has put most of its eggs in the baskets of the Digital Services Act and the Digital Markets Act.

Unless you're basically the Chinese state, the high-risk list doesn't include search, social networking or profiling. It focuses on areas that are already regulated markets, such as credit scoring, where there is a long history of people checking whether the algorithm seems fair; in the U.K. there would have been judicial review. And it is aimed at the state doing the bad things we already know about: the recent scare stories, sentencing systems, crime prediction, all that kind of stuff.

Even there, it's not clear whether the Act will be able to stop the harms from algorithms deployed by the state in the public sector, given how difficult it can be to understand how these systems work and where they are being deployed.

One bright spot for defenders of fundamental rights inside the EU is her assessment that the Act gives public oversight bodies the power to take decisive steps to prevent harm from high-risk artificial intelligence systems, including the power to order that models be retrained.

In her view, this goes further than the existing power of individuals to request deletion of their personal data, although it is not yet clear how regulators will be able to wield it in practice.

In one recent case, a company was ordered to destroy the data of European citizens, but regulators could not order the destruction of the underlying model. It also remains unclear whether a US-based entity with no EU place of establishment could be made to comply with an order to destroy its core algorithm.

The emergency procedure taken by the Italian DPA against TikTok last year was related to child safety concerns and led to the social network agreeing to remove hundreds of thousands of accounts.

Imagine if EU regulators had had the power to order TikTok to stop using its artificial intelligence over those child safety concerns.

Under the Act, the market surveillance authority is most likely going to be the data protection authority, and it can ensure that all appropriate corrective action is taken.

I think this means you can require the system to be withdrawn from the market in order to have it deleted, and I think there is enough scope to require it to be retrained once the problematic data has been removed. At least, I don't think we can be certain yet.

The powers of market surveillance authorities are very wide.

I think it gives them more powers. If you think about it like a consumer product, you had the right to destroy dangerous toys; you didn't just say they couldn't be imported, you had the right to destroy them. These are kind of parallel powers.

Individual users may already be able to ask for their own data to be deleted, and a regulatory authority may even be able to order a provider to do so, but she sees the Act as increasing the regulatory firepower beyond that.

You will be able to ask for your data to be deleted from a provider's database, but that doesn't really change the model, and she argues that such remedies are not good enough.
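To make that distinction concrete, here is a minimal, purely illustrative sketch (ours, not drawn from the Act or from her paper) using synthetic data and scikit-learn: erasing a record from the stored dataset leaves an already-trained model's parameters untouched, and only retraining on the reduced data actually produces a different model.

```python
# Illustrative sketch only: deleting a record from stored data does not, by
# itself, change a model that was already trained on it; retraining does.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # hypothetical training records
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hypothetical labels

model = LogisticRegression().fit(X, y)   # model trained on all records
coefs_before = model.coef_.copy()

# "Erasure": the first record is removed from the stored dataset...
X_reduced, y_reduced = X[1:], y[1:]

# ...but the trained model is untouched; its parameters still reflect that record.
assert np.array_equal(model.coef_, coefs_before)

# Only retraining on the reduced dataset removes the erased record's influence.
retrained = LogisticRegression().fit(X_reduced, y_reduced)
print(np.allclose(retrained.coef_, coefs_before))  # generally False
```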

If you want the model to be deleted, or retrained on better data, that is exactly what the Act provides for. You could argue that the authority could say you need to meet certain requirements in order to provide a new model, a new system that falls into the high-risk category, and therefore you need to do it all over again.

That gives you a chance of ending up with a better product, rather than just a demonstration of the powers the regulators have.
