EU product liability laws are set to be updated to address the risk of damage caused by artificial intelligence systems, as well as other liability issues arising from digital devices.

The proposal for revisions to long-standing EU product rules was presented today by justice commissioner Didier Reynders.

The EU believes its approach will give businesses legal certainty and help to foster consumer trust in their products.

According to Reynders, current liability rules aren't equipped to handle claims for damage caused by artificial intelligence; the Commission wants to change this and make sure that all consumers are protected.

The Directive contains disclosure requirements and a presumption of causality.

He said that victims will have a chance to prove their liability claims in court, because the right of access to justice only becomes effective when the parties have the same tools to make their case.

The proposal will make sure that such claims are not hindered by difficulties of proof.

Both individuals and businesses would be protected by the commission's proposal.

The draft legislation's scope isn't limited to the original maker of an artificial intelligence system; it covers a broader group of actors whose errors or omissions put people at risk of harm. Rather than limiting liability to the producer, it opens it up to the whole supply chain, a broader approach than the earlier proposal.

That is an interesting contrast with civil society criticism of the AI Act for its lack of rights and avenues for individuals to seek remedy when they are negatively impacted by AI systems.

According to the Commission, the directive will make it easier for people to file a lawsuit if they are harmed by an artificial intelligence system. Safety-oriented rules aim to reduce risks and prevent damage, but those risks can never be eliminated completely, so liability provisions are needed to make sure compensation is realistic when damage occurs. The purpose of the AI Act is to prevent damage, while the Liability Directive lays down a safety net for compensation if damage happens anyway.

The principle is simple, according to Reynders: when a product that functions thanks to artificial intelligence causes damage as the result of an error or omission, the new rules apply.

Scenarios that would be covered include damage caused by a drone delivering packages whose operator did not respect the user instructions; a manufacturer failing to apply the necessary measures for recruitment services using artificial intelligence; and an operator giving incorrect instructions to a mobile robot that then collides with a parked car.

The opacity of these technologies, their unique nature and their extreme complexity can make it very difficult for victims to obtain compensation for damage caused by such products.

The directive proposes to circumvent the "black box" of artificial intelligence by giving victims the power to obtain documents or recorded data generated by an artificial intelligence system to build their case, with provisions also put in place to protect commercially sensitive information.

A rebuttable presumption of causality will be introduced in order to alleviate the burden of proof.

If the victim can show that a liable person committed a fault by not complying with a certain obligation, the court can presume that this non-compliance caused the damage.

He said that a potentially liable person could rebut the presumption, for example by showing that another cause led to the damage.

All types of damage that are currently compensated for in each Member State's national law are covered by the directive.

According to the Commission, the new rules will cover compensation of any type of damage covered by national law, which raises an interesting possibility of privacy litigation.

Under the incoming directive, could Facebook be sued for privacy harms? There's a thought.

The Commission doesn't agree that the revised liability framework empowers citizens to sue directly for damages over infringements of their fundamental rights. It says the aim of the draft AI Act is to prevent such incidents from happening in the first place; if they nevertheless occur, people can turn to national liability rules for compensation.

It also says that a damages claim can be brought for data loss.

Not just high risk AIs…

While Reynders made mention in today's press briefing of the "high risk" category of artificial intelligence systems contained in the Artificial Intelligence Act, he said that limiting the liability rules to that category is not actually the Commission's intent.

Revisions to the EU's existing Product Liability Directive, also adopted today, pave the way for the Artificial Intelligence Liability Directive to slot in uniform rules around artificial intelligence products, as well as some further modernization focused on liability rules for digital products, such as allowing compensation for damage when products are made unsafe by software updates or by digital services needed to operate them.

The EU wants to bring in mandatory cybersecurity requirements for smart products that apply throughout their lifecycle.

It also wants to create legal certainty to help support circularity as part of its push for a green transition, another reason for revising product liability rules that date back to 1985.

The Product Liability Directive has been a cornerstone of the internal market for four decades. Thanks to today's proposal, it will be able to respond to the challenges of the decades to come. The new rules will reflect global value chains, foster innovation and consumer trust, and provide stronger legal certainty for businesses involved in the green and digital transitions.

Under the EU's co-legislative process, the Commission's product liability proposals will be debated and possibly amended by the European Parliament and the Council before they can become EU law, so it's not yet clear how the policy package might change.
