The first serious accident involving a self-driving car in Australia occurred in March this year. The driver of the Tesla Model 3 involved claims the car was in "autopilot" mode when it hit a pedestrian.

In the US, the highway safety regulator is investigating a series of accidents where autopilot-equipped cars crashed into first-responder vehicles with flashing lights.

A Tesla Model 3 collides with a stationary emergency responder vehicle in the US. NBC / YouTube

It can be difficult to determine who should be held accountable for incidents such as these because the decision-making processes of self-driving cars are often opaque and unpredictable. The field of explainable artificial intelligence may provide some answers.

Who is responsible when self-driving cars crash?

Self-driving cars may be new, but they are still machines made and sold by manufacturers. When they cause harm, we should ask whether the manufacturer has met its safety responsibilities.

The famous negligence case of Donoghue v Stevenson concerned a woman who found a decomposing snail in her bottle of ginger beer. The manufacturer was found negligent not because he was expected to predict or control the behavior of snails, but because his bottling process was unsafe.

By this logic, manufacturers and developers of artificial intelligence-based systems may not be able to control everything, but they can take precautions to reduce risks. They should be held accountable if their risk management, testing, audits, and monitoring practices are not good enough.

How much risk management is enough?

In complex software it is difficult to test for every bug in advance. How will developers and manufacturers know when to stop?

Courts, regulators, and technical standards bodies have experience in setting standards of care and responsibility for risky but useful activities.

The European Union's draft Artificial Intelligence regulation requires risks to be reduced as far as possible, without regard to cost. Australian negligence law permits less stringent management for less likely or less severe risks, or where risk management would reduce the overall benefit of the risky activity.

Legal cases will be complicated by AI opacity

Once we have a clear standard for risks, we need a way to enforce it. One approach could be to give a regulator the power to impose penalties.

Individuals harmed by artificial intelligence must be able to file a lawsuit. In cases involving self-driving cars, lawsuits against manufacturers will be important.

For such lawsuits to be effective, however, courts will need to understand the processes and technical parameters of the artificial intelligence systems involved.

For commercial reasons, manufacturers prefer not to reveal such details. But courts already have procedures for balancing commercial interests with an appropriate amount of disclosure to facilitate litigation.

A greater challenge may arise when the systems themselves are not transparent. Deep neural networks are a popular type of artificial intelligence system in which even the developers can never be entirely sure how or why the system arrives at a given outcome.

‘Explainable AI’ to the rescue?

A new wave of computer science and humanities scholars is interested in opening the black box of modern artificial intelligence.

The goal is to help developers and end-users understand how artificial intelligence systems make decisions, either by changing how the systems are built or by generating explanations after the fact.

An artificial intelligence system mistook a picture of a husky for a wolf. The system focused on snow in the background of the image, rather than the animal in the foreground.

(Right) An image of a husky in front of a snowy background. (Left) An 'explainable AI' method shows which parts of the image the AI system focused on when classifying the image as a wolf.
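To make the idea concrete, here is a minimal sketch of one simple post-hoc explanation technique, occlusion analysis, using a toy classifier as a stand-in for a real model. The classifier, image, and parameters are illustrative assumptions only; they are not the method or data behind the husky example above.

import numpy as np

# Post-hoc explanation by occlusion: grey out one patch of the image at a
# time and measure how much the model's confidence in the predicted class
# drops. Regions that cause large drops are the ones the model "focused on".
def occlusion_saliency(image, predict_proba, target_class, patch=16, stride=16):
    h, w = image.shape[:2]
    baseline = predict_proba(image)[target_class]
    saliency = np.zeros((h, w))
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.5  # grey out one region
            drop = baseline - predict_proba(occluded)[target_class]
            saliency[y:y + patch, x:x + patch] = drop
    return saliency  # high values = influential regions

def toy_classifier(image):
    # Hypothetical model that keys on bright "snow-like" pixels in the top
    # half of the frame rather than on the animal itself.
    snow_score = image[: image.shape[0] // 2].mean()
    return np.array([1.0 - snow_score, snow_score])  # [p(husky), p(wolf)]

image = np.random.rand(64, 64)  # placeholder for a husky photo
image[:32] = 0.9                # bright, snowy background
saliency = occlusion_saliency(image, toy_classifier, target_class=1)
print("Background was more influential than foreground:",
      saliency[:32].mean() > saliency[32:].mean())

Run on a real model, a map like this can show whether the system relied on the background rather than the animal, which is the kind of evidence a court might want to see.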

How this might be used in a lawsuit will depend on a number of factors. A key concern will be how much access the injured party is given to the artificial intelligence system.

The Trivago case

Our new research, which examines a recent Australian court case, provides an encouraging glimpse of what this could look like.

The Federal Court fined global hotel booking company Trivago $44.7 million for misleading customers about hotel room rates on its website and in TV advertising, in a case brought by the competition watchdog, the ACCC. A critical question was how Trivago's ranking algorithm chose the top-ranked offer for hotel rooms.

The Federal Court set up rules for evidence discovery with safeguards to protect Trivago's intellectual property.

Even without full access to Trivago's system, the ACCC's expert witness was able to produce compelling evidence that the system's behavior was not consistent with Trivago's claim of giving customers the "best price".
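As a rough illustration of the kind of black-box check such an expert might run, the sketch below compares the price of the top-ranked offer against the cheapest offer returned for the same search. The data structure, function names, and figures are hypothetical; they are not drawn from the evidence in the Trivago case.

from dataclasses import dataclass

@dataclass
class Offer:
    hotel: str
    price: float
    top_ranked: bool

def top_offer_was_cheapest(offers):
    # Black-box check: did the system put the cheapest offer in the top slot?
    top = next(o for o in offers if o.top_ranked)
    return top.price <= min(o.price for o in offers)

# Fabricated results for a single search; in practice these would be logged
# outputs of the ranking system, collected across many sampled searches.
search_results = [
    Offer("Hotel A", 199.0, top_ranked=True),
    Offer("Hotel A", 179.0, top_ranked=False),  # cheaper offer, ranked lower
    Offer("Hotel B", 210.0, top_ranked=False),
]
print(top_offer_was_cheapest(search_results))  # False: contradicts a "best price" claim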

The Trivago case shows how lawyers and technical experts can work together to address artificial intelligence opacity in court cases. However, the process requires close collaboration and deep technical expertise, and will likely be expensive.

Regulators can take steps now to streamline the process, such as requiring artificial intelligence companies to adequately document their systems.

The road ahead

Vehicles with various degrees of automation are becoming more common, and fully autonomous taxis and buses are being tested both in Australia and overseas.

Regulators, manufacturers, insurers, and users will all have roles to play in keeping our roads as safe as possible.

The article was written by Henry Fraser, a Research Fellow in Law, Accountability and Data Science, and Rhyle Simcock, a PhD candidate. The original article is worth a read.