Given how powerful AI systems are, and the growing role they play in high-stakes decisions about our lives, homes, and societies, they receive surprisingly little scrutiny.

Thanks to the emerging field of AI audits, that is beginning to change.

When AI researchers Joy Buolamwini and Timnit Gebru audited commercial facial-recognition systems, they found that the systems recognized darker-skinned people less accurately than lighter-skinned people, with error rates highest for darker-skinned women. The audit spurred a body of critical work exposing the bias and discrimination built into facial-analysis algorithms. The hope is that similar audits of other AI systems will surface problems and prompt a broader conversation about how these systems are affecting our lives.
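
To make the method concrete: the core of such an audit is disaggregated evaluation, measuring a model's error rate separately for each demographic subgroup rather than reporting a single aggregate accuracy. Below is a minimal sketch in Python; the groups, labels, and toy records are hypothetical illustrations, not the audit's actual benchmark data.

```python
# Minimal sketch of a disaggregated audit: compute a classifier's
# error rate per demographic subgroup instead of one overall number.
# All groups, labels, and predictions here are hypothetical toy data.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if pred != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# An aggregate accuracy figure can hide large gaps between subgroups.
records = [
    ("lighter-skinned men",  "male",   "male"),
    ("lighter-skinned men",  "male",   "male"),
    ("darker-skinned women", "female", "male"),    # misclassification
    ("darker-skinned women", "female", "female"),
]

for group, rate in sorted(error_rates_by_group(records).items()):
    print(f"{group}: {rate:.0%} error rate")
```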

Regulators are catching up, and that is partly what is driving the demand for audits. A new law in the European Union will require big tech companies to conduct annual audits of their AI systems, and the bloc's upcoming AI Act will require audits of "high-risk" AI systems.

It is a great ambition, but there are many obstacles to overcome. Alex Engler told me that the few audits that do happen today are mostly ad hoc and vary widely in quality. Take HireVue, an AI hiring company, which implied in a press release that an external audit had found its algorithms free of bias. But the audit had not actually examined the company's models, and it was subject to a nondisclosure agreement, so there was no way to verify what it had found. It was essentially a publicity stunt.

One way the community is trying to address the shortage of auditors is through bias bounty competitions, which work much like cybersecurity bug bounties: participants are rewarded for finding bias and other flaws in AI systems. A group of volunteers has launched one such competition, which they hope will be the first of many.

Part of the idea is to create incentives for people to learn the skills needed to do audits, and to start building standards for what good audits should look like.

The growth of these audits suggests that one day we may see warnings about the dangers AI systems pose. Other sectors, such as chemicals and food, have regular audits to ensure their products are safe to use. Could something like this become the norm in AI?