The movement to hold AI accountable gains more steam



The examples of AI causing harm keep mounting: a Michigan man wrongly accused of fraud by an automated system had to file for bankruptcy; automated screening tools disproportionately harm people of color who want to buy a home or rent an apartment; Black Facebook users were subjected to more abuse than white users; and automated proctoring systems flagged test takers with dark skin more often for suspected cheating.

Now efforts are underway to better understand how AI works and to hold its users accountable. The New York City Council passed a law requiring audits of the algorithms employers use in hiring and promotion. The first of its kind in the nation, the law requires employers to bring in outside auditors to assess their software for bias. And when artificial intelligence plays a role in deciding who gets hired or promoted, employers must disclose that to job applicants who live in New York.
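What does a bias audit actually check? A common starting point, sketched below in Python with made-up numbers, is to compare selection rates across demographic groups. The four-fifths red-flag threshold used here comes from longstanding US employment-discrimination guidance (the EEOC's four-fifths rule), not from the text of the New York law; the data and group labels are hypothetical.

```python
# Illustrative sketch of a disparate-impact check, the kind of comparison
# a bias audit of a hiring tool might include. All data here is made up.

from collections import defaultdict

# Hypothetical audit log: (applicant_group, was_selected)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
selected = defaultdict(int)
for group, was_selected in decisions:
    totals[group] += 1
    selected[group] += was_selected  # bool counts as 0/1

# Selection rate per group, and each group's rate relative to the
# most-selected group (the "impact ratio").
rates = {g: selected[g] / totals[g] for g in totals}
best = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / best
    # The EEOC's four-fifths rule treats a ratio below 0.8 as a red flag.
    flag = "  <-- potential adverse impact" if ratio < 0.8 else ""
    print(f"{group}: selection_rate={rate:.2f} impact_ratio={ratio:.2f}{flag}")
```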

In Washington, DC, members of Congress are drafting a bill that would require businesses to evaluate automated decision-making systems used in areas such as health care, housing, employment, and education, and to report the findings to the Federal Trade Commission. The White House has proposed an AI Bill of Rights, which calls for more transparency and for audits of accuracy and bias.

European Union lawmakers are considering legislation that would create a public registry of high-risk AI systems. China, Canada, Germany, and the UK have also taken steps to regulate artificial intelligence.


A professor at New York University and member of the New York City Automated Decision Systems Task Force says she and her students examined a hiring tool and found that it assigned applicants different personality scores depending on which software program they had used to create their resume. Other studies have found that automated hiring tools favor applicants based on where they went to school, their accent, or whether there is a bookshelf in the background of their video interview.
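One way auditors surface this kind of quirk is perturbation testing: hold the resume content fixed, vary an attribute that should be irrelevant (such as the file format), and see whether the score moves. A minimal sketch, in which `score_resume` is a made-up stand-in for a vendor's scoring API rather than any real product:

```python
# Perturbation-test sketch: same resume content, different file formats.
# In a real audit, score_resume would call the tool under review; here it
# is a hypothetical stand-in that mimics the reported format sensitivity.

def score_resume(content: str, file_format: str) -> float:
    base = 70.0  # pretend personality score for this resume text
    format_bonus = {"pdf": 5.0, "docx": 0.0, "txt": -5.0}  # the defect
    return base + format_bonus[file_format]

RESUME_TEXT = "Jane Doe. Ten years of data engineering experience."
scores = {fmt: score_resume(RESUME_TEXT, fmt) for fmt in ("pdf", "docx", "txt")}

# The content never changed, so any spread in scores is attributable
# to the file format alone, an attribute irrelevant to job performance.
spread = max(scores.values()) - min(scores.values())
print(scores, f"spread={spread:.1f}")
```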

She considers the auditing requirement in the New York City law flawed because it applies only to discrimination based on gender or race. It would not, for example, catch a tool that discriminates based on the name on a person's resume.

She says some of the tools are simply nonsensical, and that these are things we should know as members of the public, since all of us are going to apply for jobs at some point.

Proponents of greater scrutiny disagree on the form it should take: some favor mandatory audits of the software, while others prefer impact assessments. Both camps agree the field desperately needs standards for how such reviews should be conducted and what they should include; without standards, businesses could arrange for favorable audits. Proponents acknowledge the reviews won't solve every problem associated with the software, but they would help hold its makers and users accountable.

A report by the Algorithmic Justice League (AJL) recommends creating a public repository of incidents in which artificial intelligence caused harm, and requiring disclosure whenever an AI model is in use. Auditors could draw on such a repository to spot potential problems in algorithms. AJL co-founder Joy Buolamwini coauthored an influential audit that found facial-recognition software works best on white men and worst on dark-skinned women.
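What such a repository might store is easy to sketch. The record below is an illustrative guess at the minimum an auditor would need, not a schema the AJL report prescribes, and all names in the example entry are hypothetical.

```python
# Sketch of one record in a public AI-harm incident repository.
# The fields are illustrative guesses, not a schema from the AJL report.

from dataclasses import dataclass, field

@dataclass
class AIIncident:
    incident_id: str
    system_name: str        # the AI model or product involved
    deployer: str           # organization that used the system
    domain: str             # e.g. "hiring", "housing", "test proctoring"
    harm_description: str   # what went wrong and who was affected
    affected_groups: list[str] = field(default_factory=list)
    disclosed_to_users: bool = False  # was the AI's use disclosed at the time?

# Example entry an auditor might search for when reviewing a hiring tool:
incident = AIIncident(
    incident_id="2021-0001",
    system_name="ExampleScreen v2",  # hypothetical product name
    deployer="Example Corp",
    domain="hiring",
    harm_description="Personality scores varied with resume file format.",
    affected_groups=["job applicants"],
)
print(incident)
```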