Job recruitment tools that claim to use artificial intelligence to avoid gender and racial biases may not improve diversity in hiring, and could actually perpetuate those biases, according to researchers at the University of Cambridge.
In a paper published in the journal Philosophy & Technology, the researchers examined claims from several companies offering artificial intelligence-powered recruitment tools. Many of these tools claim to eliminate bias and promote diversity by hiding candidates' names, genders and other identifying information; some rank candidates based on scans of their resumes.
According to the researchers, who are based at the University of Cambridge's Centre for Gender Studies, these tools may instead promote homogeneity in hiring because they reproduce cultural biases about the "ideal candidate," who has historically been a white, European man.
Because the tools are built on a company's past hiring data, they may also favor candidates who resemble current employees.
Eleanor Drage, a study co-author and researcher at the University of Cambridge Centre for Gender Studies, said that there is little accountability for how these products are built or tested.
Drage said these companies reduce race and gender to insignificant data points, rather than treating them as systems of power that shape how we move through the world.
Amazon stopped using an artificial intelligence recruiting tool to review job applicants' resumes after finding it discriminated against women. The computer models it relied on were developed from resumes submitted to the company over a 10-year period, most of which came from men.
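To make the mechanism concrete, here is a minimal, hypothetical sketch in Python using synthetic data and scikit-learn. It is not based on Amazon's system or any vendor's actual product, and every variable name is illustrative: it simply shows how a model trained on historically skewed hiring decisions learns to reward a trait that merely correlates with the previously favored group.

```python
# Hypothetical illustration with synthetic data -- not any real hiring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Job-relevant skill, distributed identically across groups.
skill = rng.normal(size=n)

# Group membership (1 = historically favored group) and a proxy trait
# (e.g., a resume keyword) that is far more common in that group.
group = rng.integers(0, 2, size=n)
proxy = (rng.random(n) < np.where(group == 1, 0.8, 0.2)).astype(float)

# Historical hiring labels were biased: at equal skill, the favored
# group was hired more often.
hired = (skill + 1.0 * group + rng.normal(scale=0.5, size=n)) > 0.5

# Train a resume scorer only on skill and the proxy trait.
model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
print("weight on skill:      ", round(model.coef_[0][0], 2))
print("weight on proxy trait:", round(model.coef_[0][1], 2))
# The proxy trait gets a large positive weight, so new candidates who lack it
# score lower at the same skill level: the historical skew is reproduced.
```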
The use of artificial intelligence to manage job recruitment processes has increased. According to a 2020 poll of more than 300 human resources leaders cited by the authors of Sunday's paper, a majority of employers use virtual technology in their hiring practices.
While some companies argue that artificial intelligence can make hiring more cost- and time-effective, some experts argue that these systems tend to promote racial and gender bias in hiring. Several U.S. lawmakers aim to tackle biases in artificial intelligence systems as the technology continues to evolve and few laws exist to regulate it.
The White House released a "Blueprint for an AI Bill of Rights" this week, which argues that the use of artificial intelligence in hiring can lead to discrimination. The blueprint calls on companies to make users aware of when the technology is being used and to ensure that it does not discriminate.
The authors of Sunday's Philosophy & Technology paper suggested that companies developing artificial intelligence technologies focus on broader, systemic inequalities. Software developers should examine the categories used to sort and process candidates, and how the assumptions about gender and race embedded in those categories may promote discrimination. HR professionals, the researchers added, should try to understand how artificial intelligence tools work and what their limitations are.
The European Union's new draft legal framework on artificial intelligence classifies hiring software and performance-evaluation tools as high risk, meaning they would need to meet certain compliance requirements.