Artificial intelligence researchers face an accountability problem: How do you ensure that decisions are accountable when the decision maker is not a responsible person but an algorithm? Right now, only a handful of people and organizations have the resources and power to automate decision-making.
Organizations already use AI to approve loans and to sentence defendants. But the foundations these intelligent systems are built on can be biased. Bias from the data, from the programmer, or from a powerful company's bottom line can lead to unintended consequences. That was the warning AI researcher Timnit Gebru delivered at Tuesday's RE:WIRED talk.
Companies were claiming to be able to assess the likelihood that someone would commit a crime again, Gebru said. "That was frightening for me."
Gebru was a top engineer at Google specializing in AI ethics, where she co-led a team that stood guard against algorithmic racism and sexism. She is also the founder of Black in AI, a nonprofit that aims to improve the inclusion, visibility, and health of Black people in her field.
Google forced her out last year, but she has not given up her fight to prevent the unintended consequences of machine learning algorithms.
Gebru spoke Tuesday with WIRED senior writer Tom Simonite about incentives in AI research, worker protections, and her vision for an independent institute for AI ethics, accountability, and transparency. Her central message: AI needs to slow down.
She said, "We don't have the time to consider how it should be built because we're constantly just putting out fires."
Gebru, an Ethiopian refugee who attended public school in the Boston suburbs, was quick to notice America's racial dissonance. She was not happy with the way lectures referred to racism in the past tense, she told Simonite, and she has seen a similar misalignment many times in her tech career.
Gebru began her career in hardware. She changed course when she saw the barriers to diversity in AI research and realized that the technology could harm people who are already marginalized.
She said, "The confluence of that and other things got me moving in a new direction, which is to try to understand the negative societal effects of AI."
For two years, Gebru co-led Google's Ethical AI team with computer scientist Margaret Mitchell. The team built tools to protect Google's product teams from AI mishaps. Over time, though, Gebru and Mitchell realized they were being left out of meetings and email threads.
The GPT-3 language model, which can sometimes produce coherent prose, was released in June 2020, and Gebru's team grew concerned about the excitement surrounding it.
Gebru recalled the popular sentiment: "Let's make bigger and bigger and more complex language models." "We needed to say, 'Let's just slow down and think about the pros and cons of this, and other ways we could do it.'"
Her team contributed to the writing of a paper titled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?"
Other employees at Google weren't happy. Gebru was asked to retract the paper or remove the names of Google employees from it. She responded with a request for transparency: Who had asked for such severe action, and why? Neither side budged. Gebru then learned from one of her direct reports that she had "resigned."