One of my direct reports told me that I had apparently resigned. In fact, I had just been fired from a company where I had worked for a long time.
Thanks to organizing by current and former employees of the company, it did not succeed in defaming my reputation. The worker organizing that has been building in the tech world, often through the labor of people who are already marginalized, many of whose names we do not know, was the reason for my firing in the first place. Tech worker organizing and whistleblowing have continued since I was fired. The most publicized example was Frances Haugen's testimony before Congress, in which she argued that the company prioritized growth over all else, even when it knew the consequences of doing so.
I have seen this happen before. I was born and raised in Ethiopia, where a war broke out in November 2020. The effects of misinformation, hate speech and "alternative facts" on social media have been devastating. Many people, myself included, reported a genocidal call in Amharic to Facebook. The company said that the post did not violate its policies. Only after many reporters asked the company why this call to genocide did not violate its policies was the post removed.
Studies and articles show how various groups use YouTube to harass citizens, yet the platform has not received the scrutiny it deserves; the same issues play out there but are discussed far less. When I wrote a paper detailing the harms posed by models trained on data from these platforms, I was fired by Google.
When people ask what regulations need to be in place to safeguard us from unsafe uses of artificial intelligence, I always start with labor protections and antitrust measures. I can tell that some people find that answer disappointing, perhaps because they expect me to name regulations specific to the technology itself. But the #1 thing that would safeguard us from unsafe uses of artificial intelligence is increasing the power of those who speak up against its harms. California recently passed the Silenced No More Act, which makes it illegal to silence workers who speak out about racism, harassment and other forms of abuse in the workplace. This protection needs to be universal. We also need stronger punishment for companies that break existing laws, as Amazon has with its aggressive union busting. When workers have power, they create a layer of checks and balances on the tech billionaires whose decisions affect the entire world.
I see this monopoly even outside of big tech. I launched an artificial intelligence research institute that hopes to operate under incentives different from those of big tech companies and elite academic institutions. The same tech leaders who push out people like me also control the government's agenda for the future of artificial intelligence research. If I speak up and antagonize a potential funder, it affects not just me but the jobs of others at the institute. And while some laws attempt to protect worker organizing, no such protections exist in the fundraising world.
What is the way forward? We should not have the same people setting the agendas for big tech, research, government and the non-profit sector. We need alternatives. We need governments to invest in communities that benefit from technology, rather than pursuing an agenda set by big tech or the military. The current arrangement, in which a few people build harmful technology while others constantly work to prevent harm, unable to find the time, space or resources to implement their own vision of the future, is what really stifles innovation.
We need an independent source of government funding to nourish independent artificial intelligence research institutions that can serve as alternatives to the large tech companies and the elite universities intertwined with them. When we change the incentive structure, we will see technology that prioritizes the wellbeing of citizens, rather than a continued race to figure out how to kill more people more efficiently, or to make the most money for a handful of corporations around the world.
Timnit Gebru is the founder and executive director of the Distributed AI Research Institute (DAIR). She was formerly the co-lead of Google's Ethical AI team.