This comic is based on the episode Warped Reality on the TED Radio Hour.

Panel 1: "My name is Joy Buolamwini. I'm a poet of code on a mission to stop an unseen force that's rising. A force that I call the coded gaze – my term for algorithmic bias. Algorithmic bias, like human bias, results in unfairness. However, algorithms, like viruses, can spread bias on a massive scale at a rapid pace."
Panel 2: "When I look at algorithmic bias, what's potentially more nefarious is you don't have to intend to deceive or do harm. In fact, we can fool ourselves into thinking because it's based on numbers, it's neutral."
Panel 3: "The deception can be our own belief in a neutral system that doesn't actually exist in practice. That's because what we're training these systems on is a reflection of the inequalities in the world. Something I call power shadows."
Panel 4: "You might have a data set that's 75% male faces and over 80% lighter-skin faces. So what it means is the machine is learning a representation of the world that is skewed. But how are we getting such skewed data sets?"
Panel 5: "Oftentimes, people are gathering the data that's most readily available – focused on people who are public figures or public officials. So you're going to have an overrepresentation of white men."
Panel 6: "This is where the power shadows come in. Your selection of what's easiest to gather, what's most readily available, what's viewed as credible, is being shaped by social, cultural and political factors."
Panel 7: "This is where collective action is important. We need to have systemwide change so that companies can't operate with impunity. We're starting to see more bills come out around this. In Illinois, you have a bill where if an AI system is being used in hiring, it has to be disclosed that this is being used in the first place."
Panel 8: "Another step is saying before it can even be used, it has to be proven to show nondiscrimination. It could be in violation of Title VII of the Civil Rights Act. If you are able to say technology on the whole has done well, it probably means you're in a fairly privileged position."
Panel 9: "I always ask 'who can afford to say that?' The kids who are sitting in a McDonald's parking lot so they can access the internet to be able to attend school remotely? That has never been their reality. In the ideal future, before any kind of algorithmic decision-making system is even created, we're already in conversation with those who are going to be most impacted."
Panel 10: "When I critique tech, it's really coming from a place of having been enamored with it and wanting it to live up to its promises. I think that's a more optimistic approach than to believe in wishful thinking that isn't true."

Joy Buolamwini is a graduate researcher at the Massachusetts Institute of Technology. She founded the Algorithmic Justice League to create a world with more ethical and inclusive technology.

Katie Monteleone contributed to this story.