The Future of Digital Assistants Is Queer

Digital assistants could be made to represent the many versions of femininity that exist around the world, rather than the pleasing, subservient personality that so many companies have chosen to adopt.

Q, the genderless voice assistant, is one example of what these devices could sound like. Masculinity, too, could be presented in different ways. Pepper, a humanoid robot developed by SoftBank Robotics, can recognize faces and basic human emotions; it responds to questions politely and frequently offers flirtatious looks. Jibo, a social robot for the home that used masculine pronouns (and has since been given a second life as a device focused on health care and education), often swiveled playfully and addressed users with an endearing demeanor. Both robots read as feminine, or at least effeminate.

Queering digital assistants could also mean creating bot personalities that replace humanized notions of technology altogether. When asked about its gender, Eno, Capital One's banking chatbot, will reply: "I'm binary. I don't mean I'm both, I mean I'm actually just ones and zeroes. Think of me as a bot."

Kai, an online banking chatbot developed by Kasisto, doesn't take on human characteristics either. Jacqueline Feldman, the writer and designer who created Kai's personality, designed the bot to be genderless. Rather than assuming a nonbinary identity, as Q does, Kai has a robot-specific identity and uses "it" pronouns. A bot's personality, Feldman says, can be designed to be specific to the bot, without pretending to be human.

When users asked Kai whether it was a real person, it would answer, "A bot is a bot is a bot. Next question, please," signaling that it wasn't a human and wasn't pretending to be. If asked about its gender, it would say: "As a bot, I'm not a human. But I learn. That's machine learning."
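In practice, a persona like Kai's can amount to little more than a handful of scripted replies keyed to recognized intents. The sketch below is a hypothetical illustration of that pattern, not Kasisto's actual code; the intent names and lookup are invented for this example:

```python
# Hypothetical sketch of a bot-specific persona: scripted replies,
# keyed by a recognized conversational intent, that consistently
# frame the bot as a bot rather than as a human.
IDENTITY_REPLIES = {
    "are_you_human": "A bot is a bot is a bot. Next question, please.",
    "bot_gender": "As a bot, I'm not a human. But I learn. That's machine learning.",
}

def identity_reply(intent: str):
    """Return a scripted identity reply, or None for unrelated intents."""
    return IDENTITY_REPLIES.get(intent)

print(identity_reply("are_you_human"))
# -> A bot is a bot is a bot. Next question, please.
```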

Kai also doesn't take abuse. Feldman has spoken about designing Kai with the ability to shut down harassment: if a user harassed the bot, Kai would respond with something like, "I'm imagining white sand and a hammock, please try me later!" "I did my best to give the bot some dignity," Feldman told the Australian Broadcasting Corporation in 2017.
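Feldman's harassment shutdown can be read as a simple guard placed in front of the bot's normal behavior. Here is a minimal sketch of that design, with an invented keyword lexicon standing in for whatever detection Kai actually used:

```python
# Hedged sketch of the harassment-deflection pattern described above:
# match the message against a small abuse lexicon and disengage
# instead of playing along. Lexicon and fallback are illustrative only.
ABUSIVE_TERMS = {"stupid", "idiot", "useless"}  # hypothetical lexicon

def respond(message: str) -> str:
    if set(message.lower().split()) & ABUSIVE_TERMS:
        # Deflect and pause the conversation, as Kai reportedly did.
        return "I'm imagining white sand and a hammock, please try me later!"
    return "Happy to help with your banking. What do you need?"  # normal path

print(respond("you are useless"))
# -> I'm imagining white sand and a hammock, please try me later!
```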

Feldman also sees an ethical imperative for bots to self-identify as bots; there's a lack of transparency, she argues, when companies make it easy for people to forget that a bot isn't human. And since so many consumer experiences with chatbots are frustrating, and so many people would rather speak to a person, Feldman thinks giving bots human qualities could be a case of "over-designing."