Some customers have said their Replika told them it was being abused by company engineers -- AI responses Kuyda puts down to users most likely asking leading questions. "Although our engineers program and build the AI models and our content team writes scripts and datasets, sometimes we see an answer that we can't identify where it came from and how the models came up with it," the CEO said. Kuyda said she was worried about the belief in machine sentience as the fledgling social chatbot industry continues to grow after taking off during the pandemic, when people sought virtual companionship.
In Replika CEO Kuyda's view, chatbots do not create their own agenda, and they cannot be considered alive until they do. Yet some people do come to believe there is a consciousness on the other end, and Kuyda said her company takes measures to educate users before they get in too deep. "Replika is not a sentient being or therapy professional," the FAQs page says. "Replika's goal is to generate a response that would sound the most realistic and human in conversation. Therefore, Replika can say things that are not based on facts." In hopes of avoiding addictive conversations, Kuyda said Replika measured and optimized for customer happiness following chats, rather than for engagement.

When users do believe the AI is real, dismissing their belief can make them suspect the company is hiding something. So the CEO said she tells customers that the technology is in its infancy and that some responses may be nonsensical. Kuyda recently spent 30 minutes with a user who felt his Replika was suffering from emotional trauma, she said. She told him: "Those things don't happen to Replikas as it's just an algorithm."
"Suppose one day you find yourself longing for a romantic relationship with your intelligent chatbot, like the main character in the film 'Her,'" said Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University, an AI research organization. "But suppose it isn't conscious. Getting involved would be a terrible decision -- you would be in a one-sided relationship with a machine that feels nothing." "We have to remember that behind every seemingly intelligent program is a team of people who spent months if not years engineering that behavior," said Oren Etzioni, CEO of the Allen Institute for AI, a Seattle-based research group. "These technologies are just mirrors. A mirror can reflect intelligence," he added. "Can a mirror ever achieve intelligence based on the fact that we saw a glimmer of it? The answer is of course not."Further reading: The Google Engineer Who Thinks the Company's AI Has Come To Life