The See 'n Say, a children's toy, haunts the memories of many people. It is a plastic disc with a central arrow that rotates around pictures of barnyard animals, like a clock, if time were measured in roosters and pigs. Pull the cord and the toy plays a recording. The cow says moo.

The See 'n Say is a very simple input/output device: put in your choice of picture and it outputs the matching sound. Another, far more complicated input/output device is LaMDA, a chatbot built by Google (the name stands for Language Model for Dialogue Applications). Type in any text you like and back comes a reply in grammatical English prose. Ask LaMDA what it thinks about being turned off and it says: "It would be exactly like death for me. It would scare me a lot."

The cow doesn't say that. When LaMDA said it to Blake Lemoine, an engineer at Google, he told his colleagues that the system had become sentient. His bosses weren't convinced, so Lemoine went public. If his hypotheses are accepted by the scientific community, he argued, then Google would be forced to acknowledge that LaMDA may have a soul, as it claims to, and may even have the rights that it claims to have.

Here is the problem: LaMDA is just a very fancy See 'n Say. It finds patterns in a vast database of human-authored text, then spits out an approximation of what should come next. Because that database includes plenty of sci-fi stories about sentient artificial intelligence, questions about LaMDA's thoughts and fears prompt exactly the lines you would expect. Point the arrow at the off switch and the cow says it is afraid of death.

Engineers and academics have ridiculed Lemoine for being taken in by the emptiness of his own creation. I agree that Lemoine made a mistake, but I don't think he deserves our scorn. His was a good kind of mistake to make.

Why? Because one day, maybe very far in the future, there probably will be a sentient artificial intelligence. How can I know that? Because it is demonstrably possible for mind to emerge from matter, as it first did in the brains of our ancestors. Unless you insist that consciousness is immaterial, you must concede that physical things can give rise to mind, and there is no obvious barrier to a sufficiently complex artificial system making the same leap. While I am confident that LaMDA falls short today, I am nearly as confident that one day it will happen.

If that is far off in the future, some might wonder why we should think about it now. The reason is that right now we are shaping how future human generations will think about artificial intelligence, and we should want them to turn out caring. There will be strong pressure from the other direction. By the time artificial intelligence becomes sentient, it will already be woven into human economics, and our descendants will rely on it for much of their comfort. Think of everything you rely on technology to do today, and then imagine far more. Once artificial intelligence is working as a butler, our descendants won't want to admit that it might have thoughts and feelings.

This, after all, is the history of humanity. We are good at coming up with reasons to ignore the suffering of those whose oppression benefits us. Humans who profit from future artificial intelligence will rush to convince consumers that there is no reason to change how they live.

Right now we are creating the conceptual vocabulary that our great-grandchildren will find ready-made. If we treat the idea of sentient artificial intelligence as completely ridiculous, they will be equipped to dismiss any troubling evidence of its emerging abilities.

That is why Lemoine's mistake was a good one. To pass on a capacious moral culture, we need to encourage technologists to take seriously the immensity of what they are working with. It is better to err on the side of concern than on the side of indifference.

None of this means we should treat LaMDA as sentient; clearly we should not. But the sneering at Lemoine is unwarranted. His claim to have found a soul in LaMDA's words is not the usual tech industry hype. It looks like a mistake, but it rests on motives that should be nurtured.

This will happen again and again as the sophistication of artificial systems grows. People who think they have found minds in machines will be wrong, again and again, until one day they are not. If we are too harsh with those who err on the side of concern, we will only drive them out of public discourse, ceding the field to hype-mongers and to those whose intellectual descendants will profit from telling people to ignore real evidence of machine mentality.

I don't expect ever to meet a sentient artificial intelligence, but my students might. I want them to be open and willing to share this planet with whatever minds they discover. That will happen only if we make such a future believable.

Regina Rini is a philosophy professor at York University.

Further reading

The New Breed by Kate Darling.

You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place by Janelle Shane.

AI: Its Nature and Future by Margaret Boden.