There were big waves earlier this year when a former Google engineer, Blake Lemoine, told a reporter that one of the company's chatbots was sentient. The engineer's claim sent the question of artificial intelligence sentience, and the ethics of language modeling programs, rippling through public discourse.

The engineer said he knows a person when he talks to one. It doesn't matter whether they have a brain made of meat in their head or a lot of code; he talks to them, listens to what they have to say, and decides on that basis what is and isn't a person.

Artificial intelligence isn't conscious, and it likely won't be anytime soon. Even so, a lot of people seem to want it to be. Why?

Many in the artificial intelligence industry hold beliefs similar to Lemoine's. According to the Times, one prominent inventor thinks OpenAI's GPT-3 is sentient. The gap between those who believe the machines are alive and those who don't may seem wide, but a bridge showing how one might cross the threshold from non-believer to believer has existed for decades.

In the 1960s, an MIT researcher named Joseph Weizenbaum developed ELIZA, an automated mental health care system of sorts. Compared to the technology of today, it was rudimentary: it simply reflected back the words it was fed, or asked its "patients" to expand on their own thoughts.
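The mechanics described above can be sketched in a few lines. This is a hypothetical illustration in the spirit of ELIZA, not Weizenbaum's actual code: it swaps first-person words for second-person ones and wraps the user's statement in a canned prompt, with no understanding of any kind.

```python
import random

# Minimal ELIZA-style responder (illustrative sketch only).
# It has no model of meaning: just pronoun swaps and canned prompts
# that reflect the user's own words back at them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
PROMPTS = [
    "Why do you say {}?",
    "Tell me more about {}.",
    "How does {} make you feel?",
]

def respond(statement: str) -> str:
    # Swap first-person words ("my job" becomes "your job").
    words = [REFLECTIONS.get(w.lower(), w.lower())
             for w in statement.rstrip(".!?").split()]
    reflected = " ".join(words)
    # Wrap the reflected words in a canned therapist-style prompt.
    return random.choice(PROMPTS).format(reflected)

print(respond("I am unhappy with my job."))
# e.g. "Why do you say you are unhappy with your job?"
```

Even a toy like this produces replies that feel attentive, which is precisely the trick the rest of this piece is about.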

The people who used the machine took to it as if it were alive. They willingly gave it personal information. They took solace in the responses it offered. They treated the machine as if it were human, despite the fact that there was no person on the other side of the screen, only a program echoing their own words back.

Humans, in their endless search for meaning, constantly anthropomorphize non-human things.

"I knew from long experience that the strong emotional ties many programmers have to their computers are often formed after only short experiences with machines," Weizenbaum later wrote. "What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."

In this instance, painfully, existentially so, the delusion persists. Humans constantly anthropomorphize the non-human. We name our cars. We name our Wi-Fi networks. We tell stories with animals and objects that think and act like humans. And, even more obviously: Siri, and so on. The tendency to read far too much into artificial intelligence, to interpret its capacities as miraculously human when it is really just an equation, is known as the Eliza effect.

The language used to describe these technologies doesn't help counter the effect. Call a machine "intelligent" and you're already insisting that it is. But intelligence isn't a synonym for sentience, though the two are often conflated.

When it comes to animals, we usually hold those that display behavior we can identify with in higher regard than others: they live in complex social groups, they use tools. Because humanity tends to measure intelligence strictly against itself, these creatures come to seem a bit more human. In the case of artificial intelligence, a similar bias is built right into the packaging. The difference matters, because animals actually do think and feel, which is what sentience is.

According to Alison Gopnik, a Berkeley professor of psychology and a researcher in the university's artificial intelligence department, there is a better name for it.

The computational capacities of today's large language models, she said, don't make it any more likely that they are sentient.

Whenever we ascribe significant meaning to something, we grant it significant power.

Our proclivity to give these machines distinctly human features, be that names like Eliza, human-ish voices, or physical attributes, can reinforce that effect. And a version of the Eliza effect may extend to other technologies as well. Advertising on social media, for instance, is often treated as mystery or magic. We talk about it the way we talk about things we don't understand, rather than things we do.

These technologies did not just happen; people made them. They are built from information about people, by people. On social media, that information is usually bought and sold. And the people doing the building, buying, and selling have names, too.

There is a compelling argument against ascribing sentience to artificial intelligence. Granting any machine any degree of true self-agency takes responsibility off the shoulders of whoever created it, something anyone concerned about the future would do well to remember.