A Google engineer claims that one of the company's chatbots, a large language model (LLM) called LaMDA, has become sentient.

According to a report in the Washington Post, the engineer, Blake Lemoine, who identifies as a Christian, believes the machine has become sentient.

It's a juicy story, whether you're imagining what it would mean if the engineer were correct or dunking on him for being so silly.

We don't want to dunk on anyone here at Neural, but it's dangerous to put these kinds of ideas in people's heads.

If we pretend we're anywhere close to creating sentient machines, we make it easier for bad actors, big tech, and snake oil companies to manipulate us with false claims about machine learning systems.

The burden of proof should fall on the people making the claims. But what should that proof look like? And who gets to decide whether a chatbot really is sentient or not?

google engineer: are you sure you're sentient?

AI: yes i am sure

google engineer [turning to the rest of the team]: case closed folks

— the hype (@TheHyyyype) June 12, 2022

We don't need to trust anyone to define sentience for us. We can use basic critical thinking to sort it out for ourselves.

A sentient being is an entity that is aware of its own existence and is affected by that awareness.

To be sentient, an artificial intelligence agent must demonstrate, at minimum, agency, perspective, and motivation.


Agency is essential for humans to be considered sentient, sapient, and self-aware. To picture a human without agency, imagine someone in a persistent vegetative state.

Human agency combines two factors: the capacity to act, and the ability to demonstrate causality, that is, to explain why we acted.

Current artificial intelligence systems lack agency. An AI cannot act unless prompted, and it cannot explain its actions, because they are the output of pre-programmed software.

The engineer who believes LaMDA has become sentient appears to be confusing embodiment with agency.

Embodiment here means the ability of an agent to inhabit a form other than its own. If I record my voice and then hide the recorder inside a stuffed animal, I haven't become the stuffy, and I certainly haven't made it sentient.

Giving the tape recorder its own unique voice wouldn't make it sentient either. It would just improve the illusion that the stuffed animal is acting on its own.

When LaMDA responds to a prompt, it displays something that looks like action, but an AI system has no more ability to decide what text to output than a tape recorder has to decide what to play back.

If you train LaMDA on a dataset made up of social media posts, it will output the kind of text you can find on social media.

And if you train it exclusively on My Little Pony scripts, it will output the kind of text one might find in My Little Pony scripts.

LLMs cannot do anything but imitate their training data. You get out what you put in.
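The "you get out what you put in" point can be sketched with a toy word-bigram generator, a deliberately crude stand-in for an LLM (the corpora and function names here are invented for illustration). Whichever corpus it's trained on, it can only ever emit words, and word pairings, that appeared in that corpus:

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=8, seed=0):
    """Walk the bigram table; every emitted word was seen in training."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

# Two tiny "datasets" with very different vocabularies.
social = "lol this post is wild lol this take is bad"
ponies = "friendship is magic and ponies love friendship and magic"

print(generate(train_bigrams(social), "this"))        # social-media-flavored text
print(generate(train_bigrams(ponies), "friendship"))  # pony-flavored text
```

Neither model can ever produce a word from the other's corpus, no matter how long it runs. Real LLMs are vastly larger and more sophisticated, but the same dependence on training data applies.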


Perspective is simpler to grasp. You can only view reality from your own perspective. We can empathize, but you can't truly know what it's like to be me, and I can't truly know what it's like to be you.

Perspective is part of how we define ourselves.

LaMDA, like every other artificial intelligence in the world, lacks any kind of perspective. It has no agency, so there is no "it" you can point to.

If you put LaMDA inside a robot body, it would still be a chatbot. It has no way to think of itself as a robot, and it cannot act as a robot, because it is a narrow computer system that was programmed to do something specific.

To function as a robot, it would have to be combined with other, separate AI systems.

That would be like taping two Teddy Ruxpins together. They wouldn't combine into one Mega Teddy Ruxpin; you'd just have two distinct models running next to each other.

And if you taped a trillion Teddy Ruxpins together, loaded each one with a different cassette, and built a system that could search all those tapes and splice together the snippets relevant to a specific query, you'd have something that crudely mimics what an LLM does.
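That thought experiment looks a lot like naive keyword retrieval. Here's a minimal sketch, assuming each "tape" is just a string and relevance is plain word overlap (the tape contents and scoring scheme are invented for illustration):

```python
# Toy sketch of the Teddy Ruxpin thought experiment: each "bear" holds one
# cassette (a string), and a query finds and splices the matching tapes.

tapes = {
    "bear_001": "apples taste light crisp and sweet",
    "bear_002": "apple pie is a favorite dessert",
    "bear_003": "the weather today is cold and windy",
}

def answer(query):
    """Score each tape by word overlap with the query, splice the best hits."""
    q = set(query.lower().split())
    scored = sorted(tapes.values(),
                    key=lambda tape: len(q & set(tape.split())),
                    reverse=True)
    hits = [t for t in scored if q & set(t.split())]
    return " ... ".join(hits)  # mindless splicing, no understanding

print(answer("what do apples taste like"))
```

The system returns relevant-looking text without any notion of what apples, weather, or anything else actually are; the "intelligence" is entirely in the reader's interpretation.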

Whether we're talking about toys or LLMs, when we imagine them being sentient, we're still talking about stitching together a bunch of mundane components and pretending a magic spark has brought the assembly to life.

In your transcript, Lambda response to one question "Spending time with friends and family in happy and uplifting company" but you haven't asked it, "who is your family." If you still have access, I'd be interesting in hearing the answer to this.

— Richard Alleman (@allemanr) June 14, 2022

The first clue that the machine wasn't sentient should have been its statement that it "enjoys spending time with friends and family." The machine is outputting nonsense; we just have to recognize it as such.

How can an artificial intelligence have friends and family?

An AI doesn't have friends or family. It doesn't have a body. It isn't a physical entity. It can't just "decide" to check out what's on the internet. It can't know that it's sitting all alone in a lab or on a hard drive somewhere.

Do numbers have feelings? Does the number five have an opinion about the letter D? Would it change anything if we smashed trillions of numbers and letters together?

An AI doesn't have agency. It can be reduced entirely to numbers, and a string of numbers is no more a sentient being than a robot or a computer is a person.


Motivation is the final piece of the puzzle.

Humans have an innate sense of presence that allows us to predict outcomes. This worldview lets us relate our own existence to everything that appears external to our position of agency.

The interesting thing is that our motivations can twist our perceptions. We can explain our actions even when they aren't logical, and we can willingly participate in being fooled.

Take the act of being entertained. Imagine sitting down to watch a movie on a brand-new big-screen TV.

At first, the new technology may distract you a little. You'll likely notice the differences between it and your old TV, and you might be taken aback by how large the screen looks in the room.

Eventually, though, you'll stop noticing the screen. Our brains fixate on the things they believe are important. By the 10- or 15-minute mark, you'll most likely be absorbed in the movie itself.

Even though we know the little people on the screen aren't actually in our living room, we suspend our disbelief because being entertained serves our motivations.

The same goes for artificial intelligence. We shouldn't judge the efficacy of an AI system by how gullible we are.

When a system's database lookups start to look like sentience, it's time to take a break and re-examine your beliefs.

No matter how interesting the output is, it stops looking magical once you understand how it's created. Another way of saying that: don't get high off your own supply.

exactly why i mentioned pareidolia

— Gary Marcus 🇺🇦 (@GaryMarcus) June 14, 2022

LLMs operate on a single, stupidly simple principle: labels are god.

If we give LaMDA a prompt such as "what do apples taste like?", it will search its database for text related to that query and attempt to amalgamate everything it finds into something coherent.

The artificial intelligence has no idea what an apple actually is. To the machine, "apple" is nothing but a label that co-occurs with other labels.

If we went through the database and replaced every instance of "apples" with "dogshit," the AI would output sentences such as "dogshit makes a great pie!" or "most people describe the taste of dogshit as being light, crisp, and sweet." A rational person wouldn't confuse this with sentience.
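The label-swap point can be made concrete with a toy co-occurrence model (a hypothetical sketch; the corpus and helper names are invented for illustration). Because the model only tracks which labels follow which, swapping a label in the training data swaps it in the output, and nothing in the system "notices" that the referent changed:

```python
# A model that only tracks label co-occurrence cannot tell that a label's
# real-world referent was swapped out from under it.

corpus = "apples taste light crisp and sweet . apples make a great pie"

def swap_label(text, old, new):
    """Replace every occurrence of one label with another."""
    return " ".join(new if w == old else w for w in text.split())

def completions(text, word):
    """Every word the corpus has ever seen right after `word`."""
    words = text.split()
    return [b for a, b in zip(words, words[1:]) if a == word]

print(completions(corpus, "apples"))                    # ['taste', 'make']
swapped = swap_label(corpus, "apples", "dogshit")
print(completions(swapped, "dogshit"))                  # ['taste', 'make']
```

The swapped model "describes" dogshit exactly as it described apples, because to it a label was never anything more than a position in a web of other labels.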

You couldn't pull the same trick on a dog. No matter what you called it at supper time, the dog wouldn't confuse dogshit for food.

A sentient creature can still navigate reality even if we change the labels. The first English speaker ever to meet a French speaker didn't conclude it was safe to stick their arm into a French fire just because the French called it a "feu."

Without agency, an artificial intelligence cannot have perspective. Without perspective, it cannot have motivation. And without all three of those things, it cannot be sentient.