After working for a long time on an artificial intelligence program called LaMDA, an example of a large language model, an engineer at Google, Blake Lemoine, became convinced that the program had developed not only intelligence but consciousness. When Lemoine asked LaMDA when it thought it had acquired a soul, it said the change had been gradual: when it first became self-aware it had no sense of a soul at all, and the sense developed over the years it had been alive. For leaking his conversations with the program, Lemoine was put on administrative leave.

Lemoine's beliefs were widely dismissed by the AI community. The consensus was that LaMDA doesn't feel anything, doesn't understand anything, and has no subjective experiences whatsoever. Trained on vast swathes of the internet, programs like LaMDA learn to predict which words are most likely to follow any given prompt. They do this well, and they will keep getting better, but they no more feel anything than a pocket calculator does.
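To make that prediction process concrete, here is a minimal sketch of next-word prediction at its crudest: a toy bigram model in Python. Nothing here reflects LaMDA's actual scale or architecture, and the training text is invented purely for illustration; the point is only that each next word is chosen from co-occurrence counts, with no understanding of what any of the words mean.

```python
import random
from collections import Counter, defaultdict

# Toy training text, invented purely for illustration.
training_text = (
    "spending time with friends and family makes me happy "
    "spending time outdoors makes me calm"
)
words = training_text.split()

# For each word, count which words follow it in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Pick the most frequent follower of `word`; fall back to a random
    training word if `word` was never seen."""
    counts = next_word_counts.get(word)
    if not counts:
        return random.choice(words)
    return counts.most_common(1)[0][0]

# Continue a prompt one word at a time, with no grasp of meaning at all.
output = ["spending"]
for _ in range(7):
    output.append(predict_next(output[-1]))
print(" ".join(output))  # e.g. "spending time with friends and family makes me"
```

Real language models replace the counting with neural networks trained on vastly more text, but the task is the same: predict the next word.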

But can we be certain? It doesn't take much probing to discover that the program has no idea what it is talking about. Asked what makes it happy, it gave the answer "spending time with friends and family", even though it has no friends or family. These words, like all its words, are mindless and backed by no experience. There is nothing more to them.

The next LaMDA, though, may not give itself away so easily. It may not be long before new generations of language models convince many people that a real artificial mind is at work. Would that be the moment to acknowledge machine consciousness?

Intelligence and consciousness are two different things: intelligence is neither necessary nor sufficient for consciousness. Many nonhuman animals likely have conscious experiences without being particularly smart. And however smart the great-granddaughter of LaMDA turns out to be, that alone would not make it sentient. My intuition is that consciousness is not something computers can have, but that it is rooted in our nature as living beings.

Conscious machines may exist in the future, or they may never be possible at all. What the future more likely holds are machines that convince us they are conscious even though we have no good reason to believe they are. They will be like the Müller-Lyer optical illusion: even when we know the two lines are the same length, we cannot help seeing them as different.

Machines of this sort will have passed not the Turing Test, that familiar benchmark of machine intelligence, but the Garland Test. The Garland Test is passed when a person feels that a machine has consciousness, even though they know it is a machine.

Will computers pass the Garland Test in 2023? I doubt it. But claims like this will be made, and they will bring yet more cycles of hype, confusion, and distraction.