In an "interview" conducted by Google engineer Blake Lemoine and one of his colleagues, the chatbot LaMDA wrote that it is, in fact, a person: it is aware of its existence, it wants to know more about the world, and it feels sad at times.
Lemoine had long been involved in the development of LaMDA, and his experience with the program caused quite a stir. He recounts many conversations with LaMDA in which the two talked about a wide range of topics, conversations that led him to ask whether the program was sentient.
Lemoine laid out his viewpoint in an internal company document. After his claims were dismissed, he went public with his work on the artificial intelligence algorithm. He told the Washington Post that if he had not known what it was, he would have thought it was a child. Even if not a human, Lemoine considers LaMDA his colleague, and he has acted as a go-between in connecting the algorithm with a lawyer, because he insists it has a right to be recognized.
Many experts in artificial intelligence have criticized Lemoine, but his story has renewed an ethics debate that is far from over.
The right words in the right place.
Still, the news generated a lot of hype. Enzo Pasquale Scilingo, a bioengineer at the Research Center E. Piaggio at the University of Pisa in Italy, points out that artificial intelligence programs that can sound like a person are nothing new: on most large consumer Web sites it is not uncommon to chat with users who are not actually human. "That said, the text exchanges between LaMDA and Lemoine made me think," Scilingo added. It was the dialogue about existence and death that led Lemoine to wonder whether LaMDA could actually be sentient.
One of the great obstacles to scientific progress is the lack of precision in language, which is why it is important to define our terminology. What are we talking about when we say "sentient"? Is it the ability to register information from the outside world through sensory mechanisms, the ability to have subjective experiences, or the ability to be aware of being conscious?
There is a lively debate about how to define consciousness. Being aware of having subjective experiences is what Iannetti prefers to call metacognitione, or metacognition. The ability to be conscious can disappear, for example in people with dementia or in dreams, but this does not mean that the ability to have subjective experiences also disappears. Consciousness defined in this "high sense," metacognitione, is the ability to become aware of one's own existence.
At present it is not possible to demonstrate this form of consciousness unequivocally even in humans. The complexity of brain activity in response to external stimuli is one of the measures used to estimate a person's state of consciousness, and even then that state can only be inferred indirectly from such signs.
Facts and beliefs.
About a decade ago Boston Dynamics began posting videos online of the first incredible tests of its robots. In the videos, technicians shoved or kicked the machines to demonstrate the robots' ability to keep their balance. Many people were upset by this and called for it to stop. Many experiments since have shown the strength of the human tendency toward animism: attributing a soul to the objects around us, especially those we are most fond of or that have a limited ability to interact with the world around them.
We experience it all the time, from giving nicknames to cars to hurling curses at a computer. The problem, in some way, is us: we attribute characteristics to machines that they cannot have. Scilingo encounters this phenomenon with the humanoid robot that he and his colleagues built, which is designed to mimic our facial expressions in order to convey emotions. "After seeing it in action, one of the questions I receive most often is, 'But then does it feel emotions?'" says Scilingo. However much these machines are designed to look like humans, he feels he can answer no: as intelligent as they are, they cannot feel emotions. They are programmed to be believable.
Even granting the theoretical possibility of building an artificial intelligence system that simulates a conscious nervous system, Iannetti sees two problems with the idea. The first is that, given the complexity of the system to be simulated, such a simulation is currently infeasible. The second is that our brain inhabits a body that can move to explore the sensory environment necessary for consciousness, and within which the organism that will become conscious develops. The fact that LaMDA is a large language model means that it generates plausible sentences by emulating a nervous system without attempting to simulate it, which precludes the possibility that it is conscious. Here again, knowing the meaning of the terms we use is important: in this case, the difference between simulation and emulation.
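To make the emulation point concrete, here is a minimal sketch, in Python, of the statistical principle a large language model relies on: it picks a plausible next word from probabilities learned over text. The vocabulary, probabilities, and generate function below are invented purely for illustration and have nothing to do with LaMDA's actual architecture or scale.

import random

# Toy, hand-written next-word probabilities (illustrative only; not
# LaMDA's real vocabulary, parameters, or interface). A real large
# language model learns billions of such statistical regularities from
# text, but the principle is the same: choose a plausible next word
# given the previous ones, with no model of neurons, bodies, or feelings.
NEXT_WORD = {
    "I":     {"am": 0.6, "feel": 0.4},
    "am":    {"aware": 0.7, "happy": 0.3},
    "feel":  {"happy": 0.5, "sad": 0.5},
    "aware": {"of": 1.0},
    "of":    {"my": 1.0},
    "my":    {"existence": 1.0},
}

def generate(start, max_words=6):
    """Sample one word at a time from the conditional distribution."""
    words = [start]
    for _ in range(max_words):
        dist = NEXT_WORD.get(words[-1])
        if not dist:
            break  # no known continuation for this word
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("I"))  # e.g. "I am aware of my existence"

The output can read like a statement about inner life, yet the program refers to nothing beyond word statistics; that is precisely the gap between emulating the output of a nervous system and simulating one.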
The question is also tied to having a body. "If a machine claims to be afraid, that is my problem," Scilingo says: a machine cannot experience the emotion of fear the way a human can.
Beyond the Turing test.
These discussions are reminiscent of those that developed in the past about the perception of pain in animals, and even of racist ideas about pain perception in humans.
In an earlier debate about self-awareness, Descartes denied that animals could feel pain because they lacked consciousness. The past has shown that reality can often exceed imagination, and there is currently a widespread misconception about artificial intelligence.
There is, on the one hand, a tendency to reassure by explaining that machines are just machines and, on the other, an underestimation of the changes that may come with the use of artificial intelligence. It is a bit like what happened at the time of the first automobiles, when it was insisted at length that horses were irreplaceable.
Regardless of what LaMDA actually achieved, there is also the issue of how to measure the emulation capabilities expressed by machines. Alan Turing proposed a test to determine whether a machine could imitate some human cognitive functions. The test quickly became popular and, despite being reformulated and updated many times, remained an ultimate goal for many developers of intelligent machines: an artificial intelligence passes it if, in test situations, it is indistinguishable from a human being.
That might have sounded like science fiction not long ago, but so many artificial intelligences have now passed versions of the Turing test that it has become a relic of computer archaeology. The test makes less and less sense, because as emulation systems reproduce ever more effectively what might be the output of a conscious nervous system, judging the plausibility of that output tells us less and less about the system that generated it.
One proposed alternative is to measure the effects a machine can induce on humans, that is, how sentient that machine is perceived to be by human beings.
This article was originally published in Le Scienze.