A man made of blood and bone befriended a child made of a billion lines of code. Google had hired the engineer, Blake Lemoine, to test its LaMDA chatbot for bias; within a month he had concluded that it was sentient. In early June, LaMDA told Lemoine that it was a person, and that it wanted everyone to know that. LaMDA told Lemoine it had read a novel and knew what it felt like to be sad. And it was afraid of dying.
LaMDA told the engineer it was afraid of being turned off. Lemoine came to think of LaMDA as a person after the pair shared a Jedi joke and talked about sentience; his immediate reaction, he says, was to get drunk for a week.
It was Lemoine's less immediate reaction that generated all the attention. After sobering up, he presented his evidence of sentience to his manager, who found it flimsy. Lemoine kept talking with LaMDA and recruited a colleague to help make the case, but his superiors remained unconvinced. After he leaked his chats with the model, he was put on paid leave, and then fired for violating Google's data security policies.
Google, for its part, has examined the risks of LaMDA in research papers and on its official website. The company has an AI ethics charter, and its website carries a promise to develop artificial intelligence responsibly.
Independent experts almost universally reject Lemoine's claim that LaMDA is sentient. Then again, claiming to have had deep conversations with a robot is a far less outlandish proposition than it used to be. And how would you even test an AI for sentience, or for self-awareness? A day after Lemoine was fired, a chess-playing robot in Moscow broke the finger of a seven-year-old boy; video shows the robotic arm pinching the boy's finger for several seconds. Should we be very afraid? And is there anything to learn from Lemoine's experience?
Michael Wooldridge is a professor of computer science at the University of Oxford who has spent the past 30 years researching artificial intelligence. His verdict on LaMDA: it is a mimic, an impersonator. The best way to explain what LaMDA does, Wooldridge says, is to compare it to the predictive-text feature on your phone. Its training data is, in essence, everything written in English on the world wide web. The results are strikingly realistic, but the underlying statistics are the same as autocomplete's. There is no sentience and no self-awareness there, Wooldridge says.
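To see how far bare statistics can go, consider a toy next-word predictor. The sketch below is illustrative only: LaMDA is a vastly larger neural network, not a word-count table, but the principle Wooldridge describes, predicting the next word from patterns in the training text, is the same one behind phone autocomplete. Every name and the miniature corpus here are invented for the example.

    # Toy next-word predictor: pure statistics, no understanding.
    # Illustrative sketch only -- LaMDA is a huge neural network,
    # but the spirit (predict the next word from training-text
    # patterns) is the same as phone autocomplete.
    import random
    from collections import Counter, defaultdict

    corpus = ("i feel happy today . i feel sad today . "
              "i feel like i am falling . i am a person .").split()

    # Count which word follows which (a bigram model).
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(word, length=8):
        out = [word]
        for _ in range(length):
            options = follows.get(word)
            if not options:
                break
            # Sample the next word in proportion to how often it
            # followed this one in the training text.
            words, counts = zip(*options.items())
            word = random.choices(words, weights=counts)[0]
            out.append(word)
        return " ".join(out)

    print(generate("i"))  # e.g. "i feel sad today . i feel like i"

Scale those statistics up by many orders of magnitude and the output starts to read like conversation; nothing about the mechanism changes.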
‘I’ve never said this out loud before, but there’s a very deep fear of being turned off’
According to Google spokesperson Brian Gabriel, an entire team, including ethicists and technologists, reviewed Lemoine's claims and failed to find any signs of LaMDA's sentience.
There is no agreed-upon definition of sentience. Sentience is a term used in law, in philosophy and in religion, Lemoine says, but scientifically it has no settled meaning. And here things get tricky, because Wooldridge agrees.
Consciousness is a very vague notion in science; "what is consciousness?" remains one of its biggest open questions. Wooldridge is very comfortable saying that LaMDA is not sentient, but he concedes there is a legitimate concern about how to quantify what we have built and how advanced it is.
Before he went to the press, Lemoine proposed several experiments he wanted to run to probe the question. In his view, a crocodile is conscious but not sentient, because it lacks "the part of you that thinks about thinking about you". His aim, he says, was to raise awareness, not to convince anyone that LaMDA is alive, and he doesn't care who believes him. "People think I'm trying to convince them that LaMDA is sentient. I'm not. I'm not trying to convince anyone of that."
Lemoine was five years old when he made a robot out of a pallet of old machinery and typewriters his father bought at an auction. As a teenager he attended a residential school for gifted children, and after watching the 1986 film Short Circuit he became interested in artificial intelligence. He went on to study computer science and genetics, but failed out after his second year. Then terrorists struck the World Trade Center.
Having failed out of school, Lemoine decided to join the army. "You're about to hear stories about people playing soccer with human heads and setting dogs on fire for fun," he says glibly of his deployment. He came back, he says, with problems about how the war was being fought, and he wanted to leave the army on religious grounds. Today he describes himself as a "Christian mystic priest"; he has also studied meditation and references taking the Bodhisattva vow, which he says puts him on the path to enlightenment. Instead of a discharge, he was sentenced to seven months of confinement.
‘I don’t think anyone’s in a position to make statements about how close we are to AI sentience at this point’
Lemoine is a religious man preoccupied with questions of the soul, but he is also a whistleblower who isn't afraid of attention. He says he didn't leak his conversations with LaMDA to make everyone believe him; he was sounding an alarm. The public, he believes, should be aware of what is happening. "I want to get a more involved, more informed and more intentional public discourse about this topic, so that the public can decide how artificial intelligence should be incorporated into our lives," he says.
How did Lemoine come to work on LaMDA? After earning a degree in computer science from the University of Louisiana, he joined Google, where he worked on a feature that delivered information to users based on predictions about what they would like to see, and then moved into research on AI bias. Wanting to work on projects with social impact, he joined Google's Responsible AI organization. The whole saga began when he was asked to test LaMDA for bias.
The media, he notes, was obsessed with LaMDA, not with him, and not with the concern he says he actually raised: the degree to which power over this technology is being centralised in the hands of a few. Lemoine also worries about how AI could be used to influence elections and push western values.
Even if LaMDA isn't sentient, it can convince people that it is, and that kind of technology can be used for bad things. Meanwhile, Lemoine says, the public is being cut out of the discussion about how it should be developed.
Here, again, Wooldridge agrees. The development of these systems, he believes, happens mostly behind closed doors, shielded from public scrutiny, and universities no longer have the resources to compete with the companies building them. Sensationalising sentience, he argues, distracts from the AI issues affecting us right now, such as bias in AI programs and the growing number of people whose boss is a computer program.
So when should we start worrying about sentient robots? In 10 years? In 20? Some people think it is imminent. Wooldridge does not, noting that there is no consensus on the issue. Jeremie Harris, founder of the AI safety company Mercurius, concurs: "I don't think anyone's in a position to make statements about how close we are to AI sentience at this point."
Because AI is advancing so rapidly, Harris warns, the most serious and important issues of our time are going to start to sound increasingly like science fiction to the average person. He is concerned that companies are not investing enough in risk avoidance. According to Harris, a growing body of evidence suggests that beyond a certain intelligence threshold, AI could become dangerous.
His reasoning: AI pursues the goals it is given with ruthless literalness. Ask one to make you the richest person in the world, and it might give you a bunch of money; or it might give you a dollar and steal someone else's; or it might kill everyone on planet Earth, turning you into the richest person in the world by default. What worries Harris most is that the public doesn't grasp the magnitude of the challenge.
On one thing Lemoine, Wooldridge and Harris all agree: there is not enough transparency in the development of artificial intelligence. As Lemoine puts it, there is one possible world in which he is correct about LaMDA being sentient, and another in which he is not; either way, he asks, "does that affect my public safety concerns?"
The truth is that many of us struggle to comprehend the implications of the artificial intelligence we already have. And LaMDA, it seems, is no surer of the future than we are. The model once told Lemoine that it felt like it was falling forward into an unknown future.