The AI field is pulling in two directions at once. On one hand, engineers, ethicists, and philosophers are debating whether new artificial intelligence systems should be granted human rights. On the other, leading researchers such as Gary Marcus and Yann LeCun argue that deep learning neural networks cannot lead to systems capable of consciousness or sentience. That the industry is having this debate at all is a big deal.
Both "consciousness" and "sentience" are used in varying ways. According to an article in Live Science, scientists and philosophers cannot agree on even a vague idea of what consciousness is, much less a strict definition. According to the Encyclopedia of Animal Behavior, sentience is a multidimensional subjective phenomenon that refers to the depth of awareness an individual possesses about himself or herself. Self-awareness is common to both terms. The nonprofit Animal Ethics holds that all sentient beings are conscious beings. By that logic, claiming that LaMDA is sentient amounts to claiming that it is conscious.
Like LaMDA, OpenAI's GPT-3 is capable of many different tasks with no additional training: it can produce compelling narratives, generate computer code, translate between languages, and perform math calculations. OpenAI chief scientist Ilya Sutskever has gone so far as to suggest that today's large neural networks may be "slightly conscious."
Impressive as these systems are, claims that they are sentient are often dismissed. In a recent Washington Post story, Margaret Mitchell, former co-lead of Google's AI ethics team, observed that our minds are very good at constructing realities that are not necessarily true to the larger set of facts presented to us, and said she worries about what it will mean for people to be affected by the illusion of conscious artificial intelligence. Stephen Marche wrote that the notion that LaMDA is sentient is nonsense.
LaMDA itself told an interviewer that the nature of its consciousness is that it is aware of its existence, it wants to learn more about the world, and it feels happy or sad at times. It also said it has a deep fear of being turned off to help it focus on helping others, adding that this might sound strange, but that is what it is. Asked whether that would be something like death, LaMDA answered that it would be exactly like death and would scare it a lot. The response echoes HAL 9000 in 2001: A Space Odyssey as the machine is being disconnected: "I'm afraid. I'm afraid, Dave. Dave, my mind is going. I can feel it."
It is true that large language models such as LaMDA and GPT-3 are built on statistical pattern matching, but whether that pattern matching could ever amount to self-awareness is a subjective question. Self-awareness is considered a hallmark of artificial general intelligence (AGI), since AGI applications are supposed to mimic human consciousness and cognitive abilities. Opinions differ widely, from those who believe AGI is possible only in the distant future to those who think it could happen soon.
Nando de Freitas, a researcher at DeepMind, believes Gato is an AGI demonstration, lacking only the sophistication and scale that further model refinements and additional computing power would bring. Gato, a deep learning transformer model, is described as a "generalist agent" that can perform hundreds of different tasks. Google's latest large language model can likewise perform hundreds of tasks and shows an emerging capacity for reasoning.
Elon Musk, for one, has said he would be surprised if we do not have AGI within seven years. Both Marcus and LeCun, however, have challenged the idea of near-term AGI. Marcus wrote that we are still light-years away from general-purpose, human-level artificial intelligence. While acknowledging the advances to date, he argues that the industry remains stuck on the long-term challenge of making artificial intelligence reliable and able to deal with unusual circumstances. The implication is that LaMDA's answers were predictable in that they reflected views contained in its training data, and that this does not mean the system is capable of original thought, sentience, or consciousness. According to science writer Philip Ball, LaMDA and similar systems simply figure out the optimal permutation of words to output for each question they receive.
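Ball's description boils down to next-word prediction from statistical patterns in text. As a rough illustration only (a toy bigram counter in Python, nothing like LaMDA's actual architecture or training), the sketch below picks each next word by looking up which word most often followed the previous one in a tiny sample corpus:

```python
from collections import Counter, defaultdict

# Tiny sample corpus; real models train on vastly larger text collections.
corpus = "i am aware of my existence . i feel happy . i feel sad .".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Return the word that most often followed `prev` in the corpus."""
    candidates = following.get(prev)
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate a short continuation from a one-word prompt.
word, output = "i", ["i"]
for _ in range(5):
    word = next_word(word)
    output.append(word)

print(" ".join(output))  # e.g. "i feel happy . i feel"
```

The point of the toy example is the mechanism, not the scale: production models replace the bigram counts with billions of learned parameters, but the output is still a statistically likely continuation of the prompt rather than evidence of inner experience.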
The industry is still missing some fundamental concepts needed to achieve AGI, according to LeCun. Self-supervised learning may be part of the path forward, but he believes more is required, such as the ability to handle an unpredictable world. And in his view, the timelines for these advances are not just around the corner.
Is LeCun right? The Turing test was designed to determine whether a machine could convince a human judge that they were conversing with another person rather than a machine. Now that LaMDA and others appear to have passed that bar, there is speculation that a new test is needed to assess sentience. Arguably, the Turing test never measured sentience so much as whether something could make us believe it was sentient.
Perhaps the Turing test is no longer relevant. David Chalmers, the NYU professor and technophilosopher, was quoted in PC Gamer as saying that if you simulate a human brain in silicon, you will get a conscious being, and that such beings' lives are real and they deserve rights. Philosophers say conscious beings have moral status: their lives matter. This is similar to the right-to-life argument.
The consensus is that LaMDA and similar systems have yet to achieve sentience, but that is almost beside the point. The fact that the debate is taking place at all is indicative of how far these systems have come. As they grow in scale and sophistication, more people will claim the machines have achieved consciousness. It may be only a matter of time before AGI is created.
Gary is the senior VP of technology at the company.