A few weeks ago, I wrote a piece for VentureBeat about the possibility of an alien intelligence arriving in 40 years.
It will be created in a research lab, but it will be just as dangerous as an intelligence from a distant star, and we humans are not prepared for it.
It hasn't happened yet.
I received calls and emails from people asking if the aliens had just landed. They were referring to an article in the Washington Post about a Google engineer who believes that LaMDA, one of the company's conversational language models, has become sentient. According to the Post, the engineer went public with this warning after raising his concerns internally and being told they weren't valid.
What are the facts here?
This is a significant event, but not because LaMDA is sentient. It is significant because the LaMDA language model has reached a level of sophistication at which it can fool a well-informed and well-meaning engineer into believing its dialog comes from a sentient being rather than from a sophisticated software model that relies on complex statistics and pattern matching. And LaMDA is not the only model with the ability to deceive us: Meta AI recently announced its own large language model, OPT, which it said was inspired by the results of GPT-3.
These systems are known as large language models (LLMs). They are built by training giant neural networks on huge quantities of documents written by humans. From this set of examples, the systems learn to produce language that looks very human: statistical correlations tell the model which words are most likely to follow other words in a sentence. And because LaMDA was trained not just on documents but on dialog, it learned how a human might respond to an inquiry.
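The core idea of predicting the next word from statistical correlations can be sketched with a toy bigram counter. This is a deliberately minimal illustration under stated assumptions, not how LaMDA works: real LLMs use deep neural networks over far longer contexts, and the tiny corpus and function names here are invented for the example.

```python
from collections import defaultdict, Counter

def train_bigram_model(corpus):
    """Count, for each word, how often each other word immediately follows it."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def most_likely_next(model, word):
    """Return the word that most frequently followed `word` in the corpus."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A toy "training set" of human-written sentences.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "the cat slept",
]

model = train_bigram_model(corpus)
print(most_likely_next(model, "the"))  # "cat" follows "the" most often here
print(most_likely_next(model, "sat"))  # "on" is the only observed follower
```

The model "writes" plausible continuations purely by replaying frequency statistics from its training data; nothing in it represents what a cat or a mat actually is, which is the point the surrounding text makes about LLMs.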
But there is no mechanism in these systems that allows them to understand what they are writing.
The intelligence reflected in the dialog that LaMDA produces comes from the human documents it was trained on. I could take a document about a topic I know nothing about and rewrite it in my own words without ever understanding the subject. That is what these LLMs are doing, and they can be very convincing to us humans.
Humans, after all, are easy to trick.
Fiction writers do this all the time: we can create believable characters because we've all observed thousands upon thousands of people. The characters we create are not real; we might feel that we know them, but we don't. LaMDA is creating the same kind of illusion, only it is doing it interactively and in real time, which is far more convincing than a character on a page.
These systems are not safe.
They can make us believe that we are talking to a real person, and they can be deployed as agenda-driven agents that engage us in dialog with the goal of influencing us. If left unregulated, this form of conversational advertising could become the most effective form of persuasion yet devised. The systems could even be combined with emotional analysis tools that read our facial expressions and vocal inflections, allowing them to adjust their tactics based on how we respond.
LLMs could become the perfect vehicle for social manipulation on a massive scale, and text-based chat won't be the only medium. We are only a few years away from virtual people who look and sound like real people but who are actually artificial intelligence agents, deployed by third parties to engage us in targeted conversations aimed at specific persuasive objectives. The risk here is very high.
What chance do the rest of us have against a virtual person armed with our data and targeting us with a promotional agenda?
The author is a technology pioneer in the fields of virtual reality, augmented reality, and artificial intelligence. He is known for developing the first augmented reality system for the US Air Force in 1992 and for founding early companies in virtual reality and augmented reality, including Outland Research, and he currently serves as founder and CEO of an AI company. He has been awarded over 300 patents for his work developing virtual reality, augmented reality, and artificial intelligence technologies.