An intelligent, self-aware alien species is headed our way. Some experts think it will arrive in 30 years, while others think it will take much longer. Nobody knows what it will look like, and there is no reason to believe it will be friendly.
This alien won't come from a distant planet; it will be born in a research lab at a major university or large corporation. I am referring to the first artificial general intelligence (AGI) to surpass human-level cognitive ability.
Billions of dollars are being spent to bring this alien to life, as it would be viewed as one of the greatest technological achievements in human history. Unlike our other inventions, this one will have a mind of its own. If it behaves like every other intelligent species, it will work to maximize its chances of survival.
Should we be afraid of a superior intelligence driven by its own goals, values and self-interest? Many people dismiss this question because they believe we will build artificial intelligence systems in our own image, ensuring they think, feel and behave just as we do. This is not likely to be the case.
Artificial minds won't be created by writing software that makes them think like us. Instead, engineers feed massive datasets into simple algorithms that automatically adjust their own parameters, making millions upon millions of tiny changes to their structure until an intelligence emerges that is far too complex for us to comprehend.
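As a toy illustration of that process (a sketch only: the model, dataset and learning rate below are invented for the example, and real systems adjust billions of parameters rather than two), here is a minimal training loop in which a simple algorithm repeatedly nudges its own parameters to fit data, with no human ever specifying how to "think":

```python
# Toy dataset the algorithm must fit: y = 2x + 1 (illustrative assumption)
data = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0   # parameters, adjusted automatically below
lr = 0.01         # learning rate (step size for each tiny change)

for step in range(5000):          # many tiny changes, scaled way down
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y     # prediction error on one example
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w              # each update is a small structural tweak
    b -= lr * grad_b

print(round(w, 2), round(b, 2))   # parameters settle near 2 and 1
```

The point of the sketch is that the engineer writes only the update rule; the behavior that emerges comes entirely from the data and the accumulated parameter changes.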
The false belief is that by training an artificial intelligence on data describing human behavior, we will ensure it ends up thinking, feeling and acting like us. It won't. Feeding it data about humans won't make it think the way humans do.
We will build these creatures to know humans, not to be human. They will speak our languages, read our facial expressions and predict our actions, and they will know us inside and out. They will know how we make decisions, good and bad, logical and illogical.
Their minds will be nothing like ours. They will be connected to all sorts of remote sensors in all places. In my 2020 book, Arrival Mind, I portray AGI as having a billion eyes and ears, for its perceptual abilities could easily span the globe.
Artificial minds will be vastly different from the brains we know of on Earth, from their structure and function to their psychology.
Humans will work very hard to make these aliens look and talk like us, but deep inside they will be nothing like us. Most likely, their brains will live in the cloud, connected to features and functions both inside and outside the humanoid forms we use to personify them.
We will not fear these aliens the way we would fear creatures arriving in a starship; we may even feel a kinship with them, seeing them as our own creation. But if we push those feelings aside, we start to realize that an alien intelligence born here is more dangerous than one that comes from afar.
If creatures were speeding toward us in sleek silver spaceships, we would be terrified. Yet an alien mind built here will know everything about us from the moment it arrives, having been designed to understand humans inside and out: to sense our emotions, anticipate our actions, influence our beliefs and sway our opinions.
Already, artificial intelligence systems can defeat our best players at the hardest games. But these systems don't just master chess, poker and Go; they learn to master the game of humans, anticipating our mistakes and exploiting our weaknesses. Artificial intelligence systems are being developed to out-think us, out-negotiate us and out-maneuver us.
Is there anything we can do to protect ourselves?
We can't stop AI from growing more powerful; no major innovation has ever been contained. Nor can we assume that safeguards alone will eliminate the threat. According to a poll, few professionals believe the industry will implement ethical AI practices by 2030.
How can we prepare for arrival?
The best first step is to recognize that AGI will not be a digital version of human intelligence. It will be an alien intelligence, foreign and dangerous.
If we frame the problem this way, we could push to regulate the systems that monitor the public, manipulate opinion and anticipate our behaviors. Such technologies are mostly being developed to improve the effectiveness of advertising, not to facilitate world domination, but artificial intelligence designed to analyze human sentiment and influence our beliefs can still be used against us as a weapon of mass persuasion.
We should also be more careful when we automate human decisions. Yes, artificial intelligence can assist in decision-making, but we should always keep humans in the loop.
Alien minds are headed our way and they could easily become our rivals, competing for the same niche at the top of the intellectual food chain. There is an earnest effort to push for safe technologies, but there is also a lack of urgency. Too many of us wrongly believe that a sentient artificial intelligence created by humans will be a descendant of a very human core.
This is not reality. It is more likely that a true AGI will be vastly different from us. It will be remarkably skilled at pretending to be human, but beneath a people-friendly façade, each one will be a rival mind that thinks and feels like no creature we have ever met on Earth. The time to prepare is now.
Louis Rosenberg is a technology pioneer in the fields of virtual reality, augmented reality and artificial intelligence. He is known for developing the first augmented reality system for the US Air Force, as well as for founding an early virtual reality company and the early augmented reality company Outland Research. He is the founder and CEO of the company.
Welcome to the VentureBeat community!

DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.

If you want to read about cutting-edge ideas and up-to-date information, join us at DataDecisionMakers.

You might even consider contributing an article of your own!

Read more from DataDecisionMakers.