The idea of artificial intelligence overthrowing humankind has been talked about for decades, and in 2021 scientists delivered their verdict on whether we would be able to control a high-level computer super-intelligence. The answer? Almost certainly not.
The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze and control. But if we're unable to comprehend it, it's impossible to create such a simulation in the first place.
Rules such as 'cause no harm to humans' can't be set if we don't understand the kinds of scenarios an artificial intelligence is going to come up with, the researchers argue. Once a computer system is working on a level above the scope of its programmers, we can no longer set limits.
The researchers said that a super-intelligence poses a fundamentally different problem from those typically studied under the banner of 'robot ethics'.
"This is because a superintelligence can mobilize a variety of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable."
Part of the team's reasoning comes from the halting problem, put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one.
As Turing proved, while we can know the answer for some specific programs, it's logically impossible to find a general method that tells us the answer for every program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once.
Any program written to stop the artificial intelligence from harming humans, for example, may reach a conclusion (and halt) or not. It's mathematically impossible for us to be absolutely sure either way, which means the AI is not containable.
In effect, this makes any containment algorithm unusable, explained Iyad Rahwan, a computer scientist from the Max Planck Institute for Human Development.
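To make the logic concrete, here is a minimal, purely illustrative sketch (not code from the study) of the contradiction at the heart of the halting problem. It assumes a hypothetical halts() oracle and a deliberately perverse paradox() program to show why no general checker can exist; a containment algorithm that must decide whether an AI's behaviour is harmful runs into the same wall.

```python
# Purely illustrative sketch of Turing's argument; "halts" is a hypothetical
# oracle that does not (and cannot) exist in general.

def halts(program, data):
    """Pretend oracle: return True if program(data) would eventually halt."""
    raise NotImplementedError("Turing proved no general algorithm can do this.")

def paradox(program):
    """Do the opposite of whatever the oracle predicts about us."""
    if halts(program, program):
        while True:      # oracle says we halt, so loop forever instead
            pass
    else:
        return           # oracle says we loop forever, so halt immediately

# Asking halts(paradox, paradox) leaves the oracle with no consistent answer:
# any prediction it makes is contradicted by paradox's behaviour. The study's
# containment argument follows the same pattern, with "will this AI cause
# harm?" in place of "will this program halt?".
```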
The alternative to teaching the AI some ethics and telling it not to destroy the world, something no algorithm can be absolutely certain of doing, the researchers say, is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for example.
The study rejects this option too: if we're not going to use the AI to solve problems beyond the scope of humans, then why create it at all?
If we push ahead with artificial intelligence, we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking serious questions about the direction we're heading.
A machine that controls the world sounds like a science fiction novel, the researchers at the Max Planck Institute for Human Development noted, but there are already machines that perform certain important tasks on their own.
The question, then, is whether this could at some point become uncontrollable and dangerous for humans.
The research was published in the Journal of Artificial Intelligence Research.
The first version of this article was published in January 2021.