Machines as smart as, or smarter than, humans may sound like nothing more than a science fiction premise, but scientists who study artificial intelligence warn that it is a real possibility, and some say it is inevitable.
A majority of researchers surveyed think there is at least a 50% chance that people alive today will see the arrival of artificial general intelligence (AGI). Some think it will happen within the next decade.
When artificial intelligence reaches human-level intelligence, it will mark a profound change in the world: such a system could create still more advanced artificial intelligence, which might then become difficult to control.
In 1997, a computer program called Deep Blue beat the reigning world chess champion, Garry Kasparov, at his own game. Since then, the technology has been used to drive cars, diagnose cancer, and assist with surgery. It can even troll you on social media, and it helps many of us find our way home.
These are all examples of narrow artificial intelligence, which is programmed to perform a specific task. A program that can beat a Go master can't drive a car, and an AI that can spot a tumor can't translate Arabic into French. Narrow AI is often better than humans at the one thing it is trained to do, but it isn't up to speed on everything people can do. Unlike us, it can't apply its intelligence to any problem or goal that comes along.
Artificial general intelligence, by contrast, would apply a general set of knowledge and skills to a wide variety of tasks. AGI would no longer depend on human-designed software to make decisions or accomplish its goals; it could learn as it goes. And because computers can improve much faster than humans evolve, this could eventually produce an intelligence far superior to our own. That is what experts mean when they talk about AGI, and we probably aren't even close at this point.
According to a recent post by a roboticist and entrepreneur, the field of artificial intelligence is probably less advanced than most people think.
It's important to note that we still don't know how the human mind works, says a robotics engineer and research fellow at the Human Futures Institute in Toronto, who describes himself as a bit bearish on the prospect of AGI.
Others insist that AGI is going to happen unless we humans wipe ourselves out first, and that it will be difficult to avoid.
Despite the lack of consensus, many scientists, including Stephen Hawking, have warned of its potential dangers. Nick Bostrom, a University of Oxford philosopher, argues in his book Superintelligence: Paths, Dangers, Strategies that the fate of our species could depend on the actions of such a superintelligent machine.
That fate may not be a bad one. Experts say that superintelligent artificial intelligence could offer solutions to many of our problems: maybe we can use it to tackle climate change, eradicate poverty, and ensure world peace.
Max Tegmark, founder of the Future of Life Institute, says this remarkable technology has the potential to help everybody live healthy, wealthy lives so humanity can flourish like never before. But it might also wipe out humanity if its goals aren't aligned with ours.
We don't want to be taken by surprise. If someone told us that an alien invasion fleet would arrive in 30 years, we would start preparing now. It could take at least three decades to figure out how to control this technology and make sure its goals are in line with ours, according to Tegmark, who argues that we need to be ready to use it in the best interests of humanity.
But can we agree on what our goals and interests are? Tegmark is optimistic that we could agree on the basics and work together to protect ourselves from a common threat. If the danger of climate disaster isn't enough to bring humanity together, perhaps both the promise and the peril of superintelligent artificial intelligence will be.