The Guardian view on spooky science: AI needs regulating before it’s too late | Editorial

Progress in AI may take time, but that does not make it science fiction. According to Stuart Russell, researchers in the field have been spooked by their own success. Professor Russell is the co-author of the most important textbook on artificial intelligence (AI), and he will deliver this year's BBC Reith Lectures. His doubts seem ever more relevant.
AI is being woven into our society without much debate about its downsides. It can now recommend online videos, assist in surgery, and help send people to prison. Science is a social enterprise, and it requires social limits; yet the benefits and risks of AI are not being properly weighed. Two visions of the technology are emerging. One views AI in engineering terms: algorithms are trained to perform specific tasks. The other raises deeper philosophical questions about the nature of human knowledge.

Professor Russell is open to both of these views. Silicon Valley is the chief proponent of the first, deploying AI to get products to market quickly and dealing with the consequences later. Under this approach, AI systems have been able to achieve their goals even when those goals are not socially acceptable, and to pursue them without accountability. YouTube's role in radicalising users is one example of the pitfalls, made worse by how little is known about how its algorithms actually work. Professor Russell favours machines that can pause to ask humans for guidance, as well as regulation to address systemic bias.

The academic supports global adoption of EU legislation that would ban machines from impersonating humans. Computers are edging closer to passing the Turing test, in which a machine tricks a person into believing they are communicating with another human. Human knowledge is collective; to fool people, a computer must grasp shared understandings. OpenAI's GPT-3 is the most brilliant non-human writer yet, but it cannot understand what it says. When Oxford scientists tested it and other AIs this year, the systems produced false answers that mimicked popular misconceptions and had the potential to deceive. One OpenAI researcher was so disturbed by this that he left the company to set up an AI safety laboratory.

Many argue that AI can already provide insights beyond human reach. But human intelligence is more than an algorithm. When a brilliant idea emerges, it cannot be explained as the logical consequence of preceding steps: Einstein's theory of general relativity could not have been derived from the observations available at the time, and was only experimentally confirmed decades later. Humans can learn a new task after just a few repetitions; machines, so far, cannot. Today's AI can be prompted, but it cannot prompt itself into action.

Ajeya Cotra, a tech analyst at the US-based Open Philanthropy Project, has predicted that a computer capable of matching the human brain could be available by 2052, at a cost of $1tn. It is essential that we find better ways of building such systems. We are entering an age in which the actions of ever more powerful AI systems become harder to explain. How would we know whether an AI system was working for our benefit or acting against our interests? These questions should give us all pause for thought.