Prof Stuart Russell of the University of California, Berkeley, began his lecture on living with artificial intelligence with an excerpt from a paper by Alan Turing. Turing introduced many of the core ideas of what became the academic discipline of artificial intelligence, including the sensation of our own time, so-called machine learning.
In the excerpt, Turing predicted that once the machine thinking method had started, it would not take long to outstrip our feeble powers, and that we should expect the machines to take control at some point. IJ Good, one of Turing's colleagues at Bletchley Park, put it more starkly: the first ultra-intelligent machine is the last invention that man need ever make.
Russell is a world leader in the field, yet he regards its current approach to building intelligent machines as fatally flawed, because the prevailing concept of intelligence on which it rests is itself flawed.
Artificial intelligence researchers give machines specific objectives and then judge them on their success in achieving those objectives. This is probably fine in the lab. But, Russell says, when we leave the lab and move into the real world, we find that we cannot specify our objectives completely. The objectives of a self-driving car, for example, such as how to balance speed, passenger safety, sheep safety, legality, comfort and politeness, have turned out to be difficult to define.
This difficulty doesn't seem to bother the giant tech corporations that are driving the development of increasingly capable, remorseless, single-minded machines and installing them ubiquitously at critical points in human society.
Russell's worry is that if his discipline continues on its current path, it will eventually create super-intelligent machines pursuing mis-specified objectives: a nightmare scenario implicit in the philosopher Nick Bostrom's "paperclip apocalypse" thought-experiment and entertainingly mimicked in the Universal Paperclips computer game. It is also derided as implausible and alarmist by both the tech industry and some researchers. One expert in the field joked that he worried about super-intelligent machines the way he worried about overpopulation on Mars.
But we do not need to speculate about whether it is possible to live in a world dominated by super-intelligent machines, because we already do. The artificial intelligences in question are corporations. The collective IQ of the humans they employ dwarfs that of ordinary people, and often of governments. They command vast resources, and their lifespans greatly exceed those of humans. Their overriding objective is to increase shareholder value, and they will do whatever it takes to achieve it, whatever the ethical considerations.
One such machine is called Facebook. On 18 June 2016, Andrew Bosworth, one of its most senior executives, wrote an unambiguous statement of its objective: all the work done in growth is justified. The questionable contact-importing practices. The subtle language that helps people find friends. The work to bring in more communication. The work the company will likely have to do in China. All of it.
The future, as William Gibson observed, is already here; it's just not evenly distributed.
What I've been reading:
There Is No "Them" is an entertaining online rant by Antonio García Martínez against the "othering" of west coast tech billionaires by US east coast elites.
In the Boston Review, Henry Farrell and Glen Weyl review technology and the fate of democracy.
Tim Harford wrote a post about parking tickets and corruption.