After an apple fell on his head, Isaac Newton apocryphally discovered his second law. Through experimentation and data analysis, he realised there was a relationship between force, mass and acceleration. He formulated a theory to describe that relationship and used it to predict the behaviour of objects other than apples. His predictions turned out to be correct, if not always precise enough for those who came later.
Contrast that with how science is increasingly done today. Facebook's machine learning tools predict your preferences better than any psychologist. AlphaFold, a program built by DeepMind, has produced the most accurate predictions yet of the structures of human proteins. Both are silent on why they work.
Lift the curtain on these programs and you cannot see the mechanism. They offer no set of rules for converting this into that. They simply work, and do so well. We witness the social effects of Facebook's predictions daily. AlphaFold's impact has yet to be felt, but many are convinced it will change medicine.
Theory has taken a back seat. In 2008, Chris Anderson, then editor-in-chief of Wired magazine, argued that computers were better at finding relationships in data than we are, and that our theories were being exposed as oversimplifications of reality. The old scientific method, he declared, would soon be obsolete: we would stop looking for causes and be satisfied with correlations.
The story of Newton's apple tree is apocryphal. Photograph: Historical Picture Archive/Alamy
With the benefit of hindsight, we can say that much of what Anderson foresaw has come to pass. The complexity that the data reveal cannot be captured by theory. As Peter Dayan, director of the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, puts it, the datasets have leapfrogged over our ability even to write theories that usefully describe them. We don't know what such theories would look like.
Yet Anderson's prediction of the end of theory looks premature. There are several reasons why theory refuses to die, despite the success of prediction engines such as Facebook's and AlphaFold. All of them force us to ask: what is the best way to acquire knowledge, and where does science go from here?
The first reason is that artificial intelligences, which learn from data without having to be fed explicit instructions, are themselves fallible. Think of the prejudice that has been documented in search engines.
The second is that humans turn out to be deeply uncomfortable with theory-free science. We don't like dealing with a black box; we want to know why.
And third, there may still be plenty of theory of the traditional kind, the kind graspable by humans, that is useful but has yet to be uncovered.
Theory is not dead, then, but it is changing. The theories that make sense when you have huge amounts of data look quite different from those that make sense when you have small amounts.
One group of researchers has been using neural nets to help improve the theories in their domain: human decision-making. The leading theory of how people make decisions when economic risk is involved is prospect theory, formulated by Daniel Kahneman and Amos Tversky in the 1970s; it later won Kahneman a Nobel prize. The idea at its core is that people are sometimes rational, but not always.
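The article does not spell out the formulas, but prospect theory's two core ingredients, a value function that weighs losses more heavily than gains and a weighting function that distorts probabilities, can be sketched in a few lines. The sketch below is a simplified, non-cumulative version with commonly cited illustrative parameter values, not the researchers' own model.

```python
# A minimal sketch of prospect theory's two key ingredients:
# a value function in which losses loom larger than gains, and a
# probability-weighting function that overweights small probabilities.
# Parameter values are illustrative defaults, not fitted to any data.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to a reference point."""
    if x >= 0:
        return x ** alpha               # diminishing sensitivity to gains
    return -lam * ((-x) ** alpha)       # losses are weighted more heavily

def weight(p, gamma=0.61):
    """Decision weight attached to an outcome with objective probability p."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def prospect_value(outcomes):
    """Overall value of a gamble given as (probability, payoff) pairs."""
    return sum(weight(p) * value(x) for p, x in outcomes)

# Example: a coin flip that wins $100 or loses $100 comes out negative,
# which is why most people decline such a bet.
print(prospect_value([(0.5, 100), (0.5, -100)]))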
Daniel Kahneman, who formulated prospect theory with Amos Tversky. Photograph: Richard Saker
In June of last year, the group described how they had trained a neural net on a vast dataset of decisions people took in risky scenarios, then compared how accurately it predicted further decisions against the predictions of prospect theory. The neural net showed its worth in highlighting where the theory broke down, that is, where its predictions failed.
Those counter-examples were revealing, exposing more of the complexity that exists in real life. Humans are constantly weighing up probabilities based on incoming information. When too many competing probabilities pile up for the brain to compute, people might switch to a different strategy, guided by a rule of thumb instead, and a stockbroker's rule of thumb might not be the same as that of a teenage bitcoin trader.
The researchers are essentially using the machine learning system to identify the cases where the theory fails; the bigger the dataset, the more the AI learns. The end result is not a theory in the traditional sense of a precise claim about how people make decisions, but a set of claims subject to certain constraints. One way to picture it is as a branching tree of rules, which is difficult to describe mathematically, let alone in words.
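The piece does not describe the group's actual pipeline, but the general recipe, letting a black-box model flag the cases where a hand-written theory's predictions fail, can be sketched roughly as follows. The synthetic data, the stand-in `theory_predict` rule and the scikit-learn classifier are hypothetical, for illustration only.

```python
# Hypothetical sketch: train a black-box model on choice data, then flag
# the cases where a tidy hand-written theory gets it wrong but the model
# gets it right. Data, theory and model are all stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: features describing a risky choice (probabilities, payoffs, ...).
X = rng.normal(size=(5000, 6))
# "True" choices, generated by a messy process the theory doesn't fully capture.
y = (X @ rng.normal(size=6) + 0.5 * np.sin(X[:, 0] * X[:, 1]) > 0).astype(int)

def theory_predict(X):
    """Stand-in for a tidy theory: a simple rule over two of the features."""
    return (X[:, 0] + X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X_train, y_train)

net_pred = net.predict(X_test)
theory_pred = theory_predict(X_test)

# Cases where the net is right and the theory is wrong are the interesting
# counter-examples: they show where the theory breaks down.
counter_examples = X_test[(net_pred == y_test) & (theory_pred != y_test)]
print(f"net accuracy:    {(net_pred == y_test).mean():.2f}")
print(f"theory accuracy: {(theory_pred == y_test).mean():.2f}")
print(f"counter-examples flagged: {len(counter_examples)}")
```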
What the psychologists are discovering is still just about explainable. But as they reveal more and more complexity, it will become less so; the logical culmination of that process is the kind of theory-free predictive engine embodied by Facebook or AlphaFold.
Some scientists are comfortable with that, even eager for it. When the speech recognition pioneer Frederick Jelinek said that every time he fired a linguist, the performance of the speech recogniser went up, he meant that theory was holding back progress.
Take protein structures. If you want to design a drug that blocks or enhances a given protein's action, you need to know its structure. AlphaFold was trained on structures determined experimentally, using techniques such as X-ray crystallography, and its predictions are currently considered more reliable for proteins where some experimental data is available than for those where there is none. But Janet Thornton, former director of the EMBL European Bioinformatics Institute near Cambridge, says that it isn't the lack of a theory that will stop drug designers from using it. She says AlphaFold will improve our understanding of life and drugs.
A human protein structure modelled by the AlphaFold program. Image: EMBL-EBI
Others are less comfortable with where science is heading. Critics point out that neural nets can throw up spurious correlations, especially if they are trained on small datasets, and that all datasets are biased, because scientists don't collect data evenly or neutrally, but always with certain assumptions in mind. As one philosopher of science puts it, the data landscape we are using is heavily skewed.
Dayan doesn't think these problems are insurmountable. He points out that humans are biased too, and, unlike AIs, in ways that are hard to interrogate or correct. Ultimately, if a theory produces less reliable predictions than an AI, it will be hard to argue that the machine is the more biased of the two.
A tougher obstacle to overcome may be our human need to explain the world in terms of cause and effect. In a paper published in the journal Neuroscience, Bingni Brunton and Michael Beyeler wrote that this need for interpretability may have prevented scientists from making novel insights about the brain. But they also sympathised: if those insights are to be translated into useful things such as drugs and devices, computational models need to yield insights that are explainable to clinicians, end users and industry.
Explainable artificial intelligence has accordingly become a hot topic. But we might still be faced with a trade-off: how much predictability are we willing to give up for interpretability?
One AI scientist at New York University gives the example of a magnetic resonance image. It takes a lot of raw data, and hence a lot of scanning time, to produce such an image, which isn't necessarily the best use of that data if your goal is to detect cancer. You could instead train an AI to identify the smaller portion of the raw data that is sufficient to produce an accurate diagnosis, as validated by other methods. But patients and radiologists remain wedded to the image. Humans, he says, are more comfortable with a 2D image our eyes can interpret.
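As a rough illustration of that idea, a model can be asked to find the handful of raw measurements that carry most of the diagnostic signal, rather than reconstructing a full image first. The synthetic "measurements" and the scikit-learn feature selector below are hypothetical stand-ins, not the scanner pipeline the scientist describes.

```python
# Toy sketch: find the small subset of raw measurements that is enough
# for an accurate diagnosis, instead of using all of them to build an image.
# Synthetic data; only a few of the 500 "measurements" are informative.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 500))           # 500 raw measurements per scan
signal = X[:, [3, 47, 311]].sum(axis=1)    # only three of them matter here
y = (signal + 0.5 * rng.normal(size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Keep only the 10 most informative measurements, then classify.
model = make_pipeline(SelectKBest(f_classif, k=10),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"accuracy using 10 of 500 measurements: {model.score(X_test, y_test):.2f}")
```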
A patient undergoing an MRI scan in Moscow. Photograph: Valery Sharifulin
The final objection to post-theory science is that useful old-style theory, that is, generalisations extracted from discrete examples, likely remains to be discovered, and that only humans can discover it, because doing so requires intuition: a kind of instinctive homing in on those properties of the examples that are relevant to the general rule. In order to come up with his second law, Newton had to ignore some data. He had to imagine, for example, that things were falling in a vacuum, free of air resistance.
In Nature last month, mathematician Christian Stump called this intuitive step the core of the creative process. But the reason he was writing about it was that, for the first time, an AI had pulled it off: DeepMind had built a machine-learning program that prompted mathematicians towards new insights in the mathematics of knots.
There is barely a stage of the scientific process where AI hasn't left its footprint, and the more we draw it into our quest for knowledge, the more it changes that quest. We will have to learn to live with that, but we can reassure ourselves about one thing: we are still asking the questions. As Picasso put it in the 1960s, computers are useless; they can only give you answers.