Your Brain Is an Energy-Efficient 'Prediction Machine'

Our brain, a three-pound mass of tissue encased within a skull, perceives its surroundings. But according to abundant evidence and decades of research, it can't simply be assembling sensory information, as though putting together a jigsaw puzzle. The brain can construct a scene based on the light entering our eyes even when the incoming information is noisy and ambiguous.

Many neuroscientists instead see the brain as a prediction machine: it uses its prior knowledge of the world to generate hypotheses about the causes of incoming sensory information, and those hypotheses give rise to our perceptions. The more ambiguous the input, the greater the reliance on prior knowledge.
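One way to make this trade-off concrete is Bayesian cue combination. The sketch below is a hypothetical illustration (not from the article): for Gaussian priors and likelihoods, each source of information is weighted by its precision (inverse variance), so noisier sensory input automatically pulls the inferred percept toward the prior. All names and numbers are my own.

```python
def posterior_gaussian(prior_mean, prior_var, obs, obs_var):
    """Posterior over a latent cause, given a Gaussian prior and a noisy observation.

    Precision (1/variance) weights each source: the noisier the observation,
    the more the posterior leans on prior knowledge.
    """
    prior_prec = 1.0 / prior_var
    obs_prec = 1.0 / obs_var
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs)
    return post_mean, post_var

# Clear input (low sensory noise): the percept tracks the observation.
clear, _ = posterior_gaussian(prior_mean=0.0, prior_var=1.0, obs=2.0, obs_var=0.1)

# Ambiguous input (high sensory noise): the percept is pulled toward the prior.
ambiguous, _ = posterior_gaussian(prior_mean=0.0, prior_var=1.0, obs=2.0, obs_var=10.0)

print(clear, ambiguous)
```

With a prior centered at 0 and an observation at 2, the clear-input percept lands near the observation while the ambiguous-input percept stays near the prior, matching the qualitative claim above.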

The beauty of the framework is its ability to explain many different phenomena across many different systems.

The growing evidence for this idea is mostly circumstantial and open to alternative explanations; much of it comes from cognitive science and human neuroscience.

Computational models offer a way to understand and test the idea of the predictive brain. Computational neuroscientists have built artificial neural networks that learn to make predictions about incoming information, and these models mimic some behaviors of real brains. Experiments with the models suggest that brains may have had to evolve predictive machinery to satisfy constraints on energy use.
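The following toy sketch (my own construction, not the researchers' actual models) captures the core predictive-coding loop: a unit predicts its next input, and only the prediction error drives learning. Transmitting small residual errors rather than raw signals is one intuition for how prediction can economize on neural activity. The signal, learning rate, and single-weight setup are all hypothetical simplifications.

```python
import math

def train_predictor(steps=500, lr=0.05):
    """Learn to predict x[t+1] from x[t] for a simple repeating signal."""
    w = 0.0          # single predictive weight (hypothetical minimal model)
    errors = []
    for t in range(steps):
        x_now = math.sin(0.3 * t)
        x_next = math.sin(0.3 * (t + 1))
        prediction = w * x_now
        error = x_next - prediction   # prediction error: the signal left to transmit
        w += lr * error * x_now       # error-driven weight update (delta rule)
        errors.append(abs(error))
    return errors

errors = train_predictor()
early = sum(errors[:50]) / 50     # average error before learning settles
late = sum(errors[-50:]) / 50     # average error after learning
print(early, late)
```

As the weight converges, the average prediction error shrinks: the unit needs to signal less and less about its input, the qualitative point the energy-constraint experiments explore.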

Computational models have strengthened neuroscientists' confidence that brains learn to infer the causes of their sensory inputs. The details of how the brain does this remain hazy, but the broad brushstrokes are becoming clearer.

Unconscious Inference in Perception

Inference-based perception may seem like a counterintuitively complex mechanism, but scientists have a long history of turning to it because other explanations fall short. Even a thousand years ago, the Muslim astronomer and mathematician Hasan Ibn al-Haytham highlighted a form of it in his Book of Optics. The idea gathered force in the 1860s, when the German physicist and physician Hermann von Helmholtz argued that the brain infers the external causes of its sensory inputs rather than assembling its perceptions from those inputs alone.

Helmholtz expounded this concept of unconscious inference to explain bistable or multistable perception, in which a single image can be perceived in more than one way. Take the well-known ambiguous image that can be seen as either a duck or a rabbit: our perception keeps flipping between the two animals. Since the image that forms on the retina doesn't change, the perception must be the outcome of an unconscious top-down process of inference about the causes of the sensory data.

During the 20th century, cognitive psychologists continued to argue that perception is a process of active construction drawing on both bottom-up sensory and top-down conceptual inputs. The effort culminated in an influential 1980 paper, "Perceptions as Hypotheses," by the late Richard Langton Gregory, which argued that perceptual illusions are essentially the brain's incorrect guesses about the causes of sensory impressions. Meanwhile, computer vision scientists tried to use bottom-up reconstruction to enable computers to see without an internal model for reference.