How do you perceive the world around you? How do you see the shapes on the screen or page right now, and further recognize them as words when reading this article? 

Initially, you might expect that perception occurs in the following steps: your sensory organs, like your eyes and ears, start by taking in some information about the world. The brain then uses that information to create your internal, cohesive image of the world: your conscious experience. This view, in which perception is built up from the senses, is known as a bottom-up approach.

However, a theory of cognition known as predictive processing, put forth by neuroscientist Karl Friston in 2010, postulates a top-down approach to perception, and it makes a claim that may seem quite counter-intuitive.

According to the predictive processing framework of perception and cognition, your perception of the world occurs more like this: the brain continuously generates internal models, or best guesses, about what the external world should look like at any given moment based on past experiences and stored knowledge. 

It then compares these predictions with incoming sensory data. When there is a mismatch between prediction and actual sensory input (a prediction error), the brain updates its models to reduce this error. In this way, what you perceive at any moment is not a direct representation of the outside world but rather the brain's interpretation or hypothesis of that world, constantly refined by prediction errors.

This ongoing, dynamic process means that your perception is essentially a constructive act, shaped by both the external sensory inputs and the internal predictions of your brain, leading to a subjective experience of reality that is both unique and adaptable.
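In computational terms, this cycle of predicting, comparing, and correcting can be pictured as a simple loop. The sketch below is a toy Python example with made-up numbers, not a model of real neurons: a prediction is compared against a sensory sample, and the internal estimate is nudged in proportion to the prediction error.

```python
# Toy predict-compare-update loop (illustrative numbers only, not a neural model).

sensory_input = 10.0   # the "true" signal arriving from the world
estimate = 2.0         # the brain's current best guess (its internal model)
learning_rate = 0.3    # how strongly prediction errors update the model

for step in range(10):
    prediction = estimate                  # the model predicts the input
    error = sensory_input - prediction     # prediction error: mismatch with reality
    estimate += learning_rate * error      # update the model to reduce the error
    print(f"step {step}: prediction={prediction:.2f}, error={error:.2f}")

# Over successive steps the prediction converges toward the sensory input,
# i.e. the prediction error shrinks.
```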

Predicting the future

The concept of a predictive brain, while relatively new in neuroscience, was anticipated a millennium ago by the medieval mathematician and physicist Ibn al-Haytham. According to the 1985 translation of his Book of Optics, al-Haytham claimed that “many visible properties are perceived by judgment and inference.”

The German physician and physicist Hermann von Helmholtz first formalized the ideas behind the modern predictive processing theory of the brain in the 1860s, arguing that the brain is a hypothesis tester. According to this theory, the brain develops predictions about what the world may be like based on past information accrued through our experience of the world, known as priors. The brain then tests these predictions against incoming sensory information using unconscious perceptual inference.

Neuroscientists model this process through a version of Bayes' theorem, a mathematical formula for calculating conditional probabilities. In this model, the probability of a particular hypothesis given the sensory data we perceive (the posterior) is proportional to the product of two terms: the prior probability of the hypothesis, before any sensory input is taken into account, and the likelihood that the hypothesis explains our sensory input and evidence.

For example, when crossing a street in an urban area you might hear a roaring engine and see a blinding light. To work out the most likely cause, your brain compares the various hypotheses generated by your internal model, such as (a) a car coming to a halt or (b) a massive lion with a headlamp chasing you down.

Whichever of these your brain predicts to be most likely is what you perceive.
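To make this concrete, here is a minimal sketch in Python of how such a Bayesian comparison might look. The probabilities and hypothesis names are entirely made up for illustration; the brain obviously does not run a labelled dictionary of hypotheses.

```python
# Minimal sketch of Bayesian hypothesis comparison (illustrative numbers only).
# Bayes' rule: P(hypothesis | evidence) is proportional to
#              P(hypothesis) * P(evidence | hypothesis)

# Prior probabilities: how plausible each hypothesis is before any sensory input.
priors = {
    "car braking nearby": 0.20,
    "lion with a headlamp": 0.000001,
}

# Likelihoods: how well each hypothesis explains a roaring engine and a bright light.
likelihoods = {
    "car braking nearby": 0.9,
    "lion with a headlamp": 0.5,
}

# Unnormalized posteriors: prior times likelihood for each hypothesis.
posteriors = {h: priors[h] * likelihoods[h] for h in priors}

# Normalize over just these two hypotheses so the posteriors sum to 1.
total = sum(posteriors.values())
posteriors = {h: p / total for h, p in posteriors.items()}

# The hypothesis with the highest posterior is, on this account, what you perceive.
print(max(posteriors, key=posteriors.get))  # prints: car braking nearby
```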

However, neuroscientists are still unsure how our brains calculate such complex probabilities unconsciously and consistently. Our neurons clearly aren't sitting at a blackboard working out each probability. Some still-unknown property of how neurons are organized and communicate enables them to make complex, accurate predictions.

A significant advantage of this predictive explanation of perception is that it can provide a reason for the specific organization and connections in the brain. 

Take the visual cortex, for example. In this region, information flows through a series of processing levels, from basic feature detection — such as detecting edges and colours in the primary visual cortex — to integrating these features into more complex shapes and objects. According to predictive processing, each level of this hierarchy is involved in generating predictions and sending them down to lower levels in the hierarchy, while simultaneously receiving prediction errors from these lower levels when reality misaligns with predictions. 

This two-way communication allows for the refinement of internal models of perception at every level, ensuring that the brain’s predictions become increasingly accurate, enabling a nuanced and detailed representation of the visual world.
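A rough way to picture this two-way message passing is the toy Python sketch below. It is not a model of the visual cortex; it simply shows two processing levels above a sensory input, each sending a prediction downward and updating itself on the error signal that comes back up. All values and rates are arbitrary.

```python
# Toy three-stage hierarchy (sensory input -> level 1 -> level 2), illustrative only.
# Each level sends a prediction down and receives a prediction error back up.

sensory_input = 8.0    # what actually arrives at the eyes
level1 = 0.0           # lower level, e.g. edge- and colour-like features
level2 = 0.0           # higher level, e.g. an emerging object estimate
rate = 0.4

for step in range(12):
    # Top-down predictions.
    pred_of_level1 = level2      # level 2 predicts what level 1 should be doing
    pred_of_input = level1       # level 1 predicts the incoming sensory signal

    # Bottom-up prediction errors.
    error_input = sensory_input - pred_of_input
    error_level1 = level1 - pred_of_level1

    # Each level updates to reduce the error it receives from below.
    level1 += rate * error_input
    level2 += rate * error_level1

    print(f"step {step}: level1={level1:.2f}, level2={level2:.2f}")

# Both levels gradually settle into states that account for the sensory input.
```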

When predictions go wrong

Despite the incredible power and efficiency the brain possesses, it makes quite a few mistakes. 

To develop predictions, our brains rely on the prior probabilities of hypotheses and on the likelihood that a given hypothesis can explain our sensory inputs. But because your brain needs to carry out these calculations constantly, for just about every new input and every goal-oriented action, it has to take some shortcuts: shortcuts that magicians and illusionists become experts at exploiting.

But these incorrect predictions have implications that lie far beyond falling for mere illusions and tricks. Understanding the role of the predictive mind and its neurobiological basis could be the key to understanding neurodivergent cognition and developmental disorders.

Schizophrenia, with the hallucinations and delusions associated with the disorder, provides a case where predictive processing goes wrong. The predictive model suggests that many of the disorder's characteristic symptoms may arise from the brain's failure to appropriately weigh prediction errors. Instead of correcting the flawed predictions and registering surprise, the damaged feedback loops in the brain of a person with schizophrenia may give inappropriate weight to an incorrect hypothesis, leading to a perceived reality disconnected from the external world.

For instance, vivid hallucinations could be seen as the brain's predictions running without sufficient checks from sensory inputs or prior beliefs, creating perceptions without corresponding external stimuli. And since these internal representations become entrenched when the brain uses them to adjust the weights assigned to other priors, it stands to reason that engaging with hallucinations will only strengthen them, something seen in people with schizophrenia. Delusions might then arise from the brain's attempts to make sense of these faulty predictions, constructing elaborate and seemingly impossible explanations for the misperceived reality.
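One way to picture the weighting problem, purely as a caricature rather than a clinical model, is to add a weight to the update loop from earlier: if prediction errors carry too little weight, the internal belief barely moves toward the sensory evidence, and the prior ends up dominating what is perceived. The function name and numbers below are invented for illustration.

```python
# Caricature of prediction-error weighting (illustrative only, not a clinical model).
# A small error weight means sensory evidence barely corrects the prior belief.

def settle(prior_belief, sensory_input, error_weight, steps=50):
    belief = prior_belief
    for _ in range(steps):
        error = sensory_input - belief     # mismatch between belief and evidence
        belief += error_weight * error     # weighted update toward the evidence
    return belief

# With a reasonable weight, the belief ends up close to the sensory evidence.
print(settle(prior_belief=10.0, sensory_input=0.0, error_weight=0.3))    # ~0.0

# With errors weighted far too weakly, the belief stays near the prior,
# loosely analogous to a perception that stays disconnected from the input.
print(settle(prior_belief=10.0, sensory_input=0.0, error_weight=0.001))  # ~9.5
```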

In the case of autism spectrum disorder (ASD), meanwhile, difficulties with predictive processing might manifest in a different but equally impactful manner, as proposed by Reshanne Reeder, Giovanni Sala, and Tessa van Leeuwen in a 2024 study. Where people with schizophrenia deal with under-inhibited predictions, people with ASD might rely on precise, detail-focused predictions at the expense of more generalized, contextual predictions. This could lead to sensory overload, as the brain struggles to filter out irrelevant sensory information, and to difficulties with social interaction and communication, where predictive processing is crucial for interpreting subtle cues and intentions.

The intense focus on details might also explain the preference for predictable routines people with ASD tend to exhibit, as familiar environments and activities minimize the likelihood of unpredictable, overwhelming sensory input.

Understanding these conditions through the lens of predictive processing may help us gain insight into their underlying mechanisms and open up new avenues for interventions aimed at recalibrating the brain’s predictive models.

In the process of computationally modelling the mind, predictive processing has emerged as a transformative lens through which we understand cognition. By exploring how our brains anticipate and interpret the world around us, this theory might offer a key to unlocking just how humans experience the world.