
Blog post | Last updated: 22 Jul 2025

The crisis of informed decisions: Our brains in the epistemic architecture of digital spaces

 

Disclaimer: The views and opinions expressed in the blog articles belong solely to the author of the content, and do not necessarily reflect the European Commission's perspectives on the issue.

The crisis of informed decision-making in modern democracy

Democratic societies rest on the foundational principle of a well-informed citizenry. The idea is simple: when citizens have access to accurate information, they can make rational decisions aligned with their values, needs and experiences, thereby shaping policy in the public interest. In theory, the digital age—marked by unprecedented access to information—should empower citizens more than ever before, and Western democracies should be flourishing. Yet this does not appear to be the case.

Information often fails in its primary function: to inform. Rather than revising their views when confronted with new evidence, individuals often process information in ways that preserve or even reinforce their preexisting beliefs, especially when the facts challenge their opinions. As a result, initiatives such as fact-checking often do little to change minds, and political polarisation is on the rise.

Understanding why this happens is not an abstract concern—it is a democratic imperative. The European Union faces complex, pressing challenges such as geopolitical tensions, climate change, and social inequality, all of which demand coordinated policy responses underpinned by public support. If citizens struggle to engage meaningfully with relevant information, the legitimacy and efficacy of democratic decision-making suffer.

When information fails to inform

There is a widespread and comforting belief that simply providing people with credible information will lead them naturally to update their views, correct their misperceptions, and converge toward a shared understanding of reality. This assumption, often referred to as the 'information deficit model', underpins much of the work carried out by public communicators, scientific advisors, and researchers.

But is that always the case? Evidence suggests otherwise. While misinformation can deepen societal divisions, providing accurate information does not consistently bridge them. Although strong evidence can be persuasive, there are also cases where exposure to accurate information actually contributes to heightened polarisation. Different groups react differently to the same information, sometimes in ways that reinforce existing beliefs.

How is this possible? 'Predictive coding', a concept in cognitive science, suggests that the brain constantly generates predictions about the world and updates them based on new information. Whether we revise our beliefs depends, for example, on how unexpected the new input is and how confident we are in what we already believe. This post explores how predictive coding helps explain why our brains sometimes resist updating beliefs and how digital information environments may interact with our cognitive mechanisms. 

After all, three-quarters of Europeans—especially young people—access news online, with platforms competing to monetise their attention. Rather than fostering a shared reality, today’s platform architectures have the potential to reinforce existing beliefs, deepen divisions, and shape perceptions of 'truth'.

The digital information architecture

The online world isn’t just a new information source. It brings with it distinct structural features that shape how content is encountered and navigated. The 'marketplace of ideas', in this context, proves to be a hollow metaphor: under the banner of promoting free speech, today’s platform architectures often do the exact opposite—distorting not just what can be said, but what can be heard. 

  • Algorithmic Personalisation: Much of what we encounter online is not the result of deliberate choice but of algorithmic curation. Recommender systems deliver content tailored to predicted engagement, itself inferred from our past behaviour. This personalisation isn’t inherently problematic—indeed, it can be useful when recommending movies or restaurants. But it becomes troubling when the political content users are exposed to is filtered and ranked not by epistemic value or integrity but by engagement potential (see the sketch after this list).
  • The Attention Economy: Digital environments operate within an attention economy, where platforms compete for user engagement—a dynamic that rewards content most likely to seize an individual’s attention, often favouring the sensational or emotionally charged. The result is a self-reinforcing cycle: users are drawn to whatever captivates them most, while being exposed to increasingly similar content. Critically, this engenders an epistemic imbalance.
  • Epistemic Community Effects: Online platforms enable individuals with similar, and sometimes fringe, views to connect across geographic and social boundaries. The upshot of this is that belonging to such an online community can foster and perpetuate the illusion that certain (marginal) opinions are far more mainstream than they actually are.
  • Double-Counting Bias: Users may mistakenly perceive correlated information as independent. For instance, if the same news article is retweeted multiple times by different accounts, a recipient may falsely believe they are encountering multiple, independent confirmations of the same information, thereby inflating the perceived evidential value of the information.
  • Role of Social Cues: Social media platforms use engagement metrics—likes, shares, and comments—which can distort perceptions of how widely certain views are accepted. Biased metrics can produce a perception of broad support for a belief that is, in fact, marginal—and/or obscure true support for widely held views.
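
To make the first and fourth of these features more concrete, here is a deliberately simple Python sketch. All item names, scores, and weights are invented for illustration; no real platform's ranking algorithm is implied. It shows items ranked by predicted engagement rather than epistemic quality, and reshares of a single article tallied as if they were independent confirmations.

```python
# Toy illustration (hypothetical, not any platform's actual algorithm) of two
# dynamics described above: engagement-based ranking and double counting.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    epistemic_quality: float      # 0..1, accuracy/integrity of the content
    sensationalism: float         # 0..1, how emotionally charged it is
    similarity_to_history: float  # 0..1, match with the user's past clicks

def engagement_score(item: Item) -> float:
    # Engagement is driven by arousal and familiarity, not by accuracy.
    return 0.6 * item.sensationalism + 0.4 * item.similarity_to_history

feed = [
    Item("Measured policy analysis", 0.9, sensationalism=0.2, similarity_to_history=0.3),
    Item("Outrage-bait about the out-group", 0.3, sensationalism=0.9, similarity_to_history=0.8),
]

# Ranking by engagement puts the low-quality item first.
ranked = sorted(feed, key=engagement_score, reverse=True)
print([item.title for item in ranked])

# Double-counting bias: ten reshares of one article are perceived as ten
# independent confirmations, although the number of independent sources is one.
reshares_of_same_article = 10
perceived_confirmations = reshares_of_same_article
independent_sources = 1
print(perceived_confirmations, independent_sources)
```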

These features of the digital information environment are well-documented. One might ask: So, what? After all, humans don’t passively absorb what they see online. And it’s true—we reflect, interpret, and sometimes resist. But that doesn’t mean the structure of our digital information system is neutral or benign; on the contrary, it is influential—and in ways that merit grave concern. As the next section explores, it may systematically interact with our cognition, reinforcing existing beliefs and, over time, likely fuelling deeper polarisation.

The brain as a prediction machine: predictive coding

At every moment, our brains are inundated with sensory input. To make sense of this flood of information, we don’t process everything from scratch. Instead, the brain draws on past experience to generate predictions about what will happen next—predictions rooted in prior beliefs. When new information arrives, these beliefs are updated (or not) based on the degree of mismatch between what was expected and what was encountered. This mismatch is known as a prediction error.

In this sense, predictive coding proposes that the brain operates much like a Bayesian inference machine. It constructs an internal model of the world by constantly generating expectations and testing them against incoming data. In this framework, perception is not a passive reception of reality but an active construction—shaped by how prior beliefs interact with new observations:

  • When the world neatly aligns with our expectations, we experience confirmation—our beliefs not only remain intact but are consolidated, becoming more fervently held.
  • When we encounter unexpected information, the brain registers a prediction error. At this point, it must make a decision: should it revise and update the belief to better reflect reality? Or reinterpret—or dismiss—the new information so as to preserve the prior belief/model?

This decision hinges on (a) how reliable we perceive the newly encountered information to be, and (b) how confidently we hold our prior belief.

If a prior belief is held with low confidence and the fresh evidence is deemed highly reliable, we will update and shift our belief in the direction of the evidence (Figure 1A). Conversely, when the prior belief is held with high certainty, and the incoming evidence is noisy or ambiguous, the prior belief will dominate, and the new belief (or posterior belief) will remain largely unchanged (Figure 1B). 
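
For readers who want to see the arithmetic behind Figure 1, here is a minimal sketch of a precision-weighted Gaussian update. The numbers are purely illustrative and chosen only to reproduce the two qualitative cases.

```python
# Minimal, illustrative precision-weighted belief update under Gaussian
# assumptions (the numbers are made up; this is not fitted to any real data).

def update_belief(prior_mean, prior_precision, evidence_mean, evidence_precision):
    """Combine a Gaussian prior with Gaussian evidence.

    Precision = 1 / variance. The posterior mean is a precision-weighted
    average: whichever source is more precise pulls the posterior towards it.
    """
    posterior_precision = prior_precision + evidence_precision
    posterior_mean = (prior_precision * prior_mean
                      + evidence_precision * evidence_mean) / posterior_precision
    return posterior_mean, posterior_precision

# Case A (Figure 1A): weak prior, reliable evidence -> belief shifts towards evidence.
print(update_belief(prior_mean=0.0, prior_precision=1.0,
                    evidence_mean=1.0, evidence_precision=9.0))   # mean ~0.9

# Case B (Figure 1B): confident prior, noisy evidence -> belief barely moves.
print(update_belief(prior_mean=0.0, prior_precision=9.0,
                    evidence_mean=1.0, evidence_precision=1.0))   # mean ~0.1
```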

Figure 1. In the top graph, sensory evidence carries greater precision than the prior belief, resulting in a stronger influence on the posterior. In the bottom graph, the prior belief holds greater precision, exerting a dominant influence over the posterior. (recreated from Williams 2020).

Put simply: the more certain we are about what we already believe, the more rigidly we tend to cling to those beliefs. And what fuels this certainty, more often than not, is prior experience—that is, information we have previously encountered.

Critically, current digital information environments introduce a fundamental asymmetry in the content we encounter. Algorithms tend to prioritise material similar to what has previously captured our attention. This skewed exposure is highly likely to reinforce the confidence in our existing priors, making them increasingly resistant to change over time. In this way, the architecture of digital platforms helps explain why accurate information so often fails to convince.

Consider, for instance, how this asymmetry plays out in the context of climate change. If a user has previously interacted with content expressing scepticism or denial, the algorithm learns to prioritise similar material, gradually saturating their feed with reinforcing signals. Over time, this repeated exposure solidifies the belief, making contradictory information seem less plausible, or even suspect. In this way, the system not only reflects but actively strengthens the user’s original belief.
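
Continuing the toy model above, and purely as an illustration of this dynamic (the numbers and the belief scale are invented), repeated confirmatory exposure inflates the precision of the prior until a single well-sourced contradicting item barely moves the posterior.

```python
# Toy continuation of the precision-weighted sketch above: repeated
# confirmatory exposure inflates prior precision, so later disconfirming
# evidence has little effect. Belief is coded on an arbitrary scale where
# -1 = "climate change is not a problem" and +1 = "climate change is a
# serious problem".

def update_belief(prior_mean, prior_precision, evidence_mean, evidence_precision):
    # Precision-weighted Gaussian update (same rule as in the previous sketch).
    posterior_precision = prior_precision + evidence_precision
    posterior_mean = (prior_precision * prior_mean
                      + evidence_precision * evidence_mean) / posterior_precision
    return posterior_mean, posterior_precision

belief_mean, belief_precision = -0.5, 1.0   # mildly sceptical, weakly held

# Ten algorithmically recommended items, all confirming the sceptical view.
for _ in range(10):
    belief_mean, belief_precision = update_belief(
        belief_mean, belief_precision,
        evidence_mean=-0.8, evidence_precision=2.0,
    )
print(round(belief_mean, 2), belief_precision)   # roughly -0.79, precision 21.0

# One well-sourced, contradicting article now barely shifts the belief.
belief_mean, belief_precision = update_belief(
    belief_mean, belief_precision,
    evidence_mean=+0.9, evidence_precision=3.0,
)
print(round(belief_mean, 2))   # still clearly sceptical (around -0.6)
```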

But it is not only algorithms in the attention economy that impact belief updating. Other structural features of the digital information environment—such as epistemic community effects, double-counting bias, and the signalling power of social cues—also influence how or whether we update our beliefs.

For example, being virtually surrounded by like-minded individuals can confer a sense of legitimacy on even highly questionable beliefs. This social reinforcement inflates our confidence in those (prior) beliefs, making them more resistant to revision when confronted with opposing, fact-based evidence.

Over time, these features of the digital environment may come to systematically determine which information we take seriously—and which we discount.

Active Inference

Our inflated priors don’t just influence how we interpret information—they also shape how we engage actively with the world around us. According to an influential framework known as Active Inference, agents don’t merely update beliefs based on sensory input; they also act on the world in ways that confirm their expectations. In other words, we don’t passively wait for the world to catch us unawares—we actively seek out and select environments that diminish the uncertainty of our internal models, those mental maps we use to make sense of reality.

Perception and action, in this framework, are two sides of the same coin: perception adjusts our internal models to fit external input, while action changes the environment to align with those models (see Figure 2).

Figure 2. Both perception and action serve to reduce the discrepancy between predictions and observations. Perception does so by modifying beliefs; action does so by reshaping the environment, bringing observations closer to what is predicted. (adapted from Parr et al., 2022).
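
A deliberately stripped-down sketch of this loop (it is not a faithful implementation of the full active inference formalism, and the feeds and numbers are hypothetical) shows how action alone can keep prediction error low: the agent keeps sampling the environment that matches its model, so the model never needs to change.

```python
# Simplified perception/action loop: the agent can reduce prediction error
# either by updating its model (perception) or by choosing which environment
# it samples (action). Feeds and numbers are hypothetical.

import random

random.seed(0)

# The agent's internal model: expected "tone" of coverage on a topic,
# on a scale from -1 (reassuring) to +1 (alarming).
model_mean = 0.8        # the agent expects alarming coverage
learning_rate = 0.1     # how strongly perception nudges the model

# Two environments the agent can choose to sample (its "action" space).
feed_tone = {"confirming_feed": 0.8, "challenging_feed": -0.2}

for step in range(5):
    # ACTION: select the feed whose typical content minimises the expected
    # gap between prediction and observation.
    choice = min(feed_tone, key=lambda name: abs(model_mean - feed_tone[name]))

    # PERCEPTION: observe an item from that feed and nudge the model towards it.
    observation = random.gauss(feed_tone[choice], 0.1)
    model_mean += learning_rate * (observation - model_mean)
    print(step, choice, round(model_mean, 2))

# The agent keeps selecting the confirming feed, prediction error stays small,
# and the model barely moves: certainty is preserved by action, not by evidence.
```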

For the most part, humans are certainty-seekers. We strive to make sense of the world around us because unresolved uncertainty tends to be a restless, unpleasant state to be in. And there are good reasons our brains work this way. 

If we were to update our internal models indiscriminately, treating all inputs as equally valid, we would struggle to distinguish meaningful patterns in the world from noise. Our predictive systems would falter, and in doing so, would fragment our sense of reality, undermining our ability to act purposefully. 

Seeking out certainty is thus not an inherent flaw; ‘it’s not a bug, it’s a feature’, one might say. However, when this drive for certainty plays out on the epistemically unbalanced terrain of digital environments, it amplifies a risk: our beliefs may become increasingly rigid—even radicalised.

A recent study illustrates this effect: negative content about out-groups was found to boost user engagement nearly tenfold compared to negativity about in-groups. This can be understood as a manifestation of Active Inference in action—our engagement with content reflects our attempts to minimise prediction error by selecting environments (and narratives) that fit our internal expectations. The critical point, however, is that it is not only us, the users, who do the content curation. Our preference for negative news about out-groups is learned by algorithms, which in turn serve up the same content to us over and over, artificially amplifying the effects of our certainty-seeking tendencies.

Yet there remains a vital element that we’ve not addressed so far: whether we choose to reject certain types of content—or, conversely, allow them to shape our internal models—depends on a further component: motivation.

Motivation

Epistemic motivation—the underlying needs that drive information-seeking—has been integrated into the Active Inference framework to explain how motivational forces shape the process of belief updating.

Motivation plays a decisive role in the confidence we assign to prior beliefs. For example, someone with a deep investment in a political worldview may have a strong need for specific certainty—focused on a desired conclusion—and be motivated to dismiss disconfirming information. Accepting disconfirming information would challenge their internal coherence, and potentially destabilise their sense of self. They might even intensify their prior beliefs, adopting more extreme positions in reaction to counter-evidence, as a way to avoid epistemic discomfort. 

On the other hand, someone with a strong need for nonspecific certainty—driven by a motivation to arrive at the correct answer—may be more willing to update their beliefs when presented with compelling evidence, regardless of whether it aligns with their worldview. In both cases, belief updating becomes entangled with psychological needs. This entanglement has led Kruglanski and colleagues to a striking conclusion: 'All thinking is wishful thinking'.
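
One way to express this entanglement in the vocabulary of the earlier sketches is to let motivation modulate the precision granted to incoming evidence before the update is applied: a strong need for specific certainty discounts incongruent evidence, whereas a need for nonspecific certainty takes it at face value. This is a hypothetical toy model, not an established formalisation of Kruglanski's account.

```python
# Hypothetical toy model: motivation modulates the precision (weight) granted
# to incoming evidence before a precision-weighted update is applied.

def update_belief(prior_mean, prior_precision, evidence_mean, evidence_precision):
    # Precision-weighted Gaussian update, as in the earlier sketches.
    posterior_precision = prior_precision + evidence_precision
    posterior_mean = (prior_precision * prior_mean
                      + evidence_precision * evidence_mean) / posterior_precision
    return posterior_mean, posterior_precision

def motivated_update(prior_mean, prior_precision, evidence_mean,
                     evidence_precision, need_for_specific_certainty):
    """Discount evidence that contradicts the prior.

    need_for_specific_certainty in [0, 1]: 0 = accuracy-driven (evidence taken
    at face value), 1 = strongly invested in a particular conclusion.
    """
    contradicts = evidence_mean * prior_mean < 0   # evidence points the other way
    if contradicts:
        evidence_precision *= (1.0 - need_for_specific_certainty)
    return update_belief(prior_mean, prior_precision,
                         evidence_mean, evidence_precision)

# Same prior, same evidence, different motivations.
prior = (-0.6, 4.0)      # a confidently held position
evidence = (+0.8, 4.0)   # strong evidence pointing the other way

print(motivated_update(*prior, *evidence, need_for_specific_certainty=0.0))
# accuracy-motivated: belief moves to about +0.1

print(motivated_update(*prior, *evidence, need_for_specific_certainty=0.9))
# conclusion-motivated: belief stays near -0.5
```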

Rethinking 'informed' decision-making—when algorithms threaten epistemic justice 

If all thinking is wishful thinking, what happens when the most powerful information systems of our time are designed to cater to our wishes? 

When the scope of what we can believe is quietly pre-structured by digital systems, the very concept of an 'informed decision' becomes profoundly compromised. In this context, what we call 'informed' may, in fact, reflect an algorithmically curated certainty—a narrowing of epistemic possibilities rather than an expansion of knowledge or understanding.

Can we still assume that a well-informed public is collectively reasoning from a shared reality if exposure to information is driven primarily by engagement-maximising algorithms rather than epistemic value? What role should institutions, platforms, and individuals play in fostering an environment where truly informed decision-making is possible among the citizenry? Addressing these questions requires acknowledging the interplay of cognitive, motivational, and structural factors that shape knowledge uptake.

Outlook

Belief formation is shaped not only by how we process information, but also by the pre-filtering architectures of the platforms that deliver it. Today’s digital information systems amplify the effects of our motivation to seek confirmatory content—not by creating that motivation, but by feeding it—making belief rigidity and polarisation expected outcomes. Reversing this worrying trend would require cultivating the opposite: a motivation toward nonspecific certainty—a willingness to consider alternative views.

Importantly, the structure of our digital information environment did not arise via the laws of nature. It is the product of deliberate design choices made by humans. And we can make different ones. Insights from cognitive science should inform and be actively integrated into efforts to shape policies that govern digital spaces. Legislation such as the Digital Services Act and the AI Act presents critical opportunities to move beyond engagement maximisation and toward epistemically responsible design.

At the same time, the digital age offers remarkable potential. Never before in human history has access to virtually limitless information been so immediate and prevalent. Online platforms invite active user participation, enabling real-time knowledge sharing and public discourse on a truly unprecedented scale. If thoughtfully designed, recommender systems could expose users to a broader range of relevant and diverse perspectives rather than trapping them in epistemic feedback loops. This potential, however, is not self-activating. It is up to us—through thoughtful research, policy and platform design—to ensure that the digital age expands rather than constrains our capacity for genuinely informed decision-making.
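
As a purely illustrative example of what thoughtful design might mean in practice (the scoring rule, field names, and weights below are invented, not a description of any existing system), a recommender could trade a little predicted relevance for viewpoint diversity when assembling a feed.

```python
# Illustrative sketch of one possible epistemically responsible re-ranking:
# score items by predicted relevance, but reward viewpoint novelty so the
# final feed spans a broader range of perspectives. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    relevance: float   # 0..1, predicted relevance to the user's interests
    viewpoint: str     # coarse label of the perspective the item represents

def diversified_ranking(items, k, diversity_weight=0.5):
    """Greedy re-ranking: trade predicted relevance against viewpoint novelty."""
    selected, seen_viewpoints = [], set()
    candidates = list(items)
    for _ in range(min(k, len(candidates))):
        def score(item):
            novelty = 0.0 if item.viewpoint in seen_viewpoints else 1.0
            return (1 - diversity_weight) * item.relevance + diversity_weight * novelty
        best = max(candidates, key=score)
        selected.append(best)
        seen_viewpoints.add(best.viewpoint)
        candidates.remove(best)
    return selected

feed = [
    Item("Sceptical op-ed", 0.9, "sceptical"),
    Item("Another sceptical op-ed", 0.85, "sceptical"),
    Item("IPCC summary explainer", 0.6, "scientific-consensus"),
    Item("Economic-policy analysis", 0.5, "policy"),
]

for item in diversified_ranking(feed, k=3):
    print(item.title)
# Instead of three near-identical items, the feed now spans three viewpoints.
```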