How False Beliefs Spread Online: A New Simulation Approach

Author: Denis Avetisyan


Researchers are using AI-powered agents to model the complex dynamics of misinformation as it travels through social networks.

Misinformation spreads as agents propagate it, a process explored through generative models like Google AI Studio’s Nano Banana, highlighting the dynamics of information diffusion.

This work introduces a framework leveraging persona-driven Large Language Models and a question-answering auditor to simulate misinformation propagation, revealing the impact of agent biases and network topology on factual accuracy.

Despite growing awareness of online misinformation, predicting its spread remains challenging due to the complex interplay of cognitive biases and social dynamics. This paper, ‘Simulating Misinformation Propagation in Social Networks using Large Language Models’, introduces a novel framework employing persona-driven large language models as agents and a question-answering auditor to model and analyze how misinformation evolves within social networks. Experiments reveal that agent biases – particularly those rooted in identity and ideology – accelerate factual degradation, while expert personas promote stability, suggesting a critical link between user characteristics and information fidelity. Can this approach provide actionable insights for mitigating the spread of misinformation in increasingly complex digital ecosystems?


The Illusion of Control: Modeling Misinformation’s Spread

The proliferation of misinformation poses a substantial challenge to both individual judgment and public trust. Contemporary information ecosystems are defined not simply by data volume, but by the speed and complexity of narrative circulation. Traditional approaches to studying information diffusion struggle to capture the interplay between cognitive factors and network topology. Simple models, assuming rational actors or uniform influence, often fail to predict real-world belief formation. A key limitation is the inability to represent how pre-existing beliefs shape information interpretation and sharing. Understanding the combined effects of cognitive biases and social network structure is paramount. Ultimately, we’re not building tools to detect ‘truth’ – we’re building more elaborate ways to repackage comforting illusions.

The system architecture facilitates detailed tracing of factual drift across both homogeneous and heterogeneous information branches through persona-conditioned large language model nodes, auditor interventions, and data recording modules.

Current models treat individuals as passive recipients, overlooking the active role of subjective interpretation.

Simulating the Echo Chamber: Agent-Based Information Flow

A Social Network Simulation models information diffusion, representing individuals as interconnected nodes. This allows for controlled experimentation on how information propagates under various conditions. LLM Agent personas are created through Persona Conditioning, imbuing each simulated individual with specific beliefs, biases, and communication styles. These personas actively filter, interpret, and retransmit data. The simulation utilizes Branch pathways to track information flow, analyzing how divergent narratives emerge.
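The branch mechanics described above can be sketched in a few lines. This is a toy stand-in, not the paper's implementation: a probability table replaces the persona-conditioned LLM call, and the persona names and distortion rates are illustrative assumptions.

```python
import random

# Hypothetical persona -> distortion probability table; a crude proxy for
# persona conditioning (expert personas rarely distort, biased ones often do).
PERSONAS = {
    "expert":   0.05,
    "partisan": 0.40,
    "casual":   0.20,
}

def rewrite(message: str, persona: str, rng: random.Random) -> str:
    """Stand-in for an LLM rewrite: tag the text when the persona mutates it."""
    if rng.random() < PERSONAS[persona]:
        return message + " [distorted]"
    return message

def propagate_branch(seed: str, chain: list[str], rng: random.Random) -> list[str]:
    """Pass a message down a chain of agents, recording every hop."""
    history = [seed]
    for persona in chain:
        history.append(rewrite(history[-1], persona, rng))
    return history

trace = propagate_branch("The trial enrolled 500 participants.",
                         ["casual", "partisan", "expert"],
                         random.Random(0))
```

Each element of `trace` corresponds to one node depth, which is what lets the framework track where along a branch the factual drift occurs.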

Misinformation indices, calculated after each rewrite across a branched network, reveal variations in propagation rates, with the highest rates observed in the top ten branches and the lowest in the bottom ten.

Node Depth within the simulation analyzes how information propagates over multiple iterations, capturing distortion. By tracking information across layers, the model identifies key nodes influencing narrative spread.

The Futile Audit: Quantifying Factual Consistency

A QA-Based Auditor assesses factual accuracy as information propagates, performing fact-checking at each stage. This identifies deviations from established truth and quantifies misinformation. The Auditor operates on the principle that consistent factual validation is crucial for maintaining information integrity.

Analysis of misinformation propagation across heterogeneous branches, each composed of 30 nodes with randomly assigned agents, demonstrates varying propagation rates and severity levels, as reflected in branch and domain averages.

The Auditor calculates a Misinformation Index, providing a quantifiable measure of factual deviation. This index dynamically adjusts based on Auditor Scoring, refining the assessment through weighted criteria. Branch-wise Misinformation Propagation Rate (MPR) provides a granular view of misinformation levels within specific network pathways, revealing an average MPR of 0.64, indicating substantial factual compromise during propagation.
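The paper does not spell out the formulas, so the following is one plausible reading: MPR as the fraction of auditor-flagged rewrites in a branch, and the Misinformation Index as a weighted average over the auditor's scoring criteria. The weights and verdicts below are invented for illustration.

```python
# Sketch of the branch-wise metrics under the assumptions above; the
# paper's exact definitions may differ.

def branch_mpr(flags: list[bool]) -> float:
    """Share of rewrites in a branch the auditor marked as deviating."""
    return sum(flags) / len(flags)

def misinformation_index(severities: list[float],
                         weights: list[float]) -> float:
    """Weighted severity across auditor scoring criteria."""
    return sum(s * w for s, w in zip(severities, weights)) / sum(weights)

flags = [False, True, True, False, True]           # per-node auditor verdicts
mpr = branch_mpr(flags)                            # 3/5 = 0.6
mi = misinformation_index([2.0, 8.0], [1.0, 3.0])  # (2 + 24) / 4 = 6.5
```

Averaging `branch_mpr` across all branches is what yields a single network-level figure like the reported 0.64.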

Domain Silos and Inevitable Decay: Insights & Mitigation

Domain-wise Misinformation Propagation Rate (MPR) enables comparative analysis of misinformation across subject areas. This reveals distinct patterns, with certain domains proving more susceptible. The simulation highlights the significant influence of Credibility Weighting on information diffusion.

A heatmap visualization of misinformation propagation rates across homogeneous branches, utilizing 21 large language model agents and 10 news domains, indicates a spectrum of misinformation severity ranging from factual errors to propaganda.

Analysis reveals a critical role for Echo Chamber formation in amplifying misinformation. Heterogeneous branches exhibit a significantly higher average MPR (0.72) compared to homogeneous branches (0.56). Furthermore, 85% of branches propagated propaganda, and 78% of domains consistently resulted in propaganda-level misinformation. A substantial 68% of branches generated a Misinformation Index (MI) greater than 5, indicating a high prevalence of propaganda. The persistent propagation of misinformation, even in controlled simulations, suggests that novelty isn’t the problem—it’s the inevitability of decay.
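The composition comparison reduces to grouping branches by type and thresholding the Misinformation Index. The records below are toy values chosen only to echo the reported pattern (heterogeneous around 0.72, homogeneous around 0.56, MI above 5 counted as propaganda), not the paper's actual data.

```python
from statistics import mean

# Toy branch records: (composition, MPR, Misinformation Index).
branches = [
    ("heterogeneous", 0.72, 6.1),
    ("heterogeneous", 0.74, 7.0),
    ("homogeneous",   0.55, 4.2),
    ("homogeneous",   0.57, 5.6),
]

def avg_mpr(kind: str) -> float:
    """Mean MPR over branches of one composition type."""
    return mean(m for k, m, _ in branches if k == kind)

def propaganda_share(threshold: float = 5.0) -> float:
    """Fraction of branches whose MI exceeds the propaganda threshold."""
    return sum(mi > threshold for _, _, mi in branches) / len(branches)
```

The gap between `avg_mpr("heterogeneous")` and `avg_mpr("homogeneous")` is the echo-chamber effect size; `propaganda_share` corresponds to the reported percentage of branches exceeding MI 5.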

The pursuit of simulating social networks with Large Language Models feels less like innovation and more like meticulously documenting the inevitable. This paper’s focus on persona-driven agents and motivated reasoning merely formalizes what anyone who’s spent five minutes online already knows: people believe what they want to believe. As Barbara Liskov once observed, “It’s one of the most difficult things about software development – deciding what abstractions are useful.” The abstractions here – ‘persona,’ ‘bias’ – are useful only insofar as they quantify the chaos. The auditor framework, attempting to verify facts, is a valiant effort, yet it feels like building a sandcastle against the tide. The system will crash, consistently, predictably. It always does. The core idea – understanding how misinformation propagates – isn’t new; the tooling is just a slightly more sophisticated way to watch the mess unfold.

What’s Next?

This exercise in simulating digital epidemics with increasingly verbose automata feels…familiar. The field chases ever-more-realistic agents, believing nuance will unlock some predictive power. Yet, the core problem remains: production social networks will always find novel ways to circumvent any model’s assumptions. The current framework, with its persona-based agents and auditor, simply adds layers of complexity to the already intractable problem of human belief. One suspects the ‘motivated reasoning’ module will require constant recalibration, chasing a moving target of emergent online behaviors.

The promise of ‘fact verification’ as a control mechanism feels particularly optimistic. History suggests that simply presenting evidence rarely alters deeply held convictions – online or otherwise. Future work will likely focus on quantifying the rate at which falsehoods are embraced, rather than imagining a scenario where they are reliably rejected. Perhaps the true metric isn’t factual fidelity, but the speed at which a network can agree on a narrative, regardless of its veracity.

Ultimately, this appears to be another elegantly constructed system destined to become tomorrow’s tech debt. The authors have built a fascinating sandbox, but one suspects the real world will offer infinitely more imaginative ways to break it. Everything new is just the old thing with worse docs.


Original article: https://arxiv.org/pdf/2511.10384.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/


2025-11-15 02:03