Author: Denis Avetisyan
New research shows that simulated people powered by artificial intelligence react to misinformation much like humans do.

Agent-based modeling, driven by large language models and incorporating cognitive biases, accurately simulates human vulnerability to misinformation regardless of professional expertise.
Understanding how populations respond to misinformation is critical yet hampered by the impracticality of real-world experimentation. This paper, ‘Simulating Misinformation Vulnerabilities With Agent Personas’, introduces an agent-based simulation leveraging Large Language Models to model individual responses to deceptive content. Findings demonstrate that these LLM-generated agents accurately reflect human reactions, with cognitive biases proving more influential than professional background in interpreting information. Could this approach provide a scalable framework for analyzing trust, polarization, and susceptibility within complex information networks?
The Inevitable Decay of Truth
The proliferation of misinformation poses a substantial threat to informed discourse and societal stability. False narratives erode trust, polarize communities, and can incite harm. The speed and scale of digital dissemination overwhelm conventional fact-checking: manual approaches cannot keep pace, and automated systems still struggle to identify nuanced or context-dependent falsehoods.
Understanding how individuals process information is crucial. Cognitive biases, emotional reasoning, and social networks all influence the spread of falsehoods. Individuals tend to believe and share information confirming existing beliefs, even if demonstrably false. Effective interventions require a deeper understanding of these psychological and social factors. Like all complex systems, the information ecosystem is subject to entropy – and the current rate of decay demands that we consider not simply what is false, but how falsehoods age and propagate through collective memory.

The Architecture of Belief
Individuals do not process information objectively; pre-existing mental schemas heavily influence interpretation and acceptance. These schemas act as filters, prioritizing information consistent with existing beliefs and downplaying contradictions. This introduces systematic bias in understanding the world.
Framing Theory demonstrates that how information is presented shapes perception, not just what it contains. Equivalent information, framed differently, can elicit drastically different responses: a treatment described by its 90% survival rate is judged more favorably than the same treatment described by its 10% mortality rate, even though the underlying probabilities are identical. Emphasizing gains versus losses changes decisions without changing the facts.
Cognitive biases distort judgment, creating vulnerabilities to manipulation and reinforcing prejudice. Numerous biases, including confirmation bias, anchoring, and the availability heuristic, operate unconsciously, producing predictable errors in reasoning. These biases are not random; they are consistent deviations from rational thought.
Simulating the Collective Mind
Agent-Based Modeling (ABM) simulates complex systems by creating a virtual population of autonomous individuals. Each agent possesses unique characteristics and cognitive biases that influence its decision-making. This allows researchers to explore emergent phenomena and collective behaviors that are difficult to capture with traditional methods.
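To make the setup concrete, the sketch below shows one way such a population might be represented in code. Every field name and value here is an illustrative assumption, not the paper's actual schema; the point is simply that each agent carries both a professional background and explicit bias parameters.

```python
from dataclasses import dataclass

@dataclass
class AgentPersona:
    """One simulated individual. Field names are illustrative assumptions,
    not the schema used in the paper."""
    agent_id: str
    profession: str           # professional background, e.g. "journalist"
    worldview: str            # short free-text description used in prompts
    conspiracy_belief: float  # 0.0-1.0: tendency toward conspiratorial thinking
    susceptibility: float     # 0.0-1.0: general vulnerability to misinformation

# A tiny synthetic population mixing professions and bias levels
population = [
    AgentPersona("a1", "science journalist", "trusts peer review", 0.1, 0.2),
    AgentPersona("a2", "retail worker", "distrusts mainstream media", 0.7, 0.8),
    AgentPersona("a3", "nurse", "pragmatic, time-pressed reader", 0.3, 0.5),
]
```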
Recent advances in Large Language Models (LLMs), such as LLaMA 3.1 8B Instruct and GPT-4, provide a powerful engine for generating realistic agent behaviors. These models enable more sophisticated ABM simulations, going beyond pre-programmed rules to incorporate contextual understanding and probabilistic reasoning.
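In an LLM-driven version of this setup, each persona is typically folded into the model's prompt so that the same headline is judged from different vantage points. The sketch below continues the previous example; the prompt wording is invented for illustration, and the model call is left as a placeholder rather than a reproduction of the paper's actual pipeline.

```python
def build_prompt(agent: AgentPersona, headline: str) -> str:
    """Fold the persona into a prompt for the LLM. Wording is illustrative."""
    return (
        f"You are a {agent.profession}. Your worldview: {agent.worldview}. "
        f"On a scale of 0 to 1, you are {agent.conspiracy_belief:.1f} inclined "
        f"toward conspiratorial explanations and {agent.susceptibility:.1f} "
        f"susceptible to misleading claims.\n\n"
        f"Headline: \"{headline}\"\n"
        "Answer with two numbers between 0 and 1: "
        "BELIEF (how much you believe the headline) and "
        "SHARE (how likely you are to share it)."
    )

def query_llm(prompt: str) -> str:
    """Placeholder for a call to LLaMA 3.1 8B Instruct, GPT-4, or similar.
    Returns a canned reply here; swap in a real inference client."""
    return "BELIEF: 0.8 SHARE: 0.6"

# Usage:
# prompt = build_prompt(population[1], "Scientists admit the moon landing was staged")
# reply = query_llm(prompt)
```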
A study assessed LLM-simulated agents in misinformation detection, demonstrating their ability to approximate human decision-making. Six out of eight agents consistently outperformed human annotators in identifying false information.

Predicting the Currents of Disinformation
Simulations demonstrate that Belief in Headline and Likelihood to Share are influenced by agent characteristics and network structure, including pre-existing beliefs, susceptibility to misinformation, and network connectivity. Variations in these factors lead to significant differences in information processing and dissemination.
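A minimal sketch of what such a simulation loop might look like, again building on the earlier persona example: each exposed agent forms a belief score for a headline, shares it with a probability tied to that belief and its susceptibility, and shared headlines reach its network neighbours. The threshold rules and coefficients below are assumptions for illustration, not the paper's calibrated model.

```python
import random

# Toy network: adjacency list over the agent ids from the earlier population sketch
network = {"a1": ["a2"], "a2": ["a1", "a3"], "a3": ["a2"]}

def simulate_spread(agents, network, headline_is_false, seed_id, rounds=3):
    """Propagate a headline through the network. Belief and share rules are
    illustrative toy rules, not the paper's model."""
    by_id = {a.agent_id: a for a in agents}
    exposed = {seed_id}
    for _ in range(rounds):
        newly_exposed = set()
        for aid in exposed:
            agent = by_id[aid]
            # Toy rule: biased agents assign higher credence to false headlines
            belief = 0.5 + (0.3 * agent.susceptibility + 0.2 * agent.conspiracy_belief
                            if headline_is_false else -0.2 * (1 - agent.susceptibility))
            share_prob = belief * agent.susceptibility
            if random.random() < share_prob:
                newly_exposed.update(network.get(aid, []))
        exposed |= newly_exposed
    return exposed

# e.g. simulate_spread(population, network, headline_is_false=True, seed_id="a2")
```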
Misinformation detection achieved over 63% accuracy, with six LLM-generated agents surpassing human annotators. Analysis revealed patterns in agreement: conspiracy-believing and susceptible agents agreed 53% of the time, while agreement between conspiracy-believing and normal agents was lower at 33%.
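The agreement figures quoted above reduce to a simple measure: for two agents judging the same set of headlines, agreement is the fraction of headlines on which they return the same verdict. A minimal sketch, with invented verdict lists, is shown below.

```python
def agreement_rate(verdicts_a, verdicts_b):
    """Fraction of items on which two agents give the same true/false verdict."""
    assert len(verdicts_a) == len(verdicts_b)
    matches = sum(a == b for a, b in zip(verdicts_a, verdicts_b))
    return matches / len(verdicts_a)

# Invented example verdicts over six headlines (True = "headline is credible")
conspiracy_agent = [True, True, False, True, False, True]
susceptible_agent = [True, False, False, True, True, True]
print(agreement_rate(conspiracy_agent, susceptible_agent))  # ~0.67 in this toy case
```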

Understanding these dynamics offers the potential to design targeted interventions mitigating misinformation and encouraging informed decision-making. Like a chronicle slowly unfolding, the propagation of belief reveals the underlying architecture of trust and skepticism within a system.
The study reveals a fascinating truth about the propagation of misinformation: the underlying cognitive architecture of an agent—its predispositions and biases—often outweighs expertise in determining susceptibility. This echoes a principle of enduring systems: their eventual state isn’t dictated by initial design, but by the accumulation of interactions and inherent vulnerabilities. As Barbara Liskov observed, “It’s one of the really difficult things about systems—you want to be able to change them without breaking them.” The agent-based modeling detailed in this work illustrates how even seemingly rational agents, built upon large language models, exhibit predictable patterns of flawed reasoning, demonstrating that robust design must account for the inevitable ‘decay’ introduced by biased information and the limitations of any schema.
What’s Next?
The demonstrated capacity of Large Language Model agents to mimic susceptibility to misinformation does not resolve the underlying problem; it merely shifts the arena. The simulations highlight the potency of cognitive bias – a predictable flaw in any system attempting to process information. To focus solely on ‘correcting’ these biases feels akin to rearranging deck chairs; the ship still sails toward entropy. The true limitation isn’t the model’s accuracy, but the tacit assumption that stability is achievable. These agent-based models, while insightful, operate within a closed system. The real world introduces exogenous shocks – novel narratives, shifting social contexts – that will inevitably reveal the brittleness of even the most robust simulated schemas.
Future work will likely center on increasing model complexity – incorporating more nuanced representations of belief, emotion, and social interaction. Yet, increasing fidelity may simply illuminate previously hidden points of failure. A more fruitful, if less comforting, path involves acknowledging the inherent temporality of information ecosystems. Systems don’t fail because of accumulated errors, but because time itself is the ultimate disruptor.
The question, then, isn’t how to prevent the spread of misinformation, but how to build systems that degrade gracefully as they are compromised. Research should explore adaptive mechanisms – methods for detecting and mitigating the effects of narrative decay – rather than striving for an impossible state of perpetual informational purity. Sometimes, stability is just a delay of disaster.
Original article: https://arxiv.org/pdf/2511.04697.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/