Author: Denis Avetisyan
A new study reveals how the Truth Social platform amplified unverified claims during the 2024 election, contributing to a volatile information environment.

Network analysis demonstrates that Truth Social served as a key propagation channel for election rumors, with Donald Trump identified as a central figure in the platform’s rumor ecosystem.
Despite growing concerns about online misinformation, quantifying the dynamics of rumor propagation in ideologically aligned social networks remains a significant challenge. This study, ‘Beyond the “Truth”: Investigating Election Rumors on Truth Social During the 2024 Election’, leverages large language models to analyze a novel dataset of election rumors on the alt-tech platform Truth Social, revealing that repeated exposure substantially increases the likelihood of sharing false claims. Our findings identify Donald Trump as a central figure in the platform’s rumor ecosystem and demonstrate rapid contagion effects within homogeneous networks. How can we better understand, and potentially mitigate, the spread of misinformation in these increasingly influential online spaces?
The Rising Tide: Mapping the Erosion of Truth
The 2024 election cycle witnessed an unprecedented rise in unsubstantiated claims and misleading narratives circulating across social media platforms, directly impacting public trust in the democratic process. A comprehensive analysis of 15 million posts originating from approximately 200,000 users reveals the sheer magnitude of this challenge. This data demonstrates a significant amplification of ‘election rumors’ (false or misleading information related to voting procedures, candidate integrity, or election outcomes), suggesting a coordinated effort to sow doubt and confusion. The proliferation of these narratives, often spread through echo chambers and algorithmic amplification, poses a critical threat to informed civic engagement and the integrity of elections. This extensive dataset provides a crucial baseline for understanding the scope and characteristics of online election misinformation, enabling targeted interventions and proactive strategies to mitigate its harmful effects.
The rapid dissemination of election rumors presents a significant challenge to conventional misinformation countermeasures. An analysis of over 15 million posts revealed that approximately 0.67% were classified as containing election-related falsehoods. That proportion may seem small, but at this scale it amounts to roughly 100,000 individual posts, underscoring the pervasiveness of the issue across the digital landscape. Existing fact-checking initiatives and content moderation strategies are increasingly strained by the volume and velocity of these campaigns, struggling to debunk false narratives before they gain traction and influence public opinion. This lag between the spread of misinformation and its correction creates fertile ground for distrust and erodes faith in the electoral process, demanding approaches to detection and intervention that can match the speed and scale of online disinformation efforts.
The emergence of platforms like Truth Social introduces complexities to the fight against election misinformation that demand specific investigation. Unlike more established social networks, Truth Social’s user base exhibits a heightened propensity for sharing unverified claims and conspiracy theories, amplified by a content moderation approach that prioritizes free speech, even when demonstrably false. This environment fosters the rapid dissemination of election rumors, creating echo chambers where unsubstantiated narratives gain traction and influence. Consequently, standard misinformation detection techniques, often reliant on flagging content based on keywords or source reliability, prove less effective. A focused analytical strategy, accounting for the platform’s unique algorithmic structure and user demographics, is crucial to understanding the spread of false information and mitigating its potential impact on public trust and electoral processes.

The Rumor Detection Agent: A Tiered Defense
The Rumor Detection Agent is a tiered system implemented to address misinformation on the Truth Social platform. This architecture is designed to efficiently identify potentially false claims by employing multiple stages of analysis, rather than relying on a single, computationally expensive process. Initial stages focus on broad filtering to reduce the volume of content requiring detailed investigation. Subsequent stages then apply more refined techniques to verify claims and categorize them, allowing for a scalable and cost-effective approach to misinformation detection. This multi-stage design allows the system to prioritize content for deeper analysis, improving overall efficiency and responsiveness.
The Rumor Detection Agent utilizes RoBERTa, a robust pre-trained transformer model, as its primary filtering mechanism. RoBERTa is applied to incoming content from Truth Social to rapidly assess and discard posts with a low probability of containing misinformation, thereby significantly decreasing the workload for subsequent, more computationally expensive analysis stages. This initial filtering process focuses on identifying linguistic patterns and contextual cues indicative of potentially false claims, allowing the system to prioritize content requiring detailed verification. The model’s pre-trained nature enables efficient processing without requiring extensive fine-tuning for the specific characteristics of Truth Social data.
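A minimal sketch of this first-stage filter, assuming a Hugging Face text-classification pipeline, looks something like the following. Note that `roberta-base` is a stand-in for the study’s checkpoint, and the label scheme and confidence cutoff are our assumptions, not the paper’s released configuration.

```python
# Minimal sketch of the first-stage rumor filter. "roberta-base" is a
# stand-in for the study's fine-tuned checkpoint; the LABEL_1/threshold
# scheme below is an assumption for illustration.
from transformers import pipeline

FILTER_THRESHOLD = 0.5  # hypothetical confidence cutoff

classifier = pipeline("text-classification", model="roberta-base")

def first_stage_filter(posts: list[str]) -> list[str]:
    """Keep only posts flagged as potential rumors for the LLM stage."""
    kept = []
    for post in posts:
        result = classifier(post, truncation=True)[0]
        # Assumed label scheme: LABEL_1 marks potential misinformation.
        if result["label"] == "LABEL_1" and result["score"] >= FILTER_THRESHOLD:
            kept.append(post)
    return kept
```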
Following initial filtering, Large Language Models (LLMs) are utilized to assess the veracity of claims identified within Truth Social content. These LLMs categorize claims based on established guidelines from the Cybersecurity and Infrastructure Security Agency (CISA) Rumor Categories, which provide a standardized framework for misinformation classification. This two-stage approach, combining initial RoBERTa filtering with subsequent LLM analysis, demonstrably reduces computational expense; specifically, LLM processing costs are reduced by 93% compared to analyzing all content directly with the LLM.
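As a rough sketch of the second stage, the snippet below sends only filtered posts to an OpenAI-compatible chat API. The category names are a paraphrased, illustrative subset of the CISA taxonomy, and the model name is a placeholder rather than the study’s actual choice.

```python
# Hedged sketch of the LLM categorization stage. The category list is
# an illustrative paraphrase of CISA rumor categories; "gpt-4o-mini"
# is a placeholder, not necessarily the model used in the study.
from openai import OpenAI

CATEGORIES = [
    "voting process", "ballot integrity", "election results",
    "cyber incidents", "not an election rumor",
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_claim(post: str) -> str:
    """Map one filtered post onto a single rumor category."""
    prompt = (
        "Classify the following social-media post into exactly one of "
        f"these categories: {', '.join(CATEGORIES)}.\n\nPost: {post}\nCategory:"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# Only posts surviving first_stage_filter() reach this call; discarding
# the bulk of the stream up front is what drives the reported 93%
# reduction in LLM processing cost.
```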

Mapping the Cascade: Influence and Propagation
Analysis of platform data demonstrates a strong correlation between user connectivity and the propagation of false claims. Individuals with a significantly higher number of connections – identified through network analysis of follower/following relationships and interaction frequency – consistently exhibited a disproportionate influence on rumor dissemination. These highly connected users, acting as central nodes in the information network, amplified false claims to a larger audience, resulting in increased visibility and accelerated spread. Quantitatively, users in the 90th percentile for network connections were, on average, 3.5 times more likely to share unverified information compared to users in the 10th percentile. This suggests that interventions targeting highly influential users may be particularly effective in mitigating the spread of false information.
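The percentile comparison itself takes only a few lines of pandas. The synthetic degree distribution and sharing model below are our own and will not reproduce the reported 3.5x figure, but the split mirrors the analysis described.

```python
# Illustrative percentile comparison on synthetic data; the degree
# distribution and sharing model are invented for demonstration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
connections = rng.pareto(1.5, n) * 50  # heavy-tailed degrees, as in social graphs
p_share = 1 / (1 + np.exp(-(np.log1p(connections) - 4)))  # toy sharing model
users = pd.DataFrame({
    "connections": connections,
    "shared_unverified": rng.random(n) < p_share,
})

hi = users["connections"] >= users["connections"].quantile(0.90)
lo = users["connections"] <= users["connections"].quantile(0.10)
ratio = users.loc[hi, "shared_unverified"].mean() / users.loc[lo, "shared_unverified"].mean()
print(f"sharing-rate ratio, 90th vs 10th percentile: {ratio:.1f}x")
```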
Information cascades within the platform manifest as a pattern of user behavior where individuals base their acceptance and dissemination of claims on the observed actions of preceding users, rather than independent evaluation of the claim’s accuracy. This process occurs even when the initial users have limited or no supporting evidence. The observed data indicates a significant proportion of shares originated from users who had only interacted with the claim through other shares, creating a reinforcing cycle. This propagation is not necessarily indicative of widespread belief, but rather a response to perceived social consensus, amplifying both accurate and inaccurate information with similar efficiency.
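A toy Pólya-urn model makes the mechanism concrete: each arriving user decides whether to share based only on the perceived consensus among earlier users, never on the claim itself. This simplified copy-the-crowd rule is our own illustration, not the paper’s model.

```python
# Toy cascade: share probability equals the observed fraction of prior
# sharers, so early random shares get locked in and amplified.
import random

def run_cascade(n_users: int = 200, seed_shares: int = 2, seed_total: int = 10) -> float:
    shares, total = seed_shares, seed_total
    for _ in range(n_users):
        if random.random() < shares / total:  # copy the perceived consensus
            shares += 1
        total += 1
    return shares / total  # final fraction of sharers

random.seed(1)
print(f"final share fraction: {run_cascade():.2f}")
```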
Analysis of platform data demonstrates a statistically significant correlation between rumor repetition and perceived credibility, a phenomenon consistent with the ‘Illusory Truth Effect’. Users exposed to a claim multiple times, regardless of its factual basis, consistently rated it as more believable than claims encountered only once. This effect was particularly pronounced within established user groupings, or ‘echo chambers’, where repeated sharing and reinforcement amplified the perceived validity of false or misleading information. The observed increase in credibility was not dependent on source authority, indicating that sheer repetition is a primary driver of belief within the platform’s information environment.
Geographic analysis of rumor propagation revealed the presence of localized activity clusters across the platform. Statistical analysis demonstrated a correlation coefficient of 0.34 between the rate of rumor sharing within a state and that state’s election margin. This suggests a relationship between political polarization and the susceptibility of a population to unverified claims, though it does not establish causation. Further investigation is required to determine the factors driving this correlation and to assess whether rumor dissemination played a role in influencing electoral outcomes. Data was aggregated at the state level to account for variations in population density and platform usage.
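The state-level check reduces to a Pearson correlation over two aligned per-state arrays. The sketch below uses synthetic placeholder data, not the study’s measurements.

```python
# Pearson correlation between per-state rumor-sharing rate and election
# margin. Both arrays here are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n_states = 50
election_margin = rng.uniform(0, 40, n_states)  # winning margin, in points
rumor_rate = 0.01 * election_margin + rng.normal(0, 0.3, n_states)

r, p = pearsonr(rumor_rate, election_margin)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")  # the study reports r = 0.34
```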

Simulating the System: Modeling the Spread
A novel network simulation was developed to investigate the dynamics of rumor propagation on the Truth Social platform. This computational model replicates the social network’s structure and user interactions, allowing researchers to observe how information – and misinformation – spreads. Crucially, the simulation incorporates a parameter known as the ‘Exposure Threshold,’ representing the number of times a user must encounter a claim before accepting it as true and potentially sharing it further. By adjusting this threshold, alongside other variables such as user influence and network connectivity, the model can realistically mimic the conditions that facilitate or hinder the viral spread of rumors, providing a controlled environment for testing intervention strategies and understanding the underlying mechanisms of online misinformation campaigns.
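Assuming the model behaves like a standard threshold-contagion process, a minimal version can be sketched with networkx. The graph topology, size, and threshold value below are placeholders; only the ‘Exposure Threshold’ concept comes from the paper.

```python
# Minimal threshold-contagion sketch. The scale-free graph stands in
# for the Truth Social network; EXPOSURE_THRESHOLD mirrors the paper's
# parameter, but its value here is illustrative.
import networkx as nx

EXPOSURE_THRESHOLD = 2  # exposures required before a user shares

def simulate_spread(G: nx.Graph, seeds: set, threshold: int = EXPOSURE_THRESHOLD) -> set:
    """Run the cascade to fixation and return the set of sharing nodes."""
    sharing = set(seeds)
    changed = True
    while changed:
        changed = False
        for node in G.nodes:
            if node in sharing:
                continue
            exposures = sum(1 for nb in G.neighbors(node) if nb in sharing)
            if exposures >= threshold:
                sharing.add(node)
                changed = True
    return sharing

G = nx.barabasi_albert_graph(5_000, 5, seed=42)
seeds = set(sorted(G.nodes, key=G.degree, reverse=True)[:10])  # influential seeders
print(f"final reach: {len(simulate_spread(G, seeds))} of {G.number_of_nodes()}")
```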
The network simulation facilitated a systematic investigation into the efficacy of various interventions aimed at curbing the spread of misinformation. Researchers modeled the impact of strategies like fact-checking, which attempts to correct false claims after they have circulated, and content moderation, involving the removal or flagging of problematic posts. The simulation revealed that the timing of these interventions is critical; reactive measures, while helpful, often struggle to contain rapidly propagating rumors. However, the model also highlighted the potential of proactive approaches, specifically ‘pre-bunking’, where debunking information is disseminated before false claims gain traction. This anticipatory strategy proved significantly more effective at limiting rumor spread, demonstrating that addressing misinformation at its source, or even preemptively, can substantially reduce its overall influence within the social network.
The simulation revealed a substantial benefit to preemptively addressing misinformation through a strategy termed ‘pre-bunking’. Rather than reacting to false claims after they gain traction, this approach focuses on proactively debunking them before widespread dissemination occurs. Results indicate that strategically timed pre-bunking interventions can significantly curtail the reach of rumors, effectively inoculating the network against their influence. The model demonstrated that even a limited application of pre-bunking, addressing only a fraction of initial false claims, yielded a disproportionately large reduction in overall rumor propagation, suggesting that early intervention is far more effective than attempting to control misinformation once it has already begun to spread virally through the social network.
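Continuing the simulation sketch above, pre-bunking can be approximated by immunizing a small set of nodes before the rumor is seeded; an immunized node never shares, so cascades must route around it. The choice of targets and the 2% immunization fraction are illustrative, not values from the paper.

```python
# Pre-bunking as immunization. Reuses G, seeds, and EXPOSURE_THRESHOLD
# from the sketch above; immune nodes never share.
def simulate_prebunked(G, seeds, immune, threshold=EXPOSURE_THRESHOLD):
    sharing = set(seeds) - immune
    changed = True
    while changed:
        changed = False
        for node in G.nodes:
            if node in sharing or node in immune:
                continue
            if sum(1 for nb in G.neighbors(node) if nb in sharing) >= threshold:
                sharing.add(node)
                changed = True
    return sharing

# Pre-bunk the most influential non-seed users: the next 2% by degree.
ranked = sorted(G.nodes, key=G.degree, reverse=True)
immune = set(ranked[10:10 + len(G) // 50])
print(f"reach with pre-bunking: {len(simulate_prebunked(G, seeds, immune))}")
```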
Analysis of rumor propagation on Truth Social revealed a stark power dynamic in information dissemination. Posts originating from highly influential figures, specifically Donald Trump, demonstrated a disproportionately large impact, accumulating a cumulative rumor influence score of 208.2M. This figure nearly doubles the score of the platform’s second most influential user, highlighting how established authority can rapidly amplify unverified claims. The study indicates that content shared by these key individuals bypasses typical filtering mechanisms, achieving broader reach and significantly contributing to the overall spread of misinformation, regardless of factual accuracy. This suggests that interventions targeting influential accounts may be particularly effective – or conversely, that these accounts require especially diligent monitoring – to mitigate the impact of false narratives.
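On one plausible reading, the cumulative influence score sums, over a user’s rumor-classified posts, the audience each post reached. The aggregation rule below is our assumption, and the rows are toy values, not the study’s data.

```python
# Hedged sketch of per-user cumulative rumor influence: summing the
# audience reached by each rumor-classified post. The aggregation rule
# and all numbers are illustrative assumptions.
from collections import defaultdict

rumor_posts = [
    ("user_a", 1_200_000),  # (author, accounts reached) — toy rows
    ("user_a", 800_000),
    ("user_b", 950_000),
]

influence = defaultdict(int)
for author, reached in rumor_posts:
    influence[author] += reached

for author, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{author}: {score:,}")
```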

The study of Truth Social reveals a landscape far removed from simple information exchange; it’s an ecosystem where narratives, even unfounded ones, take root and flourish with alarming speed. Repeated exposure, as the research demonstrates, doesn’t illuminate truth; it cultivates belief. This echoes a fundamental principle of complex systems: growth, not construction, defines their evolution. As Barbara Liskov once observed, “It’s one of the great failures of the computer industry that we still have these incredibly complex systems that are built by individuals.” This complexity, mirrored in the network of rumor propagation on Truth Social, isn’t a bug; it’s a feature, and one that demands understanding beyond mere technical solutions. The platform isn’t merely broadcasting falsehoods; it’s actively growing them.
The Seeds We Sow
This examination of Truth Social’s rumor ecosystem offers a glimpse, not an ending. The platform, as revealed, does not merely host falsehoods; it cultivates them. The amplification observed isn’t a glitch in the system, but its intended function. Every share, every repost, is a drop of water to a seed already sown. Attempts to ‘detect’ these rumors, to isolate them for correction, feel increasingly like trying to hold back the tide with a sieve. The architecture isn’t broken; it’s fulfilling its prophecy.
Future work will undoubtedly refine the models for rumor identification, seeking ever-greater precision. But the true challenge lies not in identifying the symptoms, but in understanding the underlying conditions that allow them to flourish. The centrality of a single actor within this network suggests a need to move beyond analyses of content, and toward investigations of influence – of the subtle cues and signals that shape belief.
One suspects that any intervention will be met with adaptation, with the ecosystem simply evolving around the constraints imposed upon it. The system doesn’t yield to correction; it learns from it. The focus, therefore, must shift from control to comprehension – from attempting to halt the flow, to understanding where the river is headed, and what lies at its source.
Original article: https://arxiv.org/pdf/2601.04631.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/