Author: Denis Avetisyan
New research shows artificial intelligence can accurately gauge public perceptions of support for climate initiatives across the world, offering a powerful tool for understanding and addressing perception gaps.
Large language models successfully predict global variations in beliefs about public willingness to contribute to climate change mitigation.
Despite widespread public support for climate action, a persistent underestimation of others’ willingness to contribute hinders meaningful progress. This research, presented in ‘Large language models accurately predict public perceptions of support for climate action worldwide’, investigates whether large language models (LLMs) can reliably assess these perception gaps globally. Findings demonstrate that LLMs, particularly Claude, accurately predict public perceptions of financial contributions to climate initiatives – performing comparably to statistical models and revealing a systematic downward bias rooted in social projection. Could this scalable, rapid assessment tool complement costly surveys and unlock more effective strategies for mobilizing climate action worldwide?
The Illusion of Agreement: Why We Think Everyone Else Is Like Us
Broad public engagement is essential for tackling climate change, yet a significant disparity often exists between stated willingness to participate in climate action and the actual levels of involvement observed. Research indicates that while many individuals express support for initiatives like reducing carbon footprints or adopting sustainable practices, this positive sentiment doesn’t consistently translate into behavioral change. This disconnect isn’t simply a matter of hypocrisy; it’s frequently rooted in inaccurate perceptions of societal norms and the extent to which others are also taking action. Consequently, the expectation that ‘everyone else’ isn’t doing their part can discourage individual efforts, creating a barrier to the collective action necessary for meaningful progress on climate goals. Understanding the drivers of this misalignment between stated intent and actual behavior is therefore crucial for designing effective strategies to mobilize widespread climate action support.
Research indicates a significant disparity between what people say they believe and what they believe others believe regarding climate action – a phenomenon known as the ‘Perception Gap’. Individuals consistently express personal willingness to engage in pro-environmental behaviors – their ‘First-Order Beliefs’ – yet simultaneously underestimate the commitment of their peers, forming inaccurate ‘Second-Order Beliefs’. This isn’t simply a matter of misjudgment; studies suggest people systematically assume lower levels of pro-environmental intent in others than they report for themselves. The consequence is a distorted social reality where perceived lack of collective support can discourage individual action, even among those genuinely committed to addressing climate change. Understanding this gap is crucial because individuals are heavily influenced by their perceptions of social norms and are more likely to act if they believe their efforts will be part of a broader, collective movement.
The principles of conditional cooperation suggest that contributions to collective efforts, such as climate action, are heavily influenced by expectations of others’ behavior. Research indicates individuals are far more inclined to participate when they believe their efforts will be reciprocated by the group; however, a widespread expectation that others will not contribute significantly diminishes personal willingness to act. This dynamic creates a self-defeating cycle, where pessimistic beliefs about collective participation actively reduce the likelihood of achieving the very cooperation needed to address shared challenges. Consequently, even individuals genuinely concerned about climate change may withhold support if they perceive a lack of commitment from peers, highlighting how perceptions of collective willingness – or the lack thereof – can be a crucial barrier to effective climate action.
Determining the extent of the perception gap – the difference between what individuals believe they will do to address climate change and what they believe others will do – is paramount to fostering broader climate action. Researchers are developing increasingly sophisticated methods, from large-scale surveys to behavioral experiments, to not only quantify this gap but also to predict how it will influence participation in initiatives like adopting renewable energy or supporting climate-friendly policies. Understanding these predictive patterns allows for the design of targeted interventions, such as communication strategies that accurately reflect the level of public commitment or highlight the actions of early adopters, thereby encouraging conditional cooperation. By accurately gauging and addressing these misperceptions, it becomes possible to move beyond simply identifying willingness and towards actively bolstering genuine, collective support for climate solutions.
Can Machines See What We Can’t? Modeling the Illusion of Consensus
Large Language Models (LLMs) are being investigated for their capacity to forecast ‘Perception Gaps’ – the discrepancies between how willing a population actually is to support climate action and how willing its members believe their fellow citizens to be. This approach diverges from conventional public sentiment analysis by leveraging the LLM’s ability to process and extrapolate from textual data representing diverse national contexts. The models analyze information pertaining to each country to predict how its population perceives the beliefs and commitments of others within it, thereby quantifying potential misalignments between actual and perceived support. This methodology represents a novel application of LLMs, moving beyond simple sentiment detection to the prediction of second-order beliefs, with potential applications in fields requiring an understanding of social norms and public opinion across countries.
Large Language Models employed in perception gap prediction utilize a two-pronged input strategy. First, Country-Level Indicators – such as Gross Domestic Product, internet penetration rates, levels of education, and political stability metrics – provide contextual data regarding a nation’s socioeconomic and technological landscape. Second, First-Order Beliefs, representing individuals’ own stated willingness to contribute to climate action, are incorporated as primary data points. The models then process these combined inputs to estimate what people within a country believe about the willingness of their fellow citizens to contribute, effectively modeling Second-Order Beliefs at the national level.
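To make this two-pronged input concrete, the short Python sketch below assembles country-level indicators and a first-order belief measure into a single prediction request. The field names, indicator values, and prompt wording are illustrative assumptions, not the study’s actual materials.

```python
# Minimal sketch: combine country-level indicators with first-order belief
# data into one prompt asking an LLM to estimate second-order beliefs.
# All field names and values below are illustrative, not the study's data.

def build_prompt(country: dict) -> str:
    return (
        f"Country: {country['name']}\n"
        f"GDP per capita (USD): {country['gdp_per_capita']}\n"
        f"Internet penetration: {country['internet_pct']}%\n"
        f"Share who say they personally would contribute 1% of their "
        f"income to fight climate change: {country['first_order_pct']}%\n\n"
        "What share of people in this country do its citizens, on average, "
        "BELIEVE would contribute? Reply with one number from 0 to 100."
    )

example = {
    "name": "Example Country",
    "gdp_per_capita": 45_000,
    "internet_pct": 90,
    "first_order_pct": 68,
}

print(build_prompt(example))
# In a full pipeline, this text would be sent to each model via its chat
# API and the numeric reply parsed as the predicted (second-order) perception.
```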
Traditional statistical regressions often rely on pre-defined relationships and linear assumptions to model perceptions, which can limit their accuracy when dealing with the non-linear and multifaceted nature of human beliefs. These methods typically require researchers to explicitly specify all relevant variables and their functional forms, potentially overlooking subtle contextual factors or interactions that significantly influence how individuals perceive the beliefs of others. In contrast, Large Language Models (LLMs) can implicitly learn these complex relationships from large volumes of text data, allowing them to capture nuanced patterns and dependencies that might be missed by conventional regression techniques. This capability is particularly relevant when modeling ‘Perception Gaps’, as these are shaped by a confluence of socio-economic indicators, cultural contexts, and individual cognitive biases – elements that are difficult to quantify and incorporate into traditional statistical frameworks.
Accurate forecasting of perception gaps – the discrepancies between how willing the public actually is to act on climate change and how willing people believe others to be – enables the identification of regions susceptible to targeted communication strategies. This approach allows for the prioritization of areas where misperceptions may hinder engagement with climate action initiatives. By understanding where beliefs diverge from reality, messaging can be tailored to correct inaccuracies and foster a more accurate understanding of the actual level of public support. This, in turn, aims to increase public support for, and participation in, climate mitigation and adaptation efforts, ultimately maximizing the impact of climate-related policies and programs.
Testing the Machine: How Well Do LLMs Predict the Unpredictable?
The evaluation encompassed four large language models – GPT-4o mini, Claude 3.5 Haiku, Gemini 2.5 Flash, and Llama 4 Maverick – to assess their capacity for predicting ‘Perception Gaps’. These gaps, defined as discrepancies between objective reality and public perception, were quantified using data sourced from the Gallup World Poll. The models were tasked with forecasting these gaps based on available input features, allowing for a comparative analysis of their predictive capabilities. The selection of these specific LLMs reflects their current status as state-of-the-art in natural language processing and their potential for scalable perception analysis.
Model performance in predicting ‘Perception Gaps’ was quantitatively evaluated using both Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) metrics. Predictions generated by each Large Language Model (LLM) were compared against established data from the Gallup World Poll, serving as the ground truth. RMSE provides a measure of the standard deviation of the residuals – the differences between predicted and observed values – and is sensitive to outliers. MAE, conversely, calculates the average of the absolute differences, offering a more robust measure less affected by extreme values. Utilizing both metrics provided a comprehensive assessment of the LLMs’ predictive accuracy and reliability when compared to the statistically validated Gallup data.
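The difference between the two metrics is easiest to see on toy numbers. The sketch below uses invented predictions rather than the study’s data, and shows how a single large miss inflates RMSE much more than MAE.

```python
import numpy as np

# Toy example: observed perception gaps (p.p.) vs. model predictions.
# Values are invented purely to illustrate the metrics.
observed  = np.array([10.0, 12.0,  8.0, 15.0, 11.0])
predicted = np.array([ 9.0, 13.0,  7.5, 15.5, 25.0])  # last value is an outlier

errors = predicted - observed
mae  = np.mean(np.abs(errors))        # average absolute miss
rmse = np.sqrt(np.mean(errors ** 2))  # squares the errors, so the outlier dominates

print(f"MAE  = {mae:.2f} p.p.")   # 3.40
print(f"RMSE = {rmse:.2f} p.p.")  # 6.30
```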
Evaluation of Large Language Models (LLMs) demonstrated predictive capability regarding perception gaps, as quantified by both Mean Absolute Error (MAE) and Pearson correlation. Specifically, the Claude 3.5 Haiku model achieved an MAE of 4.60 percentage points when predicting these gaps, indicating an average deviation of 4.60 p.p. between predicted and observed values. Furthermore, a Pearson correlation coefficient of r = 0.77 was observed, signifying a strong positive linear relationship between the LLM’s predictions and the perception gap data obtained from the Gallup World Poll. These metrics collectively suggest a substantial degree of accuracy in LLM-based prediction of public perception discrepancies.
Large Language Models (LLMs) present a viable alternative to traditional survey methods for assessing global public perception due to their scalability and cost-effectiveness. Evaluations using the Gallup World Poll data demonstrate that LLM-based predictions achieve performance comparable to Ordinary Least Squares (OLS) regression models; specifically, LLMs recorded a Root Mean Squared Error (RMSE) of 4.79 p.p., aligning with the RMSE observed in OLS models. This indicates that LLMs can generate insights into perception gaps on a global scale without the logistical and financial burdens associated with conducting large-scale surveys, offering a potentially more efficient method for gauging public opinion.
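As a rough illustration of that comparison, the sketch below fits an ordinary least squares baseline on the same kind of country-level features and scores both it and a set of LLM predictions with the same RMSE metric. The features, data, and stand-in “LLM” predictions are synthetic, invented only to show the shape of the evaluation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Invented country-level features: [GDP per capita (k USD), internet %].
X = rng.uniform([5, 30], [80, 99], size=(40, 2))
# Invented "true" perception gaps in percentage points.
y = 5 + 0.1 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 2, 40)

# OLS baseline trained on the same country-level features.
ols = LinearRegression().fit(X, y)
ols_rmse = np.sqrt(mean_squared_error(y, ols.predict(X)))

# Stand-in for LLM predictions (noisy truth, purely for illustration).
llm_pred = y + rng.normal(0, 2, 40)
llm_rmse = np.sqrt(mean_squared_error(y, llm_pred))

print(f"OLS RMSE: {ols_rmse:.2f} p.p.   LLM RMSE: {llm_rmse:.2f} p.p.")
```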
The Echo Chamber Within: Why We Think Everyone Else Is Like Us, And What We Can Do About It
The human tendency towards social projection – assuming others share one’s own beliefs, values, and perspectives – significantly influences how individuals perceive the views of those around them. Research indicates this cognitive shortcut can create substantial ‘perception gaps’, where assessments of others’ opinions diverge considerably from reality. This isn’t simply a matter of disagreement; rather, it’s a systematic bias stemming from the unconscious inclination to project internal states onto external actors. Consequently, individuals may falsely assume widespread support for their own positions, or conversely, underestimate the prevalence of opposing viewpoints, leading to miscommunication and hindering collaborative efforts across diverse groups.
Research indicates a systematic bias in how individuals perceive public support for climate action, wherein personal commitment strongly influences estimations of others’ willingness to participate. Those deeply invested in environmental advocacy often overestimate the prevalence of similar beliefs and behaviors in the broader population, while those less engaged tend to underestimate it. This phenomenon, known as social projection, creates perception gaps that can hinder collective action; an inflated sense of existing support may reduce motivation to mobilize others, while a perceived lack of concern can discourage engagement altogether. Consequently, interventions aimed at accurately gauging public opinion – rather than relying on assumptions – are crucial for building effective campaigns and fostering a more realistic understanding of collective climate consciousness.
Targeted interventions hold promise for mitigating perception gaps stemming from social projection. Research indicates that actively challenging an individual’s assumption that others share their beliefs – particularly regarding contentious issues like climate action – can significantly improve accuracy in assessing public opinion. These interventions often involve presenting data that contradicts pre-existing biases, or facilitating perspective-taking exercises that encourage individuals to consider alternative viewpoints. The efficacy of such approaches lies in their ability to disrupt the automatic process of projecting one’s own beliefs onto others, fostering a more nuanced and realistic understanding of collective attitudes. Ultimately, by recalibrating these perceptions, interventions can unlock greater potential for collaboration and effective collective action on critical issues.
Correcting the cognitive biases inherent in social projection offers a pathway to amplified collective action and, ultimately, a more sustainable future. When individuals accurately perceive the beliefs and motivations of others – rather than assuming shared viewpoints – cooperation becomes significantly more likely. This improved understanding facilitates the formation of stronger alliances, more effective communication, and a greater willingness to compromise on shared goals, such as mitigating climate change. Interventions designed to calibrate these perceptions, by providing accurate data on public opinion or encouraging perspective-taking, can dismantle the barriers to collaboration and unlock the potential for widespread, impactful change. By fostering a more nuanced and realistic assessment of collective will, societies can mobilize resources and implement solutions with greater efficiency and purpose, accelerating progress towards a resilient and equitable future.
The study’s findings, detailing how large language models mirror human tendencies toward pluralistic ignorance – believing others are less supportive of climate action than they actually are – feel less like innovation and more like a sophisticated echo of existing problems. It’s a clever application, certainly, but one that merely automates the misreading of social cues. As Brian Kernighan once observed, “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” This research feels similar: a complex solution to a problem created by overthinking how to gauge public sentiment. It efficiently maps perception gaps, but doesn’t address the fundamental flaw: assuming anyone accurately knows what anyone else thinks.
What’s Next?
The demonstrated capacity of large language models to map the contours of ‘pluralistic ignorance’ is, predictably, already prompting discussions of scalable social sensing. The temptation to replace expensive, slow-moving survey infrastructure with LLM-driven perception audits will prove difficult to resist. Yet, the elegance of predicting belief about belief doesn’t solve the underlying problem: these models are, at their core, sophisticated echo chambers. Any predictive power is predicated on the data they’ve consumed – and everything optimized will one day be optimized back, meaning the very biases these models currently reveal will become self-fulfilling prophecies if left unchecked.
The true challenge isn’t simply identifying perception gaps, but understanding their etiology. LLMs can correlate, but correlation isn’t causality. Future work must move beyond prediction and begin to explore the mechanisms driving these misperceptions – and, crucially, how those mechanisms vary across cultures and demographics. Architecture isn’t a diagram; it’s a compromise that survived deployment, and this field will quickly discover that a perfect map of public sentiment is far less useful than a working theory of social cognition.
Ultimately, this research highlights a familiar pattern: a tool initially lauded for its ability to reflect social reality will inevitably be tasked with altering it. The question isn’t whether LLMs can accurately predict what people think, but what happens when those predictions become inputs for the very systems they’re meant to observe. The field doesn’t refactor code – it resuscitates hope, and the hope that this technology will remain a neutral observer seems, at best, optimistic.
Original article: https://arxiv.org/pdf/2601.20141.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/