Beyond the Doomsday Scenario: Reframing the AI Risk Debate

Author: Denis Avetisyan


To build public support for AI governance, communication strategies must shift from focusing on distant existential threats to addressing immediate, tangible harms.

Despite growing discourse around artificial intelligence, public interest in existential risks, as measured by search trends, remains largely unmoved by key events within the AI safety community such as the CAIS statement or the Bletchley Park summit. This suggests a disconnect between expert concerns and broader public awareness, interrupted only by transient and ultimately inconsequential spikes in search activity, like the one observed in mid-August 2025.

This review argues that emphasizing ‘proximate’ risks—such as job displacement and psychological impacts—is more effective at mobilizing public opinion and driving policy change than highlighting abstract, long-term dangers.

Despite the growing need for public engagement in artificial intelligence (AI) governance, alarmist communication centered on existential risks has failed to generate sustained mobilization. This paper, ‘From Catastrophic to Concrete: Reframing AI Risk Communication for Public Mobilization’, investigates the psychological barriers to accepting such narratives and demonstrates that framing AI risks in terms of proximate harms—like job displacement or impacts on mental wellbeing—significantly increases public concern and willingness to act. Through message testing across five countries, the research identifies specific demographic segments receptive to this framing and suggests that focusing on everyday concerns can effectively raise the political salience of AI regulation. Can this shift in communication strategy ultimately create the necessary policy demand for meaningful AI risk mitigation?


The Algorithm’s Shadow: Risks from Utility to Extinction

AI technology is a double-edged sword, offering immense potential alongside significant societal risks, and its accelerating development demands careful consideration of potential harms. These risks range from immediate ‘Proximate Harms’, such as job displacement and misinformation, to long-term ‘Existential Risk’. Job displacement concerns center on automation, while misinformation erodes public trust; existential risk, though less widely discussed, refers to scenarios in which AI threatens humanity’s survival. Public perception is crucial: negative sentiment can stall development, while complacency can accelerate danger. Concerns about job displacement demonstrably outweigh those about existential risk, indicating that immediate harms carry greater public salience. The interplay between perceived and actual risk shapes AI innovation and demands ongoing dialogue and proactive mitigation.

Decoding the Human Response: Biases in the Machine of Perception

Public opinion regarding AI is not a singular construct; it emerges from a complex interplay of cognitive biases and emotional responses rather than purely rational assessment, and these biases shape perceptions of both opportunity and threat. Psychological theories offer frameworks for understanding these responses. Terror Management Theory suggests that anxieties surrounding mortality influence perceptions of AI risk: because AI can trigger unconscious concerns about loss of control, individuals may perceive existential threats where none objectively exist. Specific cognitive biases contribute further. Exponential Growth Bias leads people to underestimate AI’s potential impact because humans struggle to reason about accelerating change, while the Self-Reference Effect prioritizes personally relevant information, causing individuals to focus on immediate impacts and overlook broader implications.
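To make the scale of Exponential Growth Bias concrete, here is a toy calculation with made-up numbers rather than figures from the paper: if a capability doubles each period but an observer extrapolates the most recent increment linearly, the two estimates diverge dramatically.

```python
# Illustrative arithmetic for exponential growth bias (hypothetical numbers, not from the paper).
# A linear extrapolation of the first period's increment vastly understates a doubling process.
growth_per_period = 2.0      # assumed doubling each period
initial_capability = 1.0
periods = 10

# Linear intuition: add the first period's increment every period
linear_guess = initial_capability + periods * (initial_capability * (growth_per_period - 1))
# Actual exponential trajectory
actual = initial_capability * growth_per_period ** periods

print(f"Linear extrapolation after {periods} periods: {linear_guess:.0f}")
print(f"Actual exponential value: {actual:.0f}")
print(f"Underestimation factor: {actual / linear_guess:.1f}x")
```

After ten doublings the linear guess is off by roughly two orders of magnitude, which is the gap the bias describes.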

Mapping the Sentiment Landscape: Methods for Measuring Public Concern

Researchers employ statistical methods such as principal component analysis (PCA), Gaussian mixture models (GMM), and maximum difference scaling (MaxDiff) to assess public opinion: these techniques identify key themes, segment viewpoints, and prioritize preferences. MaxDiff analysis revealed that messaging focused on ‘Job Displacement’ achieved the highest average score, surpassing messages focused on ‘Existential Risk’ and ‘Inequality’, which suggests greater public concern about immediate economic impacts than about long-term risks. The ‘AI Safety Movement’ uses these methods to inform policy and advocate for responsible AI development, quantifying sentiment and identifying key concerns in order to tailor messaging and proposals.
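As a rough illustration of how MaxDiff preferences can be scored, the sketch below uses a simple count-based estimator (best picks minus worst picks, divided by exposures). The message labels and choice tasks are hypothetical, and the paper's actual estimation procedure may well differ (hierarchical Bayesian or logit models are common alternatives).

```python
# Minimal count-based MaxDiff scoring sketch (illustrative, not the paper's pipeline).
# Each respondent sees a subset of messages and picks the "best" and "worst";
# a simple utility estimate is (best picks - worst picks) / times shown.
from collections import Counter

# Hypothetical choice tasks
tasks = [
    {"shown": ["job_displacement", "existential_risk", "inequality"],
     "best": "job_displacement", "worst": "existential_risk"},
    {"shown": ["job_displacement", "child_welfare", "inequality"],
     "best": "child_welfare", "worst": "inequality"},
    {"shown": ["existential_risk", "child_welfare", "job_displacement"],
     "best": "job_displacement", "worst": "existential_risk"},
]

shown, best, worst = Counter(), Counter(), Counter()
for t in tasks:
    shown.update(t["shown"])
    best[t["best"]] += 1
    worst[t["worst"]] += 1

scores = {m: (best[m] - worst[m]) / shown[m] for m in shown}
for message, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{message}: {score:+.2f}")
```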

Principal component analysis of the raw survey responses reveals the correlation structure among responses and the variance explained by each resulting principal component.
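A minimal sketch of the kind of analysis the figure describes, using synthetic Likert-style responses rather than the study's data: standardize the responses, run PCA to inspect the variance explained per component, then segment respondents with a Gaussian mixture model.

```python
# Sketch of PCA + GMM segmentation on survey data (synthetic stand-in data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical raw responses: 500 respondents x 12 Likert items (1-5)
responses = rng.integers(1, 6, size=(500, 12)).astype(float)

X = StandardScaler().fit_transform(responses)

pca = PCA(n_components=5)
components = pca.fit_transform(X)
print("Variance explained per component:",
      np.round(pca.explained_variance_ratio_, 3))

# Segment respondents in the reduced space, e.g. into four audience clusters
gmm = GaussianMixture(n_components=4, random_state=0).fit(components)
segments = gmm.predict(components)
print("Segment sizes:", np.bincount(segments))
```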

Policy as Code: Guiding the Algorithm with Human Values

Effective policy formulation is crucial for mitigating risks and maximizing benefits: it informs AI regulation by establishing standards for responsible development and deployment, and a robust framework facilitates innovation while safeguarding against harm. Addressing proximate harms such as job displacement, misinformation, and mental health impacts requires proactive interventions, and public mobilization around these immediate concerns is a more effective strategy than focusing on hypothetical existential risks. Messaging concerning the welfare of children showed the lowest standard deviation across respondents, suggesting it resonates consistently across audiences. Key demographic segments represent opportunities for influencing policy: ‘Tech-Positive Urbanites’ (20.2 million people) and ‘World Guardians’ (31.2 million people) constitute substantial portions of the population receptive to AI policy changes, and understanding their values is vital for crafting persuasive narratives. Every patch is a philosophical confession of imperfection.

The study posits a pragmatic shift in AI risk communication, recognizing that abstract anxieties concerning existential threats fail to ignite meaningful public engagement. This approach aligns with a fundamental principle of systems analysis: understanding limitations requires probing boundaries. As G.H. Hardy observed, “The most profound knowledge is that which recognizes its own limitations.” The paper essentially argues that dwelling on distant, poorly understood risks obscures the immediate vulnerabilities—the ‘proximate harms’—that constitute the system’s current weaknesses. By focusing on demonstrable consequences like job displacement or psychological distress, the research advocates for exposing the system’s existing flaws to galvanize action and encourage effective AI governance.

What’s Next?

The shift in focus—from hypothesizing about superintelligence gone awry to documenting the creeping consequences of present-day AI—feels less like a solution and more like a controlled demolition of established risk narratives. The paper correctly identifies the psychological inertia favoring tangible threats, but doesn’t fully address the problem of adaptation. Humans are remarkably adept at normalizing even profound disruption. Each instance of job displacement, each algorithmic bias confirmed, becomes simply another data point in a new baseline, demanding ever-more-acute harms to register as ‘risk’.

Consequently, the field needs to move beyond simply identifying proximate harms and begin reverse-engineering the mechanisms of psychological accommodation. What cognitive shortcuts allow societies to absorb systemic shocks without meaningful course correction? What kinds of messaging actually break through that normalization? It’s not enough to show the cracks; one must understand why the structure doesn’t collapse.

Future work should also acknowledge the inherent messiness of governance. Policy isn’t built on neat classifications of ‘proximate’ versus ‘existential’—it’s forged in the friction between competing interests, bureaucratic inertia, and unforeseen consequences. Perhaps the most valuable research will involve not predicting the risks themselves, but mapping the failure modes of the systems intended to mitigate them. After all, the true test of a theory isn’t whether it predicts the expected, but whether it survives the unexpected.


Original article: https://arxiv.org/pdf/2511.06525.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/
