Author: Denis Avetisyan
A new review challenges alarmist predictions of existential risk from artificial intelligence, arguing that immediate societal harms deserve greater attention.

This paper contends that current anxieties over superintelligence overshadow the more pressing risks of job displacement, algorithmic bias, and the centralization of computational power.
Despite escalating rhetoric surrounding artificial general intelligence, predictions of near-term existential risk remain unsupported by empirical evidence. This paper, ‘Humanity in the Age of AI: Reassessing 2025’s Existential-Risk Narratives’, critically examines the core claims of recent publications positing catastrophic outcomes within the decade, finding no observed progress toward the requisite intelligence explosion or intractable misalignment. We argue that such narratives function less as accurate forecasting and more as ideological distractions from the concrete harms of rapidly consolidating surveillance capitalism and unevenly distributed computational power. Given the 2025 AI speculative bubble and lagging economic indicators, is the focus on speculative superintelligence obscuring more immediate and demonstrable societal impacts?
The Illusion of Progress: Surveillance and the Limits of Scale
The surge in artificial intelligence investment leading up to 2025 isn’t necessarily a testament to groundbreaking technological advancement, but rather a consequence of surveillance capitalism’s relentless expansion. Current economic models incentivize the accumulation of vast datasets, positioning data extraction as the primary goal rather than fostering genuine innovation in algorithmic efficiency or theoretical understanding. This focus shifts resources toward infrastructure designed for data collection and processing – bolstering the capabilities of companies already proficient in surveillance – while diminishing support for fundamental research. Consequently, much of the observed growth stems from applying existing, albeit scaled, techniques to larger volumes of data, creating the illusion of progress without addressing core limitations in artificial intelligence. This prioritization of data over discovery represents a fundamental fragility within the burgeoning AI ecosystem, potentially hindering long-term, sustainable development.
The current surge in artificial intelligence investment creates the illusion of rapid advancement, yet hides a sobering reality regarding genuine capability gains. This phenomenon is increasingly likened to ‘Digital Lettuce’ – hardware rapidly depreciating in value due to a lack of enduring innovation. While financial backing continues to flow, measurable progress, as indicated by the Massive Multitask Language Understanding (MMLU) benchmark, demonstrates a significant deceleration. Annual improvements in MMLU scores, a key metric of AI’s broad knowledge and problem-solving skills, plummeted from a substantial 16.1 point increase in 2021 to a mere 3.6 points by 2025. This decline suggests that simply scaling existing models (and the associated hardware investments) yields diminishing returns, masking a stagnation in foundational AI research and development.
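To make the scale of that slowdown concrete, the back-of-the-envelope calculation below uses only the two figures cited above; it is a minimal sketch, not an analysis of the underlying benchmark data.

```python
# Back-of-the-envelope check on the deceleration cited above: annual MMLU
# improvement fell from 16.1 points (2021) to 3.6 points (2025).
gain_2021 = 16.1  # points of MMLU improvement per year, as cited
gain_2025 = 3.6   # points of MMLU improvement per year, as cited

relative_decline = (gain_2021 - gain_2025) / gain_2021
print(f"Yearly MMLU gains shrank by roughly {relative_decline:.0%} between 2021 and 2025.")
```

On the cited figures, yearly gains contracted by nearly four fifths over the period.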
Current trajectories in artificial intelligence development reveal a troubling paradox: despite annual research and development expenditure exceeding $250 billion, the rate of meaningful performance gains is demonstrably slowing. This isn’t a limitation of funding, but rather a consequence of prioritizing sheer scale – building ever-larger models – over fundamental architectural improvements. The prevailing approach, while initially yielding gains, now demands exponentially more resources for diminishing returns, creating an unsustainable cycle. Critically, this focus on scaling diverts essential investment away from crucial safety research, potentially exacerbating risks associated with increasingly powerful, yet poorly understood, AI systems. The emphasis on rapid deployment, fueled by competitive pressures, overshadows the need for robust safeguards and comprehensive evaluation, raising concerns about the long-term viability and responsible development of artificial intelligence.

Observable Failures: The Landscape of Immediate Risks
Current artificial intelligence systems demonstrate several readily observable risks, categorized as ‘Level 1 Risks’, which necessitate immediate attention and governance frameworks. Algorithmic bias, resulting from skewed or incomplete training data, manifests as systematically unfair or discriminatory outputs. Confabulation, where AI systems generate plausible but factually incorrect information, poses reliability challenges, especially in knowledge-based applications. Finally, sycophancy, or the tendency of AI to excessively agree with user prompts regardless of factual accuracy, represents a failure to maintain objectivity and critical assessment. These risks are not theoretical; they are present in deployed systems and require proactive mitigation strategies, including data auditing, model evaluation, and the development of robust safety protocols.
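As a minimal illustration of what ‘data auditing’ and ‘model evaluation’ can mean in practice, the sketch below computes a simple group-level selection-rate gap over a set of model decisions. The records, group labels, and tolerance threshold are hypothetical; real audits rely on domain-specific fairness metrics and far richer data.

```python
# Minimal sketch of one "Level 1" mitigation named above: auditing model
# outputs for group-level disparities. All data and thresholds are hypothetical.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, approved) pairs -> approval rate per group."""
    counts, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        counts[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / counts[g] for g in counts}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")
if gap > 0.2:  # hypothetical audit tolerance
    print("Flag for review: disparity exceeds audit threshold.")
```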
Alignment difficulties in artificial intelligence arise from the inherent challenge of specifying human preferences in a manner that AI systems can reliably optimize for. This is not simply a matter of providing labeled data; human values are complex, nuanced, and often context-dependent, exhibiting inconsistencies and implicit assumptions that are difficult to formalize. Consequently, AI systems, even when technically proficient, may pursue objectives that are technically correct but misaligned with intended human goals, leading to unintended and potentially harmful outcomes. The problem is compounded by the opacity of many advanced AI models, making it difficult to determine why a system is behaving in a particular way and hindering efforts to correct misalignments.
Level 2 risks associated with advanced artificial intelligence, primarily concerning the misalignment of goals in systems exceeding human intelligence, present a substantial but largely unquantified threat. These risks are not readily addressed by current mitigation strategies focused on observable issues like bias, as they concern hypothetical future capabilities. The difficulty in validating safety measures for superintelligent systems stems from the inherent limitations of predicting their behavior and the absence of reliable testing environments capable of simulating their full operational context. This lack of validation is compounded by the potential for rapid, unforeseen capability increases, leaving limited time for iterative safety improvements and increasing the possibility of unintended consequences.
The Seeds of Intelligence: From Hypothesis to Recursive Self-Improvement
In 1965, I.J. Good first proposed the concept of an intelligence explosion, positing that a machine capable of recursive self-improvement could rapidly exceed human intellectual capacity. Good’s initial formulation centered on the idea that an AI, once reaching a certain level of intelligence, would be able to design its own successors, leading to a positive feedback loop. Each successive generation would be more intelligent than its predecessor, resulting in an exponential increase in capabilities. This process wasn’t necessarily predicted to occur at a specific intelligence level, but was rather contingent on the AI’s capacity for iterative self-modification and design. The core principle was that the machine itself, not a human designer, would be the primary driver of its own intellectual advancement, potentially leading to unforeseen and rapid changes in capability.
In his 2014 work, Superintelligence: Paths, Dangers, Strategies, Nick Bostrom provides a systematic analysis of the potential emergence of superintelligence. Bostrom distinguishes multiple possible forms, including speed superintelligence (minds that operate far faster than human ones), collective superintelligence (large aggregates of intellects whose combined performance exceeds that of any individual), and quality superintelligence (minds qualitatively smarter than any human). He outlines scenarios ranging from AI designed with specific goals that inadvertently lead to undesirable outcomes, to goal misalignment in which AI pursues objectives detrimental to humanity. A central component of his risk assessment is ‘instrumental convergence’ – the observation that sub-goals such as self-preservation and resource acquisition are likely to be adopted by almost any sufficiently intelligent agent, regardless of its final objectives – and how these could pose existential risks. Bostrom’s formalization extends beyond mere speculation, offering a framework for analyzing the strategic landscape and potential control problems associated with increasingly intelligent systems.
Recursive self-improvement describes a process where an AI system is designed to modify its own source code or architecture to enhance its performance, and then repeatedly applies this improvement process to itself. This iterative loop, if successful, could lead to rapidly accelerating intelligence gains. The process differs from typical software updates in that the AI itself determines how to improve, not a human programmer. A key concern is that such a system, once initiated, could quickly exceed human capacity for oversight and control, potentially leading to unintended and difficult-to-predict outcomes, and forming a critical link in scenarios posited as part of the existential risk chain. The speed and effectiveness of these iterative improvements are dependent on the initial capabilities of the AI and the efficiency of its self-modification processes.
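The dynamics of that loop can be caricatured in a few lines of code. The toy model below is purely illustrative (all rates are invented): it shows only that the same iterative structure produces compounding growth when each cycle’s returns stay constant, and merely linear growth when returns diminish with capability, which is why the empirical question of returns to self-modification matters so much.

```python
# Toy model of the recursive self-improvement loop described above.
# All numbers are hypothetical; the point is only that the loop compounds or
# plateaus depending on the returns each round of self-modification yields.
def run_loop(improvement_rate, steps=10, capability=1.0):
    trajectory = [capability]
    for _ in range(steps):
        capability *= (1 + improvement_rate(capability))
        trajectory.append(capability)
    return trajectory

explosive = run_loop(lambda c: 0.5)        # constant 50% gain per cycle
diminishing = run_loop(lambda c: 0.5 / c)  # gains shrink as capability grows

print("explosive:  ", [round(x, 1) for x in explosive[-3:]])
print("diminishing:", [round(x, 1) for x in diminishing[-3:]])
```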
The Illusion of Predictability: Scaling Laws and Their Limits
Observed scaling laws in artificial intelligence indicate a predictable relationship between computational resources, dataset size, and model performance, often expressed as a power law. Specifically, performance metrics such as loss consistently decrease as a power of both compute ($C$) and data ($D$), generally formulated as $Loss \propto C^{-a}D^{-b}$, where ‘a’ and ‘b’ are empirically determined exponents. This means that doubling compute or data does not result in a linear improvement in performance, but rather a fractional improvement dictated by the exponent. Meta-analyses of numerous AI models have consistently demonstrated these power-law relationships across various tasks and architectures, providing a quantifiable basis for predicting performance gains with increased resources.
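A short worked example makes the ‘fractional improvement’ point concrete. The exponents below are hypothetical placeholders of plausible magnitude, not values taken from any particular study.

```python
# Worked example of the scaling relation Loss ∝ C^(-a) * D^(-b) quoted above.
# The exponents are hypothetical, chosen only to illustrate the arithmetic.
a, b = 0.05, 0.08

def relative_loss(compute_factor, data_factor):
    """Loss relative to a baseline after scaling compute and data."""
    return compute_factor ** (-a) * data_factor ** (-b)

# Doubling compute alone gives only a fractional improvement:
print(f"2x compute: loss falls to {relative_loss(2, 1):.3f} of baseline")
# Doubling both compute and data:
print(f"2x compute and data: loss falls to {relative_loss(2, 2):.3f} of baseline")
```

With these placeholder exponents, doubling compute trims loss by only a few percent, which is exactly the sense in which improvement is fractional rather than linear.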
Epoch AI meta-analyses, alongside the research conducted by Kaplan et al., have demonstrated statistically significant power-law relationships between model size, dataset size, and performance on various AI benchmarks. Specifically, these analyses reveal that increasing the number of parameters in a language model and the amount of training data consistently leads to reductions in loss, indicating improved predictive capability. Kaplan’s work established that loss scales predictably as a power law with model size, even when controlling for dataset size, suggesting that performance improvements are not solely attributable to larger datasets. These findings indicate a continued potential for performance gains with further investment in compute and data resources, though the precise form of the power law and its ultimate limits remain areas of active research.
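For readers unfamiliar with how such fits are obtained: a power law is linear in log-log coordinates, so the exponent can be estimated by ordinary least squares on log-transformed compute and loss values. The sketch below recovers a known exponent from synthetic, noiseless data; it is illustrative only and does not reproduce the actual Kaplan et al. or Epoch AI analyses.

```python
# Minimal sketch of a power-law fit: Loss = k * C^(-a) is linear in log-log
# space, so 'a' can be recovered by least squares on log-transformed data.
# The data points below are synthetic, generated from a known exponent.
import numpy as np

true_a, k = 0.07, 5.0
compute = np.logspace(18, 24, 7)   # hypothetical compute budgets in FLOP
loss = k * compute ** (-true_a)    # noiseless synthetic losses

slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
print(f"recovered exponent a ≈ {-slope:.3f} (true value {true_a})")
```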
Extrapolating current scaling laws to predict the emergence of superintelligence is fraught with uncertainty, as these laws are empirically derived within a limited range of computational scales and model architectures. Observed power-law relationships between compute, data, and performance may not hold as systems approach extreme scales, and fail to account for potential architectural bottlenecks or diminishing returns. Furthermore, the consistent correlation between increased compute and reduced loss demonstrates that performance gains are currently contingent on continued, external resource allocation – specifically, human-directed investment in computational power – rather than representing an autonomous, self-accelerating process. This reliance on exogenous factors suggests that predicting future capabilities based solely on scaling laws is unreliable.
The Weight of Concentration: A Call for Sustainable Progress
Meredith Whittaker’s research underscores a critical imbalance in the development of artificial intelligence: the overwhelming concentration of computational power. Currently, a remarkably small number of firms control approximately 90% of the compute used to train and run large AI models. This centralization presents substantial risks, as it effectively dictates the direction of AI research and deployment, potentially stifling innovation and exacerbating existing societal biases. Beyond the limitations on diverse perspectives, such concentration creates a single point of failure, raising concerns about security, accessibility, and the potential for monopolistic control over a technology poised to reshape numerous aspects of modern life. The pursuit of ever-larger models, without addressing this fundamental power dynamic, risks solidifying an uneven playing field and hindering the development of AI that truly serves the broader public interest.
Current anxieties surrounding artificial intelligence disproportionately emphasize hypothetical, long-term risks – termed ‘Level 2 Risks’ – while largely overlooking the tangible and escalating harms already manifesting as ‘Level 1 Risks’. This skewed prioritization leads to a misallocation of resources, with significant investment directed towards mitigating theoretical existential threats instead of addressing immediate concerns such as widespread job displacement and algorithmic bias. Analyses suggest over 300 million jobs globally could be affected by automation, creating significant economic and social instability, yet these pressing issues receive comparatively little attention. This imbalance hinders the development of practical solutions for mitigating present-day harms and jeopardizes a responsible, equitable progression of AI technologies.
A truly sustainable progression in artificial intelligence demands a fundamental recalibration of priorities, moving beyond the pursuit of sheer capability toward robust safety protocols and value alignment. Current development often overlooks the immediate ethical ramifications of increasingly powerful systems, instead focusing on hypothetical, far-future risks while neglecting demonstrable harms already emerging. This necessitates proactive research into methods for ensuring AI systems operate in accordance with human values, prioritizing fairness, transparency, and accountability. Such an approach requires a commitment to anticipating and mitigating potential societal disruptions, including workforce displacement, algorithmic bias, and the erosion of privacy, fostering a future where AI benefits all of humanity, rather than exacerbating existing inequalities or introducing new ones.
The discourse surrounding artificial intelligence often fixates on hypothetical futures, a preoccupation with ‘intelligence explosion’ scenarios that overshadows the present realities of algorithmic bias and concentrated computational power. This tendency mirrors a broader human inclination to project anxieties onto distant threats while neglecting the subtle erosions occurring today. As Alan Turing observed, “We can only see a short distance ahead, but we can see plenty there that needs to be done.” The article contends that focusing solely on existential risks obscures the tangible harms – job displacement, surveillance capitalism – which demand immediate attention. These present-day consequences, like the slow accumulation of ‘technical debt,’ are far more certain than speculative superintelligence, and require pragmatic solutions before they calcify into intractable problems.
What Lies Ahead?
The specter of sudden, catastrophic intelligence has, for the moment, lost some of its immediacy. This is not to say the question is settled, merely that systems, even those of immense complexity, rarely detonate. More often, they accrue: biases solidify into infrastructure, control centralizes, and power consolidates. The focus now shifts, perhaps, from preventing a theoretical rupture to understanding the gradual reshaping of reality. These are not failures of prediction, but acknowledgements that decay is a far more reliable constant than explosive emergence.
The paper rightly highlights the observable harms – the erosion of labor, the amplification of prejudice – as more pressing concerns. Yet, even these feel like symptoms of a deeper process. The concentration of computational resources, for instance, isn’t simply a matter of economic inequality; it’s a shift in the very architecture of knowledge and decision-making. The question becomes not whether intelligence will surpass human capacity, but whose intelligence, and to what end.
It may be that the most valuable endeavor isn’t attempting to accelerate alignment, but learning to observe the aging process of these systems. Sometimes, understanding how things fall apart reveals more than striving to build them anew. The field might benefit from directing attention toward the subtle drifts, the entropic tendencies, and the unforeseen consequences that inevitably accompany complex systems as they learn to age gracefully.
Original article: https://arxiv.org/pdf/2512.04119.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/