The AI Metacrisis: Are Large Models Worsening Global Instability?

Author: Denis Avetisyan


A new analysis argues that the rapid advancement of large AI models is exacerbating interconnected environmental, social, and linguistic crises, demanding a fundamental rethinking of natural language processing.

The accelerating concentration of capital within a handful of technology firms-a modern iteration of feudalism dubbed “technofeudalism”-is not merely an economic shift, but a systemic crisis-the metacrisis-fueled by the disproportionate returns to scale inherent in digital infrastructure and artificial intelligence.

This review examines how ‘Big AI’ amplifies the metacrisis and proposes a shift toward sustainable, ethically aligned NLP practices centered on the value of natural language itself.

Despite technological advancements promising connection and progress, humanity faces a converging set of ecological, societal, and linguistic crises-a ‘metacrisis’ demanding urgent attention. This paper, ‘Big AI is accelerating the metacrisis: What can we do?’, argues that the current trajectory of large language models and ‘Big AI’ is not alleviating these pressures, but actively exacerbating them through unsustainable scalability and a prioritization of power over public good. We contend that the field of Natural Language Processing requires a fundamental re-evaluation, centering human flourishing and planetary boundaries. Can we redirect the immense potential of AI towards genuinely life-affirming solutions, or are we destined to amplify the very crises we seek to solve?


The Fractured Mirror: Systemic Risks in the Age of Intelligence

The current era is increasingly defined not by isolated crises, but by a ‘Metacrisis’ – a complex interplay between ecological breakdown, a crisis of meaning, and a fracturing of shared language and understanding. These aren’t separate problems happening concurrently; rather, each actively exacerbates the others in a dangerous feedback loop. Ecological damage erodes societal foundations, fueling feelings of helplessness and existential anxiety – contributing to the crisis of meaning. Simultaneously, the increasingly polarized and fragmented nature of communication, often driven by algorithmic amplification, hinders collective action and the ability to even define shared problems, thereby intensifying both ecological decline and the loss of collective purpose. This interwoven nature demands a shift in perspective, recognizing that solutions to any single challenge require addressing the systemic connections binding them together.

The pursuit of increasingly large artificial intelligence models, fueled by a drive for unchecked growth, is demonstrably exacerbating global systemic risks rather than offering solutions. Current assessments indicate humanity has surpassed safe operating limits in six of the nine planetary boundaries – climate change, biosphere integrity, land-system change, freshwater change, biogeochemical flows, and novel entities – pressures that the resource demands of Big AI further intensify. The development and operation of these models require massive computational power, driving demand for energy and materials and generating substantial electronic waste. This escalating consumption is not simply a parallel issue to existing crises; it actively intensifies them, creating a feedback loop in which the very tools intended to solve problems contribute to their worsening and jeopardize the stability of Earth’s life-support systems.

The escalating demand for computational power driving advancements in Big AI is placing immense strain on planetary resources, largely through the operation of massive data centers. While artificial intelligence models demonstrate incremental improvements in performance, these gains are increasingly achieved through exponentially greater consumption of energy, water, and rare earth minerals. This creates a paradoxical situation: technological progress is occurring at the cost of accelerating ecological degradation. Data centers, essential for training and running these complex algorithms, contribute significantly to greenhouse gas emissions, deplete freshwater supplies for cooling, and generate substantial electronic waste. The linear rate of AI performance improvement is thus overshadowed by the exponential growth in resource demands, highlighting a fundamental unsustainability at the heart of current AI development and reinforcing concerns about its long-term impact on the planet.
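The asymmetry described above can be made concrete with a toy calculation. The sketch below uses hypothetical numbers and a generic log-of-compute scaling curve (an assumption for illustration, not figures from the paper) to show how steady, linear-looking benchmark gains can conceal multiplicative jumps in compute and, by extension, in energy, water, and materials.

```python
# Minimal illustrative sketch (hypothetical numbers, not measurements):
# if benchmark gains scale roughly with log(compute), each additional
# point of performance requires a multiplicative jump in resources.
import math

def toy_performance(compute_flops: float, a: float = 1.0, b: float = 8.0) -> float:
    """Toy scaling curve: score grows with the logarithm of compute."""
    return a * math.log10(compute_flops) + b

if __name__ == "__main__":
    for flops in (1e21, 1e22, 1e23, 1e24):  # each step is 10x more compute
        print(f"{flops:.0e} FLOPs -> toy score {toy_performance(flops):.1f}")
    # Equal score increments (+1.0) appear at every 10x increase in compute:
    # linear-looking progress bought at exponentially growing resource cost.
```

Under these toy assumptions, each equal increment in the score costs ten times the compute of the previous one, which is precisely the mismatch between performance gains and resource demands described above.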

The Dissolving Consensus: Meaning, Language, and the Digital Echo Chamber

Large Language Models (LLMs) contribute to a deepening Meaning Crisis through several mechanisms. The readily available, often personalized, content generated by LLMs can foster addictive behaviors by providing constant stimulation and validation. Simultaneously, reliance on LLM outputs without independent verification undermines critical thinking skills as users may accept generated content at face value. Furthermore, the capacity of LLMs to generate convincing, yet false, information at scale enables the rapid spread of misinformation, making it increasingly difficult to discern truth from fabrication and eroding trust in established sources of information. The combination of these factors contributes to a sense of disorientation and a diminished capacity for meaning-making in the digital environment.

The contemporary Attention Economy operates through algorithmic amplification on digital platforms, directly incentivizing content creators and distributors to prioritize user engagement metrics – such as clicks, shares, and time spent – over factual accuracy or contextual depth. These algorithms are designed to maximize exposure to content that elicits strong emotional responses, regardless of its veracity, leading to the disproportionate visibility of sensationalized, polarized, or misleading information. This prioritization effectively rewards the spread of content optimized for engagement, rather than truth or nuanced understanding, and contributes to the erosion of reliable information ecosystems by diminishing the reach of more considered, factual reporting and analysis.
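To make that incentive structure explicit, here is a deliberately simplified ranking rule. The post fields, weights, and the decision to ignore an accuracy signal are illustrative assumptions rather than a description of any real platform’s algorithm; the point is only that an objective built solely from engagement signals will surface the sensational item first.

```python
# Toy illustration (hypothetical fields and weights) of an engagement-first
# ranking rule: items are ordered purely by predicted clicks/shares/dwell time;
# a factual-accuracy signal exists in the data but carries zero weight.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float
    predicted_shares: float
    predicted_dwell_seconds: float
    accuracy_score: float  # available, but deliberately unused below

def engagement_rank(posts: list[Post]) -> list[Post]:
    def score(p: Post) -> float:
        # Illustrative weights; ignoring accuracy_score mirrors the incentive
        # structure described in the paragraph above.
        return (1.0 * p.predicted_clicks
                + 2.0 * p.predicted_shares
                + 0.05 * p.predicted_dwell_seconds)
    return sorted(posts, key=score, reverse=True)

if __name__ == "__main__":
    feed = engagement_rank([
        Post("Sensational claim", 120, 80, 45, accuracy_score=0.2),
        Post("Careful factual report", 40, 10, 90, accuracy_score=0.95),
    ])
    print([p.text for p in feed])  # the sensational item ranks first
```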

The global linguistic landscape is characterized by a significant imbalance, with approximately 90% of the world’s roughly 7,000 languages exhibiting characteristics that place them at risk of decline or extinction. These non-dominant languages are frequently non-bounded, meaning they lack clearly defined geographical or political borders; non-homogeneous, displaying considerable internal variation and dialectal differences; non-written, lacking a standardized writing system; and non-standardized, without established norms for grammar or usage. This contrasts sharply with dominant languages which benefit from institutional support, widespread literacy, and codified structures, creating a disparity in resources and opportunities for preservation and continued use.

Rewriting the Code: Ethics, Values, and the Reclamation of Language

Current Natural Language Processing (NLP) research frequently prioritizes performance metrics such as accuracy and efficiency, often neglecting broader societal impacts. Integrating ethical frameworks-including Data Feminism, which emphasizes power structures and systemic biases in data; the Ethics of Care, which prioritizes relationships and contextual understanding; and the Capability Approach, which focuses on expanding individuals’ freedoms and opportunities-offers a corrective. These frameworks necessitate a shift towards evaluating NLP systems not only on technical grounds but also on their effects on equity, inclusivity, and human well-being. Specifically, Data Feminism demands critical examination of data collection and labeling practices, the Ethics of Care encourages consideration of the specific contexts and vulnerabilities of affected communities, and the Capability Approach pushes for system design that supports human agency and opportunity, rather than perpetuating existing disadvantages.

Decolonizing methods within Natural Language Processing (NLP) involves critically examining and actively dismantling the systemic biases inherent in data collection, model training, and evaluation procedures that historically privilege dominant languages and perspectives. This necessitates moving beyond datasets largely composed of English and other high-resource languages to incorporate diverse linguistic data, including those from underrepresented and marginalized communities. Furthermore, it requires challenging the assumption of universality in linguistic features and algorithms, recognizing that linguistic structures and cultural contexts significantly impact model performance and fairness. Techniques such as incorporating indigenous knowledge systems, co-designing NLP tools with affected communities, and employing critical data audits are essential steps in mitigating bias and ensuring equitable outcomes for all language communities, fostering inclusivity and preventing the perpetuation of linguistic imperialism within AI systems.
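As one concrete illustration of the ‘critical data audits’ mentioned above, the sketch below tallies how a language-tagged corpus is distributed across languages and flags those falling under a share threshold. The corpus format and the 1% cutoff are assumptions made for the example, not a standard prescribed by the paper.

```python
# A minimal sketch of one "critical data audit" step: measuring how training
# examples are distributed across languages so that skew toward high-resource
# languages is made visible before a model is trained on the corpus.
from collections import Counter

def language_audit(corpus, underrepresented_threshold: float = 0.01) -> dict:
    """corpus: iterable of (language_code, text) pairs -- an assumed format."""
    counts = Counter(lang for lang, _text in corpus)
    total = sum(counts.values())
    report = {}
    for lang, n in counts.most_common():
        share = n / total
        report[lang] = {
            "examples": n,
            "share": round(share, 4),
            "underrepresented": share < underrepresented_threshold,
        }
    return report

if __name__ == "__main__":
    # Toy corpus: heavily skewed toward English, with two smaller languages.
    toy_corpus = ([("en", "...")] * 9500
                  + [("sw", "...")] * 400
                  + [("quz", "...")] * 50)
    for lang, stats in language_audit(toy_corpus).items():
        print(lang, stats)  # "quz" falls below the 1% threshold and is flagged
```

An audit of this kind does not remove bias by itself, but it makes the imbalance explicit so that data collection, sampling, or community co-design decisions can respond to it.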

The Association for Computational Linguistics (ACL) Code of Ethics outlines principles for responsible conduct in the field of Natural Language Processing. It stresses a commitment to the public good, requiring practitioners to consider the potential societal impacts of their work and prioritize beneficial applications. Accountability is central, with the code advocating for transparency in data collection and model development, as well as a willingness to address and mitigate harms resulting from NLP technologies. Specific tenets include respecting privacy, avoiding bias, and promoting inclusivity, alongside a responsibility to disclose limitations and potential risks associated with deployed systems. Adherence to the code is intended to foster trust and ensure that NLP advancements align with ethical values and societal well-being.

Beyond the Growth Imperative: Reframing AI’s Role in a Finite World

The trajectory of artificial intelligence development faces a significant risk of being steered away from broadly beneficial outcomes due to the increasing influence of concentrated corporate power, a phenomenon known as corporate capture. This isn’t simply about lobbying or marketing; it represents a systemic bias where the priorities of a few powerful companies – maximizing profit and maintaining market dominance – overshadow considerations for public good, fairness, and long-term sustainability. Research indicates that these corporations are actively shaping the AI research agenda, funding projects aligned with their business interests, and influencing regulatory frameworks to minimize oversight. This concentrated control over AI’s development could lead to technologies that exacerbate existing inequalities, prioritize surveillance over privacy, and automate jobs without considering the societal impact, ultimately hindering the potential for AI to address critical global challenges and build a genuinely equitable future.

The current global landscape demands a re-evaluation of perpetual growth as the primary societal metric, as evidenced by the breaching of six out of nine planetary boundaries defining a safe operating space for humanity. This ‘metacrisis’ – a convergence of ecological, social, and political instabilities – necessitates a fundamental shift towards systems prioritizing sustainability and equity, rather than solely focusing on economic expansion. Continuing on a trajectory of unchecked growth risks accelerating environmental degradation, exacerbating social inequalities, and ultimately undermining the long-term viability of both human civilization and the natural world. Consequently, innovative approaches are needed that decouple societal well-being from resource consumption and prioritize the restoration of ecological balance, fostering a future where prosperity is measured by resilience, inclusivity, and planetary health.

A truly resilient future hinges not only on technological advancement, but also on the preservation of linguistic diversity and the cultivation of robust critical thinking skills. The erosion of languages represents a loss of unique worldviews, traditional ecological knowledge, and cognitive frameworks crucial for adapting to complex challenges. Simultaneously, fostering critical thinking-the ability to analyze information objectively and form reasoned judgments-empowers individuals to resist manipulation, question dominant narratives, and participate meaningfully in shaping a sustainable future. Without these cognitive tools, societies risk becoming overly reliant on algorithmic solutions and susceptible to misinformation, hindering the development of genuinely equitable and ecologically sound systems. Prioritizing these often-overlooked aspects of human capacity is therefore paramount to navigating the metacrisis and building a future where innovation serves collective well-being, not simply economic growth.

The pursuit of ever-larger language models, as detailed in the article, feels less like problem-solving and more like a systematic probing of systemic limits. One pauses and asks: ‘What if the bug isn’t a flaw, but a signal?’ Marvin Minsky articulated this sentiment when he said, “You can’t solve a problem with the same thinking that created it.” This rings true as the current trajectory of Big AI, accelerating the metacrisis through its demands on planetary boundaries and its contribution to technofeudalism, necessitates a fundamental re-evaluation of natural language processing. The article’s call for prioritizing the public good isn’t a correction of course, but an acknowledgment that the current methods are reaching their inherent limits, demanding a new frame of reference.

What Lies Ahead?

The presented analysis doesn’t offer solutions, and that’s deliberate. To search for ‘fixes’ implies the system is simply broken, when it’s more accurate to say it’s revealing its underlying operating principles-principles that were never intended for broad daylight. The convergence of ecological limits, socioeconomic fracture, and the peculiar logic of large language models isn’t a bug; it’s a feature of complex systems pushed to their extremes. The challenge isn’t to prevent further acceleration, but to map the resulting chaos with sufficient fidelity to anticipate-and perhaps, selectively steer-the inevitable bifurcations.

Future inquiry must abandon the pretense of neutral technological progress. Natural Language Processing, in particular, needs a rigorous internal audit. The field’s obsession with scaling performance, divorced from ecological or social consequence, is a textbook example of optimization without understanding the objective function. A shift toward ‘slow AI’-systems designed for resilience, interpretability, and minimal resource consumption-isn’t a regression, but a necessary act of reverse engineering.

Ultimately, the true metric of success won’t be benchmark scores or parameter counts, but the ability to re-center natural language itself. Not as a mere input for algorithmic processing, but as a fundamentally valuable, embodied practice-a tool for cultivating shared understanding, ecological awareness, and a more nuanced perception of the metacrisis unfolding around us. The black box is opening, and what emerges may not be what anyone expected.


Original article: https://arxiv.org/pdf/2512.24863.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/
