Author: Denis Avetisyan
As AI development accelerates, renewed attention to the agency of AI workers is crucial for navigating the technology's geopolitical ramifications.
This review argues that participatory design and fostering critical reflection among AI workers are essential to address limitations in formal AI governance and promote responsible international political economy.
Despite increasing calls for AI governance, existing frameworks often fail to address the geopolitical dynamics shaping its development and deployment. This paper, ‘AI Workers, Geopolitics, and Algorithmic Collective Action’, argues that focusing solely on top-down regulation is insufficient, and instead proposes centering the agency of AI workers as crucial agents of change. By leveraging methods of participatory design, the research posits that fostering critical reflection and enabling ‘algorithmic collective action’ among these workers can unlock pathways toward more responsible and just AI systems. Could engaging those building AI be the key to navigating its complex geopolitical implications and ensuring a more equitable technological future?
The Shifting Ground of Power
Conventional frameworks within International Relations, historically focused on the interactions between nation-states, are increasingly challenged by the ascendance of non-state actors, particularly those pioneering artificial intelligence. These entities – technology corporations, research collectives, and even individual developers – now possess capabilities that rival, and in some cases surpass, those of traditional geopolitical players. Their ability to shape information landscapes, develop autonomous systems with strategic implications, and control critical technological infrastructure grants them a form of power that existing theories of state-centric competition cannot easily categorize, leaving traditional models unable to explain actors who operate outside the conventional bounds of diplomacy, military strength, and economic coercion.
This influence does not manifest through traditional military strength but through control over critical infrastructure, information dissemination, and emerging technologies, allowing these organizations to shape narratives, disrupt economies, and sway political outcomes. The resources at their disposal – computational power, data control, and technological innovation – rival or exceed those of many countries, and the speed at which this power is consolidating outpaces the institutions meant to manage it. Non-state actors are no longer peripheral players in global affairs but increasingly central forces.
Contemporary geopolitical analysis must therefore evolve beyond state-centric models. For decades, international relations theory assumed states held a virtual monopoly on power projection; a revised framework acknowledges that power now also derives from control over key technologies and the ability to influence narratives on a global scale. Understanding the interplay between states and these emergent actors is crucial for navigating the shifting terrain of international politics and anticipating future geopolitical landscapes, demanding a fundamental reassessment of how power is defined, distributed, and exercised in the 21st century.
The Agents of Influence: AI Workers
AI workers, encompassing roles from data scientists and machine learning engineers to algorithm designers, AI ethicists, and deployment specialists, exercise political agency through their direct involvement in creating and implementing increasingly powerful technologies. They are not merely technical implementers: their decisions regarding algorithm design, data selection, feature engineering, and system deployment fundamentally shape the capabilities and impacts of AI systems, and routinely reflect, and potentially reinforce, specific values, biases, and political priorities embedded within the technology. They thus function as crucial nodes translating technical possibilities into tangible geopolitical and societal outcomes.
These choices carry direct geopolitical weight. The technical capabilities and limitations that AI workers build into systems determine how those systems perform in domains central to strategy, including defense, intelligence gathering, economic forecasting, and resource allocation. Decisions about model architecture, data selection, and algorithmic prioritization therefore shape a nation's ability to project power, maintain security, and compete economically on the global stage. The resulting AI capabilities are not simply technological advancements but strategic assets shaped by the specific contributions of this workforce.
The influence of AI workers extends beyond fulfilling corporate mandates due to their direct involvement in developing and deploying technologies with broad societal impact. Their decisions regarding algorithm design, data selection, and system implementation contribute to outcomes that shape public opinion, access to resources, and even political processes. This is particularly evident in areas like content recommendation systems, automated decision-making in social services, and the development of surveillance technologies. Consequently, the choices made by these workers can inadvertently or intentionally produce significant alterations in social norms, power dynamics, and political landscapes, operating outside the traditional scope of corporate control and potentially leading to unforeseen consequences for governance and civic life.
Emerging connections between AI workers and algorithmic collective action represent a novel approach to influencing AI development and deployment. This strategy posits that coordinated action by individuals possessing specialized technical knowledge – specifically, those involved in the design, training, and implementation of AI systems – can exert control over algorithmic outcomes. Advocates of this bottom-up intervention model argue that direct engagement with AI workers offers a more effective means of shaping AI’s societal impact than traditional top-down regulatory approaches or reliance on corporate self-regulation, as these workers possess unique insight into system vulnerabilities and potential for modification.
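The mechanics of such bottom-up intervention can be made concrete. The sketch below follows the data-level formulation of algorithmic collective action studied in the machine learning literature (e.g., Hardt et al., "Algorithmic Collective Action in Machine Learning", ICML 2023), not a method from the paper under review: a coordinated group controlling a small fraction of a platform's training data plants a feature signal and relabels its examples, and the platform's classifier, trained on the pooled data, learns to associate the signal with the collective's target class. All names, parameters, and data here are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "platform" dataset: two informative features, noisy labels.
n = 5000
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# A spare feature the collective will use to carry its signal.
X = np.hstack([X, np.zeros((n, 1))])

# A coordinated group controlling a small fraction of contributions
# plants the signal and relabels its examples to the target class (1).
alpha = 0.05                              # hypothetical collective size
members = rng.choice(n, size=int(alpha * n), replace=False)
X[members, 2] = 5.0                       # the planted signal
y[members] = 1

# The platform trains on the pooled data, unaware of the coordination.
clf = LogisticRegression().fit(X, y)

# At deployment, carrying the signal pushes predictions toward the
# collective's target class, even on otherwise class-0 inputs.
base = np.array([[-1.0, -1.0, 0.0]])      # no signal
probe = np.array([[-1.0, -1.0, 5.0]])     # same features plus signal
print(clf.predict_proba(base)[0, 1])      # low probability of class 1
print(clf.predict_proba(probe)[0, 1])     # substantially higher
```

The leverage of such a collective scales with the fraction of the pipeline it touches, which is one reason the focus on AI workers is consequential: those who design, train, and deploy these systems sit upstream of the entire pipeline, not a small slice of it.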
The Concentration of Power and the Governance Gap
The implementation of neoliberal policies, characterized by deregulation, privatization, and the reduction of social safety nets, has directly contributed to the consolidation of power within a small number of artificial intelligence companies. These policies facilitated the accumulation of capital and resources, enabling significant investment in AI research and development, and ultimately creating substantial network effects that favor large incumbents. This concentration of technological and economic power extends beyond domestic markets, as these companies establish data centers and exert influence over international standards bodies, thereby amplifying their geopolitical reach and creating dependencies among nations. The resulting asymmetry allows these firms to shape technological trajectories and influence policy decisions to their advantage, potentially hindering competition and innovation globally.
Panoptic power, as it relates to concentrated AI development, manifests through the pervasive collection and analysis of data enabling comprehensive surveillance capabilities. These systems leverage techniques like facial recognition, behavioral prediction, and sentiment analysis – often applied to both users and AI workers – to monitor activities and preemptively identify potential deviations from established norms. Data is aggregated from diverse sources including social media, IoT devices, and internal corporate monitoring, creating detailed profiles that facilitate control and potentially limit dissent. The resulting asymmetry of information allows dominant AI companies to exert significant influence over individuals and societal behaviors, effectively reinforcing their power and minimizing accountability.
Current AI governance frameworks frequently fail to address the power imbalance created by concentrated control within a small number of companies. Many existing regulations focus on data privacy or algorithmic transparency, but lack the scope to challenge the fundamental asymmetry of power that allows these dominant entities to shape technological development and deployment. The emphasis on voluntary self-regulation and industry standards often proves ineffective, as these mechanisms are susceptible to capture by those with the most resources. Furthermore, frameworks designed for more diffuse technological landscapes struggle to account for the network effects and economies of scale that exacerbate power concentration in AI, resulting in policies that are either easily circumvented or fail to address the root causes of the imbalance.
Effective AI governance requires the integration of critical reflection and participatory methodologies, such as Participatory Design, to address the limitations of existing formal governance structures. These approaches prioritize inclusive engagement with stakeholders, including AI workers, to identify and mitigate potential biases and power imbalances embedded within AI systems and their development processes. Complementing top-down regulatory frameworks with bottom-up interventions empowers those directly involved in AI creation and deployment, fostering a more nuanced understanding of real-world impacts and enabling the development of more equitable and practical governance strategies. This combined approach recognizes that sustainable AI governance necessitates both formal rules and the active participation of those most affected by them.
Navigating the Risks: From Safety to Lethal Autonomy
The escalating sophistication of artificial intelligence necessitates a robust and forward-thinking approach to AI safety. As AI systems gain increased autonomy and are deployed in critical infrastructure, healthcare, and defense, the potential for unintended consequences – ranging from algorithmic bias and privacy violations to system failures with catastrophic repercussions – grows exponentially. Proactive measures are therefore crucial, extending beyond reactive problem-solving to encompass preventative design principles, rigorous testing protocols, and continuous monitoring of AI performance. This includes developing techniques for verifying and validating AI systems, ensuring their robustness against adversarial attacks, and establishing clear lines of accountability for their actions. The field increasingly emphasizes not simply containing risks, but fostering explainable AI and ensuring alignment between AI goals and human values, recognizing that a failure to prioritize safety could erode public trust and stifle the beneficial development of this transformative technology.
The emergence of Lethal Autonomous Weapons Systems (LAWS) introduces a unique security challenge rooted in a machine-level analogue of algorithmic collective action: numerous, independently programmed systems coordinating to achieve a shared objective, often without direct human intervention. Unlike traditional weapon systems controlled by a central command, LAWS operate through decentralized decision-making, relying on complex algorithms and machine learning to identify, track, and engage targets. This distributed intelligence presents significant hurdles for accountability and control; attributing responsibility for unintended consequences becomes increasingly difficult when actions arise not from a single operator but from the emergent behavior of a collective. Furthermore, the speed and scale at which these systems can operate, combined with their potential for unforeseen interactions, necessitate a fundamental re-evaluation of existing international humanitarian law and arms control frameworks, as the very nature of warfare shifts toward automated, algorithmically driven conflict.
Understanding artificial intelligence requires moving beyond purely technical considerations and embracing the insights of Science and Technology Studies. This interdisciplinary field reveals that AI isn’t simply ‘discovered’ but actively constructed – shaped by social values, political priorities, and economic forces at every stage of development. Consequently, the perceived risks and benefits of AI, including those associated with autonomous systems, aren’t inherent properties of the technology itself, but rather outcomes of specific design choices and the contexts in which these systems are deployed. By examining the often-invisible work of coders, policymakers, and end-users, scholars in this area demonstrate how assumptions about intelligence, agency, and responsibility are embedded within AI systems, influencing their operation and ultimately, their societal impact. This perspective is crucial for anticipating unintended consequences and fostering a more equitable and accountable approach to technological innovation, recognizing that AI reflects – and often amplifies – existing power structures.
Effective AI governance demands a sustained commitment to ethical principles and robust human oversight, not as constraints on innovation, but as integral components of responsible development. While existing regulatory frameworks offer a foundational structure, a truly comprehensive approach necessitates proactive interventions that recognize the crucial role of individuals working directly with these systems. Empowering AI workers – those involved in data labeling, model training, and system maintenance – as agents of change is paramount; their insights can illuminate potential biases, vulnerabilities, and unintended consequences often missed by automated assessments. This shift moves beyond simply controlling AI to fostering a collaborative environment where human expertise complements artificial intelligence, ensuring that technological advancement aligns with societal values and promotes equitable outcomes, even as AI capabilities continue to expand.
The pursuit of effective AI governance, as detailed in the paper, often becomes entangled in complex regulatory frameworks. However, this work rightly centers attention on a more direct, and arguably more potent, approach: engaging those within the system – the AI workers themselves. This focus aligns with John von Neumann's observation, "If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is." Similarly, the paper posits that solutions to the geopolitical challenges of AI won't emerge from abstract principles alone, but from grappling with the complicated practical realities experienced by those building and deploying these systems. Fostering critical reflection and collective action amongst AI workers offers a path toward a more responsive and, ultimately, simpler governance model.
Where Do We Go From Here?
The pursuit of formal AI governance, as often conceived, risks becoming a labyrinth of regulations addressing symptoms, not causes. This work suggests a shift in focus – not toward more rules, but toward engaging those actually building these systems. If the complexities of international political economy are to be meaningfully addressed in AI development, reliance on 'AI workers' – those engaged in the messy realities of implementation – appears less a solution than a necessary condition. It's a pragmatic concession: control is an illusion; influence, perhaps, is not.
However, participatory design is not a panacea. The assumption that fostering ‘critical reflection’ will automatically translate into collective action is optimistic, to say the least. Reflection, divorced from tangible power, remains merely that. The field must confront the hard question: what mechanisms can reliably translate awareness of geopolitical implications into altered development pathways? Simply asking developers to be more considerate is insufficient.
Future research should therefore move beyond descriptive accounts of potential harms and focus on the structural conditions that enable – or disable – algorithmic collective action. The goal isn’t to perfect the system, but to understand its irreducible limitations. To acknowledge that, ultimately, the most elegant solution is often the simplest one: build less, understand more, and accept that some problems are, and perhaps should remain, intractable.
Original article: https://arxiv.org/pdf/2511.17331.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/