Author: Denis Avetisyan
A new comparative analysis reveals fundamentally different approaches to AI governance across the globe, driven by divergent institutional priorities and understandings of ‘AI safety’.
This study employs semantic network analysis to map how securitisation theory and institutional logics shape AI policy in the United States, European Union, and China.
Despite widespread rhetorical convergence around shared principles like safety and accountability, artificial intelligence governance remains strikingly fragmented across major geopolitical jurisdictions. This disparity motivates ‘From Abstract Threats to Institutional Realities: A Comparative Semantic Network Analysis of AI Securitisation in the US, EU, and China’, a study demonstrating that differing institutional logics fundamentally shape how ‘AI safety’ is defined and enacted – with the EU juridifying AI as a certifiable product, the US operationalising it as an optimisable system, and China governing it as socio-technical infrastructure. This research reveals a condition of ‘structural incommensurability’ where terminological overlap masks ontological divergence, suggesting that coordination failures stem not from conflicting values, but from a lack of shared understanding of the object being governed. If global AI governance is to move beyond mere ethical aspirations, can we develop frameworks that account for these deeply embedded, yet often unacknowledged, institutional differences?
The Foundations of Governance: Institutional Logics and AI
The development and regulation of artificial intelligence isn’t a spontaneous process unfolding on a blank slate; instead, it’s deeply embedded within, and actively shaped by, pre-existing institutional logics. These logics – the established values, beliefs, and norms that guide organizational behavior – provide the foundational frameworks through which AI governance is understood and implemented. Consequently, approaches to AI regulation aren’t purely technical or objective, but rather reflect the prevailing societal and political priorities already in place. Existing frameworks concerning market competition, data privacy, national security, and even fundamental rights all influence how AI is perceived, categorized, and ultimately governed, meaning the future of AI is inextricably linked to the past and present structures of power and influence.
The development of artificial intelligence governance is deeply influenced by two historically dominant, yet often conflicting, philosophies. One approach, rooted in Market-Liberal Logic, champions rapid innovation and minimal regulatory interference, positing that unfettered development fosters beneficial competition and ultimately delivers the greatest societal gains. Conversely, the Holistic State Logic prioritizes societal stability and control, advocating for robust oversight and preemptive regulation to mitigate potential risks associated with powerful AI systems. This tension manifests in ongoing debates surrounding data privacy, algorithmic bias, and the ethical implications of autonomous technologies, as policymakers grapple with balancing the promotion of innovation against the need for responsible development and public safety. The resulting governance frameworks frequently reflect a compromise between these two logics, attempting to encourage progress while simultaneously addressing legitimate concerns about the potential societal impacts of increasingly sophisticated AI.
The development of artificial intelligence governance is frequently hampered by inherent contradictions stemming from competing philosophical viewpoints. Existing institutional frameworks, rooted in either market-liberal principles or holistic state control, clash when applied to the unique challenges posed by AI. This results in regulatory proposals that oscillate between fostering rapid innovation with minimal oversight and implementing stringent controls to mitigate potential risks – a tension that complicates the creation of effective and universally accepted oversight mechanisms. Consequently, policymakers face difficulty in establishing clear boundaries for AI development, navigating debates over data privacy, algorithmic bias, and accountability, ultimately leading to fragmented and often inconsistent regulatory landscapes. The resulting complexities demand careful consideration of these competing logics to ensure AI governance fosters both progress and societal wellbeing.
Framing the Challenge: Risk, Security, and the Construction of AI
The portrayal of Artificial Intelligence as either a catalyst for economic growth or an existential risk directly correlates with the types of policy responses it receives from governing bodies. Framing AI as an economic opportunity typically results in policies focused on investment in research and development, workforce training, and the facilitation of innovation. Conversely, framing AI as an existential threat – emphasizing potential job displacement, autonomous weapons systems, or loss of control – tends to elicit policies centered on risk mitigation, regulation, and control mechanisms, including preemptive restrictions on development or deployment. This dynamic demonstrates that policy is not solely driven by objective assessments of AI’s capabilities, but is significantly shaped by the subjective framing of the technology and the perceived urgency of associated risks or benefits.
Securitization Theory, originating in international relations, explains how issues are presented as existential threats requiring immediate, exceptional action beyond normal political processes. Applied to AI, this involves framing potential risks – such as algorithmic bias or autonomous weapons – not merely as problems to be managed, but as urgent security crises. This framing allows for the justification of extraordinary measures, including increased surveillance, preemptive regulation, and the allocation of significant resources to AI safety and control, often bypassing standard legislative scrutiny or public debate. The process involves identifying an ‘other’ – a perceived threat emanating from AI – and constructing a narrative of potential catastrophe to legitimize these exceptional responses, effectively shifting the issue from routine policy to a matter of national or global security.
The concept of a ‘Dispositif’, as utilized in critical theory, describes a heterogeneous ensemble of elements – including discourses, institutions, regulations, architectural arrangements, and scientific practices – that functions as a strategic network. In the context of AI, this means that the perception of AI as a problem isn’t solely driven by objective risk, but is actively constructed through a complex interplay of these elements. For example, governmental funding directed towards AI safety research, legal frameworks addressing algorithmic bias, and media portrayals of AI’s potential harms all contribute to defining AI as a domain requiring specific forms of control and intervention. This network doesn’t simply respond to a pre-existing problem; it actively shapes the definition of the problem itself, and legitimizes particular responses while marginalizing others, establishing a self-reinforcing cycle of problematization and control.
The EU AI Act: A Framework for Conformity
The European Union’s AI Act is a comprehensive legal framework designed to regulate artificial intelligence technologies within its member states. It moves beyond broad ethical guidelines by establishing specific, legally enforceable obligations for developers and deployers of AI systems. This represents a shift from principles-based AI governance towards a rules-based system, with provisions covering areas such as data governance, transparency, human oversight, and accountability. The Act aims to foster innovation while mitigating potential risks associated with AI, and it is intended to set a global standard for AI regulation by establishing a clear legal basis for trustworthy AI.
The EU AI Act establishes a tiered system for regulating Artificial Intelligence based on the level of risk an AI system presents. This classification determines the obligations placed upon developers and deployers. ‘High-Risk AI Systems’ – those used in areas like critical infrastructure, education, employment, essential private and public services, law enforcement, migration, and administration of justice – are subject to particularly stringent requirements. These requirements include risk management systems, data governance protocols, technical documentation, transparency obligations, human oversight, and accuracy, robustness, and cybersecurity standards. Non-compliance with these regulations can result in substantial fines, up to 6% of annual global turnover or €30 million, whichever is higher.
Comparative analysis of AI governance regimes in the European Union, the United States, and China reveals fundamental structural divergences, resulting in a condition termed ‘structural incommensurability’. This indicates a lack of direct comparability or translatability between these regulatory approaches. Within the EU’s developing network of AI governance, the concept of ‘high_risk’ AI systems functions as a central organizing principle, or ‘problematization anchor’. Quantitative network analysis assigns ‘high_risk’ a Degree Centrality score of 0.367, indicating its relative importance in connecting and structuring the EU’s regulatory discourse and framework compared to other concepts.
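To make the centrality figure concrete, the sketch below computes degree centrality on a small, invented concept graph; the edge list and node labels are illustrative assumptions rather than the study’s actual EU corpus, and networkx is used purely as a convenient library for the calculation.

```python
import networkx as nx

# Illustrative (not the study's actual) concept co-occurrence graph for an EU-style corpus.
# Nodes are concepts; an edge means two concepts co-occur in the analysed texts.
edges = [
    ("high_risk", "conformity_assessment"),
    ("high_risk", "fundamental_rights"),
    ("high_risk", "transparency"),
    ("high_risk", "human_oversight"),
    ("conformity_assessment", "ce_marking"),
    ("transparency", "documentation"),
]

G = nx.Graph(edges)

# Degree centrality = (number of neighbours) / (n - 1): the share of all other
# concepts a node is directly linked to. A reported score of 0.367 for 'high_risk'
# would thus mean it is directly tied to roughly a third of the concepts in the
# EU network, consistent with its role as the discourse's organising hub.
centrality = nx.degree_centrality(G)
for concept, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{concept:25s} {score:.3f}")
```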
Mapping the Landscape: Understanding AI Governance Through Network Analysis
Semantic Network Analysis provides a uniquely insightful approach to understanding the complex discussions surrounding artificial intelligence governance. This methodology moves beyond simple keyword counting, instead focusing on the relationships between concepts to reveal the underlying structure of debate. By representing ideas as nodes and their connections as links, analysts can visually map the conceptual landscape, identifying central themes, hidden assumptions, and potential areas of disagreement. The power of this technique lies in its ability to expose how different concepts – such as ‘innovation’, ‘ethics’, and ‘accountability’ – are interconnected, allowing for a more nuanced comprehension of the arguments driving policy and shaping the future of AI. Ultimately, it facilitates a deeper exploration of the values and priorities embedded within AI governance frameworks, offering a pathway to more informed and transparent decision-making.
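As a rough illustration of the method described above, the following sketch assembles a concept co-occurrence network from a handful of invented sentences; the corpus, the concept list, and the sentence-level co-occurrence window are all assumptions made for the example, not details taken from the paper’s own pipeline.

```python
from collections import Counter
from itertools import combinations

import networkx as nx

# Toy corpus standing in for policy documents (illustrative only).
sentences = [
    "high risk systems require conformity assessment and human oversight",
    "transparency and accountability support human oversight",
    "safety requirements protect health and fundamental rights",
]

# Concepts of interest; in practice these would come from a coding scheme.
concepts = {"risk", "conformity", "oversight", "transparency",
            "accountability", "safety", "health", "rights"}

# Count how often two concepts appear in the same sentence (the co-occurrence window).
cooccurrence = Counter()
for sentence in sentences:
    present = sorted(concepts & set(sentence.split()))
    for a, b in combinations(present, 2):
        cooccurrence[(a, b)] += 1

# Build a weighted graph: nodes are concepts, edge weights are co-occurrence counts.
G = nx.Graph()
for (a, b), weight in cooccurrence.items():
    G.add_edge(a, b, weight=weight)

print(G.number_of_nodes(), "concepts,", G.number_of_edges(), "links")
```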
The structure of debates surrounding artificial intelligence governance isn’t simply a matter of stated positions, but a complex web of interconnected concepts. Mapping these connections – for example, how ‘risk’ relates to ‘safety’ and ‘control’ – reveals the often-unacknowledged assumptions that shape the discourse. This process demonstrates that certain ideas are consistently linked, potentially reinforcing specific perspectives while obscuring others. By visualizing these conceptual relationships, researchers can identify underlying biases embedded within the language used to frame AI governance, exposing how particular definitions of ‘safety’ might prioritize certain values over others, or how a narrow understanding of ‘control’ could limit the exploration of beneficial AI applications. Ultimately, this network analysis offers a crucial tool for understanding not just what is being said about AI governance, but how it is being said, and the subtle ways in which assumptions become solidified as accepted truths.
Quantitative analysis of conceptual relationships within AI governance discourse is made possible through tools like Normalized Pointwise Mutual Information (NPMI). This statistical measure assesses the degree of association between concepts, providing a numerical representation of their co-occurrence. Recent application of NPMI to debates surrounding AI regulation in the European Union revealed a particularly strong connection between ‘safety’ and ‘health’, registering a value of 0.517. This suggests that, within the analyzed corpus of EU-related texts, discussions of AI safety are frequently and consistently linked to considerations of public health, potentially indicating a prevailing framing of AI risk through a health-focused lens and informing policy priorities in the region.
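A minimal sketch of the NPMI calculation is given below, assuming sentence-level co-occurrence counts; the toy corpus is invented and the value it produces is not meant to reproduce the paper’s reported 0.517. NPMI rescales pointwise mutual information by -log p(x, y), so it runs from -1 (the concepts never co-occur) through 0 (statistical independence) to 1 (they always co-occur together).

```python
import math

def npmi(corpus, x, y):
    """Normalised Pointwise Mutual Information between concepts x and y,
    using sentences as the co-occurrence window.

    NPMI(x, y) = PMI(x, y) / -log p(x, y),
    where PMI(x, y) = log( p(x, y) / (p(x) * p(y)) ).
    """
    n = len(corpus)
    count_x = sum(1 for s in corpus if x in s)
    count_y = sum(1 for s in corpus if y in s)
    count_xy = sum(1 for s in corpus if x in s and y in s)
    if count_xy == 0:
        return -1.0  # conventional limit when the pair never co-occurs
    p_x, p_y, p_xy = count_x / n, count_y / n, count_xy / n
    pmi = math.log(p_xy / (p_x * p_y))
    return pmi / -math.log(p_xy)

# Illustrative corpus of tokenised sentences (not the study's data).
corpus = [
    {"safety", "health"},
    {"safety", "health", "risk"},
    {"safety", "innovation"},
    {"regulation", "oversight"},
    {"market", "competition"},
]
print(round(npmi(corpus, "safety", "health"), 3))
```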
The Limits of Regulation: Incommensurability and Divergent Institutional Logics
The pursuit of effective artificial intelligence governance faces a core obstacle stemming from structural incommensurability – the presence of fundamentally disparate underlying assumptions about technology, society, and even the very definition of ‘intelligence’. This isn’t merely a disagreement over implementation details, but a conflict rooted in differing worldviews that render meaningful dialogue and consensus-building exceptionally difficult. One perspective might prioritize innovation and economic growth, viewing risks as acceptable trade-offs, while another emphasizes social justice and equity, demanding stringent safeguards against potential harms. These deeply held, often implicit, assumptions shape how ‘risk’ and ‘safety’ are conceptualized, influencing the design of governance frameworks and ultimately determining which values are prioritized – or inadvertently marginalized – in the development and deployment of AI systems. Consequently, attempts to impose universal regulations risk being ineffective, or even counterproductive, if they fail to acknowledge and address these foundational differences in perspective.
The challenge of aligning artificial intelligence with human values is complicated by the fact that different institutions operate according to distinct, often unstated, priorities – these are known as institutional logics. Consequently, interpretations of ‘risk’ and ‘safety’ become subjective and contested; what one organization deems acceptable, another may view as profoundly dangerous. For example, a technology company focused on innovation might prioritize rapid development, accepting a higher level of potential harm as the cost of progress, while a regulatory body tasked with public protection will naturally emphasize minimizing any potential negative consequence. This divergence isn’t simply a matter of disagreement, but stems from fundamentally different operating principles embedded within each institution’s structure and goals, creating inherent difficulties in establishing universally accepted standards for AI governance and responsible development.
Comparative network analysis of AI governance structures reveals distinct approaches between the United States and China. The US model demonstrates a distributed authority across various sectors, reflected in an Eigenvector Centrality score of 0.437, suggesting influence is dispersed rather than concentrated. Conversely, China’s governance network exhibits a stronger emphasis on centralized content control and direct technical intervention, yielding a higher Eigenvector Centrality of 0.484. This indicates a more cohesive and concentrated power structure focused on actively shaping the technological landscape. These differing centralities highlight fundamental discrepancies in how each nation approaches AI regulation, with the US prioritizing sectoral adaptation and China favoring centralized control and proactive technological management.
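The contrast between distributed and concentrated influence can be illustrated with eigenvector centrality on two invented topologies; the star and ring graphs below are stand-ins chosen for the example and bear no relation to the actual US or Chinese governance networks analysed in the paper.

```python
import networkx as nx

# Two illustrative governance networks (invented topologies, not the study's data).
# A hub-and-spoke structure concentrates influence in one node; a ring spreads it out.
centralised = nx.star_graph(6)    # node 0 connected to six peripheral nodes
distributed = nx.cycle_graph(7)   # every node has exactly two neighbours

for name, graph in [("centralised", centralised), ("distributed", distributed)]:
    # Eigenvector centrality scores a node by the centrality of its neighbours,
    # so the hub of the star scores far higher than any node in the ring.
    scores = nx.eigenvector_centrality(graph, max_iter=1000)
    print(f"{name}: highest eigenvector centrality = {max(scores.values()):.3f}")
```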
The study reveals that divergent understandings of ‘AI safety’ are not merely semantic differences, but reflect deeply ingrained institutional logics. This echoes Marvin Minsky’s observation: “You can’t expect intelligence to arise from chaos.” The research demonstrates how each region – the EU, the US, and China – constructs its own framework for approaching AI governance, leaving modular approaches to safety without a shared context. A system that survives on duct tape is held together by improvisation rather than design, and this paper suggests that current attempts at global AI coordination risk precisely that: a fragile patchwork of approaches built upon fundamentally incommensurable foundations. These differing priorities and interpretations pose a serious challenge to meaningful international collaboration.
The Road Ahead
The analysis presented here reveals a landscape less of technical challenge and more of fundamentally disparate understandings. To speak of ‘AI safety’ as a unified problem is, it appears, to misunderstand the very architecture of the question. One does not simply replace a component; one must comprehend the entire circulatory system of institutional logic that sustains it. The divergences identified are not merely rhetorical; they are embedded in the procedural DNA of each region’s governance structures.
Future work must move beyond symptom-chasing – the endless calibration of algorithms – and focus on mapping these deeper, structural incompatibilities. Semantic network analysis offers a powerful lens, but it is, ultimately, a static snapshot. Longitudinal studies, tracing the evolution of these networks, are crucial. Equally important is a shift in perspective; examining not just what is said about AI risk, but how these discourses are enacted through policy, regulation, and funding mechanisms.
The prospect of global coordination remains, predictably, a complex undertaking. It is tempting to seek common ground, to build bridges across these diverging systems. However, a more honest approach may be to acknowledge the inherent tensions, to treat them not as obstacles to overcome, but as fundamental characteristics of the system itself. A truly robust governance framework may, paradoxically, require embracing a degree of structured incommensurability.
Original article: https://arxiv.org/pdf/2601.04107.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/