
The Securitization of Artificial Intelligence: An Analysis of Its Drivers and Consequences✽
Mónica A. Ulloa Ruiz and Guillem Bas Graells
Received: November 29, 2024 | Accepted: May 5, 2025 | Revised: June 2, 2025
https://doi.org/10.7440/res93.2025.04
Abstract | This article examines the securitization of artificial intelligence (AI) in the United States, understood as the discursive framing of this technology as a security issue that justifies exceptional political and regulatory treatment. In a context of growing concern over the risks associated with the development of AI, the article identifies the drivers of securitization and evaluates its consequences for public policy-making. To this end, critical discourse analysis is applied to a corpus composed of twenty-five speech acts from government agencies, technical-scientific organizations, and the media. The analysis reveals the coexistence of two dominant discursive grammars: one threat-based, which emphasizes concrete adversaries and legitimizes extraordinary measures; and one risk-based, which promotes preventive responses embedded in traditional regulatory frameworks. The article also identifies a tension between national securitization approaches and macrosecuritization processes, where AI is portrayed as a global risk that threatens humanity. These dynamics—shaped by institutional, ideological, and geopolitical factors—lead to divergent and, at times, contradictory political responses. The study’s main contribution lies in offering an analytical framework that distinguishes between different logics of securitization and clarifies their impact on AI governance. It concludes that an approach based on risk management, rather than threat identification, enables more sustainable and cooperative long-term responses. It further argues that integrating AI into conventional policymaking—without routinely resorting to securitization—is essential for building effective regulatory frameworks aligned with current global challenges.
Keywords | Artificial Intelligence; existential risk; global catastrophic risk; national security; securitization
La securitización de la inteligencia artificial: un análisis de sus impulsores y sus consecuencias
Resumen | Este artículo examina el proceso de securitización de la inteligencia artificial (IA) en Estados Unidos, entendida como la configuración discursiva de esta tecnología como una cuestión de seguridad que justifica un tratamiento político y regulatorio excepcional. En un contexto de creciente preocupación por los riesgos asociados al desarrollo de la IA, en el artículo se propone identificar los impulsores que motivan su securitización y evaluar las consecuencias de dicho enfoque en la formulación de políticas públicas. Para ello, se aplica la metodología del análisis crítico del discurso a un corpus compuesto por veinticinco actos de habla provenientes de agencias gubernamentales, organizaciones técnico-científicas y medios de comunicación. El análisis revela la coexistencia de dos gramáticas discursivas predominantes: una basada en amenazas, que enfatiza la presencia de adversarios concretos y legitima medidas extraordinarias, y otra basada en riesgos, que favorece respuestas preventivas integradas en marcos regulatorios tradicionales. Además, se identifica una tensión entre enfoques de securitización nacional y procesos de macrosecuritización, en los que la IA se trata como un riesgo global que amenaza a la humanidad. Estas dinámicas, influenciadas por factores institucionales, ideológicos y geopolíticos, generan respuestas políticas dispares y, en ocasiones, contradictorias. La principal contribución del estudio radica en ofrecer un marco analítico que permita distinguir entre diferentes lógicas de securitización y comprender sus efectos en la gobernanza de la IA. Se concluye que una aproximación basada en la gestión de riesgos, más que en la identificación de amenazas, favorece respuestas sostenibles y cooperativas en el largo plazo. Asimismo, se argumenta que la integración de la IA en la política convencional, sin recurrir sistemáticamente a su securitización, es esencial para construir marcos regulatorios eficaces y compatibles con los desafíos globales actuales.
Palabras clave | inteligencia artificial; riesgo catastrófico global; riesgo existencial; securitización; seguridad nacional
A securitização da inteligência artificial: uma análise de seus vetores e consequências
Resumo | Neste artigo, analisa-se o processo de securitização da inteligência artificial (IA) nos Estados Unidos, entendida como a construção discursiva dessa tecnologia enquanto questão de segurança que justifica um tratamento político e regulatório excepcional. Em um contexto de crescente preocupação com os riscos associados ao desenvolvimento da IA, o artigo tem como objetivo identificar os fatores que motivam sua securitização e avaliar as consequências de tal abordagem na formulação de políticas públicas. Para tanto, aplica-se a metodologia de análise crítica do discurso a um corpus composto por vinte e cinco atos de fala de órgãos governamentais, organizações técnico-científicas e meios de comunicação. A análise revela a coexistência de duas principais gramáticas discursivas: uma baseada em ameaças, que enfatiza a presença de adversários concretos e legitima medidas extraordinárias; e outra baseada em riscos, que favorece respostas preventivas integradas aos quadros regulatórios tradicionais. Além disso, observa-se uma tensão entre as abordagens nacionais de securitização e os processos de macrossecuritização, nos quais a IA é tratada como um risco global que ameaça a humanidade. Essas dinâmicas, influenciadas por fatores institucionais, ideológicos e geopolíticos, geram respostas políticas díspares e, em alguns momentos, contraditórias. A principal contribuição do estudo está em oferecer uma estrutura analítica para distinguir entre diferentes lógicas de securitização e compreender seus efeitos na governança da IA. Conclui-se que uma abordagem baseada na gestão de riscos, e não na identificação de ameaças, favorece respostas sustentáveis e cooperativas no longo prazo. Argumenta-se também que a integração da IA na política convencional, sem recorrer sistematicamente à sua securitização, é essencial para construir quadros regulatórios eficazes e compatíveis com os desafios globais atuais.
Palavras-chave | inteligência artificial; risco catastrófico global; risco existencial; securitização; segurança nacional
Introduction
Securitization is the process by which an issue comes to be treated as a matter of security. This can happen through discourse, where a topic is framed in terms of security, or through practice, when an issue is effectively handled as a threat (Williams 2003). At its core, securitization involves what is known as a securitizing move—a political actor shapes a collective perception within a community, presenting a particular phenomenon as an existential threat to a valued referent object. This framing then serves to justify the call for urgent and exceptional measures (Buzan and Wæver 2003).
What sets this process apart is its rhetorical structure. Security discourse magnifies a problem and casts it as a top priority: once something is labeled a security issue, the actor making that claim gains the legitimacy to respond with extraordinary means (Buzan, Wæver, and de Wilde 1998). Securitization typically depends on three key elements: the perception of an existential threat, a sense of urgency, and a readiness to bypass established political or social norms in order to confront the threat (Buzan, Wæver, and de Wilde 1998). Crucially, securitizing an issue does not imply that it truly endangers a state’s survival—it means the portrayal of it as such has been persuasive. This framing can carry serious risks, as it may enable governments to justify authoritarian measures under the guise of protecting national security.
The securitization framework revolves around several key components: the referent object (the entity to be protected), the existential threat (a danger to that entity’s survival), extraordinary measures (actions that go beyond conventional politics), the securitizing actor (the one who frames the issue as a threat), and the audience (those who accept or reject the legitimacy of these measures). While national security—understood as the survival of the nation-state—has traditionally been the main referent object, the concept has since broadened to include everything from individuals to humanity as a whole (Sears 2023).
In recent years, securitization theory has expanded to address threats that extend beyond the national level, such as cyberattacks (Aguilar Antonio 2025), climate change (Diez, von Lucke, and Wellmann 2016), and artificial intelligence (AI) (Aguilar Antonio 2024; Zeng 2021). This shift points to what is now called macrosecuritization—an approach where a global-level threat organizes and encompasses a range of mid- or lower-level securitizations around a central danger (Buzan and Wæver 2009). Due to its dual-use nature and disruptive potential across multiple sectors, AI has increasingly been framed as a national security issue in the United States. This framing has prompted institutions such as the Department of Homeland Security and the National Security Agency to become involved in its oversight. At the same time, the far-reaching implications of AI have led to its recognition as a transnational challenge—one that transcends the concerns of individual nation-states and requires a coordinated, multilateral response.
This dual framing of AI as both a national and a global threat creates tensions in its governance. While national securitization and macrosecuritization processes are not necessarily mutually exclusive—and can operate in tandem at various levels—the emphasis on national security often clashes with the demands of global security (Buzan and Wæver 2009). Although macrosecuritizations may encompass various unilateral national securitizations, competitive dynamics between states can undermine the international coordination needed to address the global risks posed by AI.
This article explores how the interplay and tension between these securitizing narratives shape perceptions of AI as a threat and influence how it is regulated. Using critical discourse analysis applied to twenty-five speech acts by government officials, technical experts, and media outlets, the study identifies the drivers behind this securitization and assesses its implications through two distinct logics: one based on threat and the other on risk (Diez, von Lucke, and Wellmann 2016).
The next section presents the methodology, based on a critical discourse analysis approach. This is followed by a presentation of the study’s findings, with a focus on the securitization logics identified and their discursive manifestations. The analysis then turns to the tensions between national securitization and macrosecuritization, examining the forces driving and challenging each approach. Finally, the discussion explores the implications of these dynamics for AI governance and proposes a potential policy framework that incorporates these risks into conventional regulatory structures.
Methodology
This study draws on critical discourse analysis (Van-Dijk 2016) to explore how language shapes social realities—in this case, the framing of AI as a security issue. Two main criteria guided the selection of speech acts. First, texts had to meet formal linguistic conditions that qualify them as legitimate within their communicative context—such as official documents or public statements issued through established institutional channels. Second, they had to originate from actors with the authority or influence to shape public discourse—those in positions of power or recognized expertise in their respective fields.
To reduce analytical bias, the material was drawn from a range of sources. The selected speech acts reflect the perspectives of three key groups. The first includes U.S. government institutions, whose discursive authority plays a major role in defining how AI is understood. The second comprises technical and scientific organizations based in the United States, whose expert status gives them considerable influence over public policy and public narratives. The third group consists of the print media, which—particularly in the age of digital communication—serves as a powerful vehicle for amplifying and legitimizing security-related discourses.
To select the relevant government reports, a systematic search was conducted on the official websites of executive departments such as the Department of State and the Department of Homeland Security. The aim was to identify documents that addressed artificial intelligence within the context of national security, technological risks, or emerging threats. Inclusion was based on the explicit presence of these themes, rather than on broad or interpretive readings. To minimize bias, the selection process avoided limiting itself to documents that supported a particular hypothesis. Instead, it also included texts presenting nuanced or even contradictory views on AI-related risks.
To identify statements from scientists and technical groups, the study involved a review of reports and public communications issued by scientific organizations and AI-focused think tanks. This included, for example, statements published by the Center for AI Safety (CAIS) during the International Dialogues on AI Safety (IDAIS). Documents from influential independent experts were also considered, such as Leopold Aschenbrenner’s 2024 essay “Situational Awareness: The Decade Ahead.” As for the press analysis, articles were selected from widely circulated and internationally credible media outlets with a strong presence in global news coverage. CNN and The New York Times were chosen as representative cases due to their broad reach, influence, and consistent coverage of technology and security issues. The selection was not limited to a specific branch of CNN but included its main online publications to ensure consistency in the type of discourse analyzed. While this approach may have excluded other media voices, it prioritized coherence in the corpus and comparability across discursive sources.
Textual analysis involved breaking down the discourse into specific categories and reconstructing meaning through contextual interpretation (Santander 2011). The analysis was carried out using Atlas.ti and employed a coding system that included thirty codes. The initial list of codes was developed deductively, based on the theoretical framework, and then refined through an inductive iteration in which new codes were added after a first round of text analysis. Each code’s frequency was quantified across the corpus, generating “groundedness” metrics that reflected how often each code appeared. A co-occurrence analysis was also performed to explore relationships between codes and identify patterns of association within the discourse. The data were processed using frequency tables, co-occurrence matrices, and semantic network visualizations. These quantitative results were then interpreted qualitatively, taking into account the institutional, social, and political contexts of the texts.
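Although the coding itself was carried out in Atlas.ti, the two metrics described above are straightforward to compute. The following is a minimal sketch, not the study's actual pipeline, assuming coded segments have been exported as (document, code, segment) records; all records shown are illustrative:

```python
from collections import Counter
from itertools import combinations

# Illustrative (document, code, segment) records standing in for an
# export from the qualitative coding tool; the real corpus contained
# thirty codes applied across twenty-five speech acts.
codings = [
    ("eo_biden_2023", "risk", "seg1"),
    ("eo_biden_2023", "regulation", "seg1"),
    ("nscai_2019", "security", "seg2"),
    ("nscai_2019", "adversary", "seg2"),
    ("nscai_2019", "threat", "seg3"),
]

# Groundedness: the number of coded segments each code is applied to.
groundedness = Counter(code for _, code, _ in codings)

# Co-occurrence: two codes co-occur when applied to the same segment.
codes_per_segment = {}
for _, code, segment in codings:
    codes_per_segment.setdefault(segment, set()).add(code)

cooccurrence = Counter()
for codes in codes_per_segment.values():
    for pair in combinations(sorted(codes), 2):
        cooccurrence[pair] += 1

print(groundedness.most_common())
print(cooccurrence.most_common())
```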
The findings were examined through the lens of securitization theory, linking observed discursive patterns to broader processes of social threat construction and institutional response. The analysis revealed that while some texts aligned with a threat-based securitization grammar—as described by Buzan, Wæver, and de Wilde (1998)—others fell into a different category. These did not invoke existential threats or call for exceptional measures, but still framed AI as a security issue distinct from conventional politics (Corry 2012). To clarify this second category, the analysis drew on a framework that distinguishes between threat-based securitization, risk-based securitization, and traditional politicization (Rhinard et al. 2024). In this model, securitization through threat constructs scenarios of direct harm involving existential dangers that justify extraordinary responses, whereas risk-based securitization addresses potential future harms, requiring precautionary measures. Both approaches differ from conventional politics, where issues are treated as governable and open to political compromise (see Table 1).
Table 1. Distinctions between securitization frameworks
Category | Threat-based securitization | Risk-based securitization | Politicization (Normal Politics)
Grammar | Constructs a scenario of direct harm (an existential threat) to a valued referent object | Constructs conditions that could lead to harm (a risk) to a governable object | Frames the object as governable (i.e., adaptable, measurable, subject to modification)
Political Imperative | Action plan aimed at defending against an external threat to the referent object | Action plan aimed at enhancing governance and building resilience of the referent object | Action plan aimed at maximizing utility through trade-offs with other goods
Performative Effects | Justifies exceptional measures (secrecy, unconstrained action) in the name of survival | Justifies precautionary measures, including the introduction of safety margins | Justifies trade-offs in relation to other goods
Source: Authors' adaptation based on Rhinard et al. (2024).
When interpreting the data through this framework, the threat logic was indicated by explicit threat-security language and by calls for action framed with a sense of urgency. The risk logic, in contrast, was reflected in references to underlying causes of harm and to everyday, pervasive dangers, in the use of risk-management language, and in mentions of more traditional regulatory models and approaches.
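As a rough illustration of how such indicators can be operationalized, the sketch below implements a first-pass keyword screen. The indicator lists are hypothetical simplifications of the codebook, and a tag of this kind would only flag segments for subsequent human interpretation:

```python
# Hypothetical indicator terms distilled from the two grammars; the
# study's actual codebook comprised thirty codes and richer criteria.
THREAT_INDICATORS = {"threat", "adversary", "enemy", "urgent", "attack"}
RISK_INDICATORS = {"risk", "regulation", "monitoring", "oversight", "prevention"}

def screen_logic(segment: str) -> str:
    """First-pass tag for a text segment; final coding remains interpretive."""
    tokens = set(segment.lower().split())
    threat_hits = len(tokens & THREAT_INDICATORS)
    risk_hits = len(tokens & RISK_INDICATORS)
    if threat_hits > risk_hits:
        return "threat-based"
    if risk_hits > threat_hits:
        return "risk-based"
    return "undetermined"

print(screen_logic("adversary capabilities pose an urgent threat"))   # threat-based
print(screen_logic("regulation and monitoring can reduce this risk"))  # risk-based
```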
Results
The coding results highlight the distribution of the most prominent terms across the corpus. Table 2 summarizes the groundedness of each code, identifying the most frequently occurring concepts. This not only reflects how often each term appears, but also underscores its relative significance within the broader discursive context.
Besides groundedness, co-occurrence patterns between codes were also examined to explore how different concepts relate within the securitization discourse. These relationships helped highlight dominant patterns and narrative priorities within the texts. Table 3 shows the frequency with which specific codes co-occur across the corpus.
Table 2. Code grounding in the analyzed corpus
Code | Grounding
Security | 1367
Risk | 1135
Control | 659
Militarization | 575
Regulation | 457
Protection | 388
Threat | 383
Impact | 293
Adversary | 255
Source: Authors based on the coding results.
Table 3. Co-occurrence of key codes in the analyzed corpus
Code | Risk | Threat | Catastrophic | Adversary | Danger | Harm | Terrorism
Security | 145 | 90 | 29 | 25 | 24 | 10 | 7
Control | 106 | 14 | 41 | 13 | 14 | 9 | 2
Regulation | 66 | 7 | 19 | 1 | 8 | 9 | 3
Militarization | 27 | 11 | 0 | 18 | 2 | 2 | 3
Protect | 27 | 20 | 2 | 5 | 4 | 11 | 4
Impact | 45 | 4 | 12 | 9 | 4 | 6 | 1
Benefit | 40 | 6 | 5 | 0 | 3 | 7 | 0
Source: Authors based on the coding results.
Building on this, the analysis focused on text fragments linked to the most significant co-occurrences, revealing two main insights. First, in the governmental sphere, the securitization of AI takes shape through two distinct discursive logics: one grounded in the language of threats, the other in the language of risk. This distinction is not coincidental: it stems from the institutional structure and ideological orientation of the actors articulating the discourse. Depending on their nature and mandate, government departments and agencies adopt specific narrative frameworks that reflect their particular approach to security.
Second, when examining the technical and scientific discourse of U.S.-based organizations, a dominant—though not universal—trend toward macrosecuritization becomes evident. In this framing, AI is portrayed as a potential threat to humanity as a whole, surpassing national boundaries and demanding globally coordinated responses. This contrasts with the more compartmentalized narratives found in the governmental sector and gives rise to specific tensions in the design of security policies, which will be explored in the sections that follow.
Risk-Based Securitization vs. Threat-Based Securitization in Government Discourse
The analysis revealed two distinct logics of securitization in the government discourse. The first, threat-based, is characterized by the identification of a clear antagonist, an emphasis on urgency, and the justification of exceptional measures. The second, risk-based, projects future scenarios involving diffuse and multifaceted dangers. It addresses less direct threats, often without a clearly defined other (Diez et al. 2016; Englund and Barquet 2023), and centers on the ongoing management of potential risks to prevent their materialization.
These two logics are reflected in contrasting narrative patterns. The threat-based pattern is marked by a high level of co-occurrence between codes such as “security,” “risk,” and “threat,” along with terms like “adversary” and “militarization.” This framing identifies a threatening other and issues urgent calls to action. In contrast, the risk-based pattern is dominated by terms such as “regulation” and “monitoring,” typically associated with the “risk” code. This logic points to underlying causes of harm and persistent, everyday dangers, aligning with a governance approach focused on prevention and the sustained development of institutional capacity.
It is worth noting that, beyond the threat- and risk-based grammars, two of the speech acts analyzed aligned more closely with a logic of politicization. In these cases, there was no securitizing grammar at play. One example is the most recent statement from the U.S. Department of State on AI (U.S. Department of State 2024), which makes no reference to immediate threats or future risks. The statement contains no mention of a threatening other, no militarized framing, and no identifiable referent object in need of protection.
At the governmental level, the type of discursive grammar used appears to depend on the actor delivering the speech act. Whether it comes from a department, a centralized agency, or a more decentralized structure, the governance framework seems to influence both how securitization emerges and what form it takes (Rhinard et al. 2024). This finding is consistent with earlier studies that identified the role of institutional arrangements in shaping securitization processes around issues like flood management and climate change (Elander, Granberg, and Montin 2022; Wesselink et al. 2013). Our analysis suggests that these structural dynamics are equally relevant in shaping how AI-related security discourse is constructed.
In some of the documents issued by the executive branch, securitization is framed around second-order consequences. A clear example is the discourse found in texts from the White House, such as the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Biden 2023), which reflects a predominantly risk-oriented narrative. In this document, the most grounded and frequently co-occurring codes—“risk” (42.24%) and “regulation” (20.32%)—point to a preventive approach aimed at managing future risks rather than portraying AI as an immediate threat.
However, a closer look at the text segments coded within these categories reveals a narrative that, while consistent with traditional governance frameworks, is presented as exceptional for two key reasons: (i) AI is not managed in the same way as other risks that may appear less severe, and (ii) the text explicitly references an impact on a referent object—in this case, national security:
Artificial Intelligence must be safe and secure […] It also requires addressing AI systems’ most pressing security risks—including with respect to biotechnology, cybersecurity, critical infrastructure, and other national security dangers—while navigating AI’s opacity and complexity. (Biden 2023)
In contrast, other documents analyzed—such as the report from the National Security Commission on Artificial Intelligence (NSCAI), a body created to advise the President and Congress—clearly reflect a threat-based securitization approach (NSCAI 2019). The discourse explicitly constructs a hostile other, most notably through 696 references to China. This securitizing stance is further evident in the relatively high frequency of codes such as “adversary” (5.5%), which are less prevalent in executive branch documents. These elements indicate a framing in which AI is positioned as a strategic asset in the context of geopolitical rivalry and national security.
To illustrate, we can compare the two previously mentioned texts: Biden’s Executive Order (2023) and the NSCAI report (2019). Although both are official documents, they originate from institutions with different structures and mandates, and they exhibit clearly divergent securitizing tendencies. Table 4 shows the relative density (as a percentage of total codings in each document) of the codes “militarization” and “regulation,” highlighting how these indicators align with threat- and risk-based logics respectively. In the NSCAI report, “militarization” accounts for 10.67% of all codings, compared to just 1.60% in the Executive Order. Conversely, “regulation” represents 20.32% of the codings in the Executive Order but only 3.34% in the NSCAI document.
Table 4. Density of key indicator codes in executive and advisory discourses
Code | Executive Order (Biden 2023) | NSCAI (2019)
Militarization | 1.60% | 10.67%
Regulation | 20.32% | 3.34%
Source: Authors based on the coding results.
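The densities in Table 4 are simple proportions: each code's frequency divided by the document's total number of codings. A minimal sketch of the calculation follows, with hypothetical raw counts chosen only so that the illustrative output matches one reported percentage:

```python
def code_density(code_count: int, total_codings: int) -> float:
    """Share of a document's codings attributed to one code, in percent."""
    return 100 * code_count / total_codings

# Hypothetical counts for illustration; the article reports only the
# resulting densities (e.g., "regulation" at 20.32% in the Executive
# Order versus 3.34% in the NSCAI report), not these raw figures.
print(round(code_density(127, 625), 2))  # 20.32
```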
Thus, while the NSCAI—functioning as a national security commission—tends to frame AI in terms of threats and military or defense-oriented responses, the White House under the Biden administration adopts a more moderate stance. Through its executive order, it addresses AI-related challenges from a public policy and risk management perspective. This divergence may reflect how institutional structures and specific mandates shape each actor’s discourse and priorities around AI security.
Other texts in the corpus, such as the Artificial Intelligence and National Security report produced by the Congressional Research Service (Sayler 2020), also focus on national security, but reveal an even stronger emphasis on the categories of “militarization” (43.7%) and “adversary” (6.21%). This positions AI within a broader context of geopolitical rivalry, more explicitly than in the previously cited documents. Unlike those, this report shows a much higher co-occurrence of terms such as “threat” and “security” with “adversary” and “militarization,” indicating a greater focus on immediate threats.
These shifting discursive patterns suggest that the securitization of AI within the U.S. government does not follow a uniform trajectory. Instead, it varies significantly depending on the institutional actor. While federal executive departments tend to frame AI primarily in terms of exceptional risk management—often focused on second-order consequences—national security and defense bodies adopt a more overtly threat-based posture, in line with their institutional missions.
An additional hypothesis could be that the distinction between risk- and threat-based discourses is not only shaped by institutional structures but also by the ideological and political orientation of the sitting administration. In this regard, documents that reflect a threat-oriented logic were mostly produced during the Trump administration (2017–2021), whereas texts aligned with a risk-centered approach have been more prevalent under the Biden administration (2021–2025).
Supporting this argument, the executive order Maintaining American Leadership in Artificial Intelligence (Trump 2019) highlights in Section 8, Action Plan for Protection of the United States Advantage in AI Technologies, a marked concern with national security and the geopolitical edge linked to AI. However, this observation remains inconclusive, as the limited number of such references makes it difficult to conduct a comprehensive comparative analysis of code density across documents. In fact, other executive orders issued during Trump’s first term—such as Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (Trump 2020)—while mentioning the use of AI for national security and defense purposes, tend to frame these concerns more in terms of second-order consequences.
National Securitization vs. Macrosecuritization in Technical Discourse
The analysis revealed that most technical and scientific discourses produced by U.S.-based organizations and experts tend to align with a process of macrosecuritization. In these narratives, humanity as a whole is framed as the primary referent object in need of protection from the existential threat posed by AI. As a result, the responses called for are extraordinary in scope and go beyond more localized securitization efforts, such as those centered on the nation-state.
From this perspective, the statements issued in Venice, Beijing, and Oxford by the IDAIS-affiliated scientific organizations (2023, 2024a, 2024b) clearly fall within a macrosecuritization framework. These texts frequently employ terms such as “humanity,” “global,” “international,” and “states” (in the plural), pointing to the need for multilateral actors to confront a threat that extends beyond national borders. Additionally, the recurring use of “catastrophic” to describe risk reflects a higher order of urgency—one not observed in domestic governmental documents (see Figure 1).
Figure 1. Word cloud based on the three public statements issued by IDAIS

Source: Authors based on the three public statements issued by IDAIS (2023, 2024a, and 2024b).
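A figure of this kind can be approximated with standard tooling. The following is a minimal sketch using the Python wordcloud package, assuming the three public IDAIS statements have been saved locally as plain-text files; the file names are illustrative:

```python
from wordcloud import WordCloud  # pip install wordcloud

# Illustrative file names; the IDAIS statements (2023, 2024a, 2024b)
# are public and must be saved locally before running this sketch.
paths = ["idais_2023.txt", "idais_2024a.txt", "idais_2024b.txt"]
text = " ".join(open(path, encoding="utf-8").read() for path in paths)

# Common English stopwords are filtered by the package's default list,
# leaving frequent substantive terms to dominate the rendering.
cloud = WordCloud(width=800, height=400, background_color="white").generate(text)
cloud.to_file("figure1_wordcloud.png")
```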
The IDAIS statements analyzed focus on governance and reflect a risk-based macrosecuritization approach. They aim to integrate AI management into governmental agendas without invoking extraordinary measures, while still conveying a sense of urgency and identifying an existential risk to the referent object—namely, humanity:
Unsafe development, deployment, or use of AI systems may pose catastrophic or even existential risks to humanity within our lifetimes. […] The combination of concerted technical research efforts with a prudent international governance regime could mitigate most of the risks from AI, enabling the many potential benefits. (IDAIS 2024a)
Meanwhile, other brief statements—such as the widely circulated CAIS (2023) declaration—contain enough elements to be categorized as threat-based macrosecuritization texts. The statement frames artificial intelligence as an existential threat to humanity (extinction), calls for urgent action, and goes beyond conventional risk management measures. It aligns AI with other scenarios that have historically warranted exceptional governance responses, such as the threat of nuclear war: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” (CAIS 2023).
Even news articles that oppose framing AI as a catastrophic or existential risk still acknowledge that AI poses global risks—albeit of a lesser magnitude—that warrant decisive action. These pieces also portray humanity as the threatened referent object (Science Media Centre 2023): “It is, therefore, imperative that the public and policymakers understand this distinction to devise appropriate regulations that address the real risks of these applications and not fictional ones” (Carusso 2024).
In contrast to the prevailing trend toward macrosecuritization, some documents from the technical-scientific sector diverge from this approach and remain within the framework of national threat-based securitization. One example is Leopold Aschenbrenner's essay "Situational Awareness: The Decade Ahead" (2024),1 which, while identifying the "free world" as the object of protection, casts the United States—as its representative—as under threat, particularly from rivals such as China. Along similar lines, Project 2025, a political agenda compiled by over a hundred conservative think tanks, echoes this perspective, urging the United States to "stop aiding the CCP's authoritarian approach to artificial intelligence" (Dans and Groves 2023). This tension between national and macro-level securitization, found in both governmental and scientific actors, raises governance challenges for AI that will be explored in later sections.
Discussion
What Enables Securitization Processes?
As shown in the previous sections, the securitization of AI is shaped by a combination of interrelated factors that influence both the character and extent of institutional responses. What follows outlines the distinctive patterns associated with each type of securitization process—though some drivers cut across multiple categories.
Threat-Based Securitization
Traditional securitization reflects the enduring dominance of frameworks in which states—and their defense and security institutions—play a central role in defining and responding to threats. This approach is driven by two key factors. First, the perception of geopolitical competition shapes AI-related concerns within narratives of strategic rivalry and immediate threats. This is particularly evident in ideological framings such as the arms race, where AI is seen as a pivotal factor in the global balance of power. Second, the institutional structures of military and defense bodies tend to prioritize national security frameworks over global considerations.
Risk-Based Securitization
Risk-based securitization marks a shift toward administrative management, where bureaucratic structures transform exceptional threats into matters of routine regulation. This process is driven by three main factors: (i) administrative institutions reinterpret extraordinary concerns as issues of day-to-day management; (ii) ideologies rooted in administrative governance recast AI not as an existential threat, but as a subject of policy oversight; and (iii) pre-existing regulatory instruments facilitate the incorporation of new concerns into established governance frameworks.
Macrosecuritization
Macrosecuritization arises in response to the transnational nature of AI-related risks, creating frameworks for assessment and action that extend beyond national borders. Its defining characteristics include the perception of global risk, which underpins technical-scientific narratives about multilateral threats; the active role of securitizing actors, particularly the scientific and technical community, who use their expertise to engage functional actors such as national governments and multilateral organizations; and an ideological framework that places AI within a broader category of global threats.
The coexistence of multiple securitization processes leads to complex dynamics, where some drivers operate simultaneously across different levels. Existing institutional frameworks shape how emerging concerns are translated into concrete policies, and this multiplicity of drivers generates structural tensions within the field of AI governance.
A first point of tension arises between national security and global security—that is, between securitization and macrosecuritization. In this context, sovereignty imperatives often conflict with the need for international coordination in the face of shared risks. A second tension emerges between administrative management and exceptionality, reflected in the competition between bureaucratic approaches aimed at institutionalizing risk governance and strategies that prioritize urgent and extraordinary action.
Lastly, a tension emerges between technical expertise and political processes. This is evident in the contest between the scientific community and traditional political actors over epistemic authority. The struggle is also closely linked to the earlier tension between macrosecuritization and national securitization, as each tends to draw on different sources of legitimacy. Taken together, these tensions help explain the coexistence of multiple securitization processes and underscore the governance challenges that lie ahead for AI. These challenges will be explored in the next section.
Implications for Governance
Macrosecuritization vs. National Securitization
A central point of tension in AI governance concerns the relationship between macrosecuritization and national securitization. As Sears (2023) observes, when narratives focused on the securitization of humanity take precedence, there may be space for major powers to reach consensus and support macrosecuritization. In contrast, when national securitization dominates, rivalries between major powers tend to push them toward prioritizing national power and security over the survival and safety of humanity as a whole. According to Sears (2023), this tendency is the main reason past efforts to macrosecuritize global challenges have failed. This trend may stem from what Buzan and Wæver (2009) argue: that mid-scale collectives—such as states or civilizations—tend to be more persistent referent objects than broader collectives like humanity. This may be because these mid-scale entities face a more clearly defined other, which in turn strengthens their internal sense of identity.
In the following paragraphs, we assess how specific AI-related threats give rise to different interactions between the middle level (national securitization) and the system level (macrosecuritization). Drawing on the distinction proposed by Buzan and Wæver (2009), we argue that different types of threats lead to either positive or negative linkages between these levels of securitization—depending on whether actors share a definition of the threat and a referent object, or whether they frame each other as threats. In practice, this distinction is not always clear-cut. In some instances, actions taken by one actor to protect itself from a perceived threat posed by another may inadvertently serve to protect a broader set of actors from a common danger.
In cases of positive linkage, when a nation-state adopts a securitization framework to address a shared threat, the measures it implements may unintentionally enhance the security of other states as well. For example, a state might identify the risk of AI being used to facilitate the development of biological weapons as a national threat. In response, it may require AI developers within its jurisdiction to implement safeguards that limit model capabilities in biology or tighten screening procedures for dual-use DNA sequences. Although the primary concern here is national security, such measures could also improve global biosecurity. In fact, multiple states might come to share a common motivation—such as preserving their monopoly on the use of force (Bull 1977)—in response to the prospect of AI empowering non-state actors or becoming increasingly difficult to control.
From this perspective, national securitization can serve not only as a means to address global risks but also as a potential stepping stone toward macrosecuritization. This is because activating and sustaining a state apparatus tends to be a more feasible first step—both logistically and rhetorically—than launching multilateral initiatives. A relevant example is how states initially dismantled their biological weapons programs based on national security calculations, and only later moved to negotiate the Biological Weapons Convention. In a similar vein, it may be reasonable for states to first tackle shared risks independently, before seeking collaborative solutions to those aspects of the threat that cannot be adequately addressed through unilateral action.
It is worth noting that there are instances in which national securitization and macrosecuritization of AI come into direct conflict. For example, states that pursue national securitization are likely to militarize the technology, as a comparative military advantage could be especially transformative on the international stage (Horowitz et al. 2020). National securitization may also worsen dangerous competitive dynamics by encouraging states to invest increasingly large resources in the AI development race. These competitive pressures could lead states to act with less caution and make it more difficult to reach international agreements (Gruetzemacher et al. 2024). Taken to the extreme, this could also result in conflict between major powers. For instance, if one country is close to developing a transformative AI system, other states might be incentivized to launch a preemptive strike on that country’s data centers to prevent it from gaining an insurmountable advantage.
In other cases, even if a zero-sum dynamic exists between two actors, the outcome of a measure stemming from national securitization can have positive effects for humanity as a whole. For instance, the U.S. government could require leading AI companies to strengthen their cybersecurity practices in order to protect the weights of their most advanced AI models, with the goal of preventing China from copying them. Although such a policy may be driven by competition with a direct adversary, it could produce a broader effect: curbing the proliferation of cutting-edge AI models and reducing the likelihood that malicious non-state actors gain access to potentially dangerous capabilities—or that AI progress accelerates uncontrollably. Returning to the earlier analogy of biological weapons, the United States discontinued its program to reduce the risk of proliferation to its adversaries, but the result was a world with a lower absolute risk of biological attacks.
Ultimately, the impact of securitization depends less on the chosen referent object than on how the threat is framed and the specific protective measures implemented. The predominance of one threat over another will depend on factors such as the perceived risks associated with AI or hostility toward political adversaries. Both factors, in turn, will interact with the expected competitive advantage that transformative AI might offer to those who control it. The relative weight of each of these factors will shape the policies adopted by states. If AI-related dangers are framed as serious threats to national security, states may implement measures that yield significant positive externalities and open up opportunities for cooperation—even if driven initially by narrowly defined self-interest. This is because the most prominent global threats identified—such as the use of AI to conduct cyberattacks, to develop biological weapons, or the loss of control over advanced AI systems—also inherently affect individual states. Conversely, if states place greater emphasis on defeating political rivals in the race for transformative AI, national securitization may ultimately undermine the emergence of a macrosecuritization framework capable of safeguarding humanity.
In any case, existing frameworks of human and national security show significant limitations when it comes to addressing threats such as AI. The absence of a global political authority capable of acting on behalf of humanity means that macrosecuritization relies on consensus among major powers—a consensus that is constantly challenged by conflicting narratives of national versus global securitization. This process must navigate tensions between technical expertise and democratic participation, global consensus and national sovereignty, and the universality of risk versus the specificity of its impacts.
Securitization and Politicization
Beyond the tension between national securitization and macrosecuritization, disputes also arise around the securitizing move itself. Regardless of which referent object is being protected, securitization implies going beyond the logic of "normal" politics and legitimizing extraordinary measures. In this sense, securitization is likely to shift the Overton window—the range of policies considered acceptable by public opinion—toward the acceptance of more radical actions. Allen and Chan (2017) argue that the implications of AI for national security will be revolutionary and that, as a result, governments will weigh extraordinary policy measures, similar to those considered during the early decades of nuclear weapons. Likewise, Leung (2019) predicts that AI developers will increasingly face legislative restrictions imposed by states, particularly in the form of policies such as export controls motivated by national security concerns.
In any case, securitization legitimizes a broad range of potential policy responses. In the United States, growing awareness that AI may pose serious national security risks has led to increased involvement from the Department of Homeland Security, particularly through subdivisions such as the Office for Countering Weapons of Mass Destruction and the Cybersecurity and Infrastructure Security Agency. Taking this a step further, the 2024 report of the U.S.-China Economic and Security Review Commission—which advises Congress—recommended launching a program modeled on the Manhattan Project to accelerate the development of artificial general intelligence (U.S.-China Economic and Security Review Commission 2024). At the more extreme end of the spectrum, in a scenario where securitization becomes deeply rooted, a government might even implement mass surveillance systems aimed at detecting and preventing the development of technologies with destructive capabilities (Bostrom 2019). Given how securitization can concentrate resources and attention in a particular direction, it becomes crucial for policymakers to weigh the potential downsides of their actions. As Wæver (1993) warns, there is always a risk that states may exploit securitization to unjustifiably expand their own power.
Finally, it is worth noting that many of the policies being proposed to govern AI do not necessarily require a process of securitization. For instance, the Executive Order issued under the Biden administration (2023) mandates that developers of AI models exceeding a certain threshold must report their training processes and evaluation results. These kinds of requirements can easily fall within the bounds of "normal politics" and are in fact standard practice in other non-securitized sectors, such as the pharmaceutical or aviation industries. However, the establishment of a framework in which all AI systems above a certain capability threshold must be registered, monitored, and controlled by the government—or by an intergovernmental agency—would likely require a prior securitization of AI.
Threat-Based Securitization or Risk-Based Securitization?
This analysis prompts reflection on the implications of adopting a “second-order security policy”—a form of securitization centered on risk management. Unlike traditional approaches that respond to concrete, immediate threats, risk-based securitization recognizes that some risks cannot be entirely eliminated and instead must be managed on an ongoing basis. This shift signals a move away from emergency-driven, exceptional measures toward a more permanent and long-term planning approach, which leads to three key changes in governance: (i) the decoupling of security from existential threat, allowing for greater flexibility in risk management; (ii) the displacement of security as the central focus, enabling a broader and more integrated approach that includes social and economic dimensions; and (iii) the replacement of emergency criteria with long-term strategies of social engineering (Corry 2012).
Both approaches to securitization—threat-based or risk-based—have their drawbacks. As has been shown, threat-based securitizations often politicize security, allowing states to justify extraordinary measures and bypass regular decision-making processes (Buzan, Wæver, and de Wilde 1998). This can result in the allocation of resources toward specific strategic interests at the expense of other governmental priorities. Furthermore, the logic of threats fosters a constant crisis narrative that legitimizes state intervention and reinforces surveillance and control structures. In turn, risk-based securitizations generate a diffuse and ongoing sense of insecurity that permeates both public and private life. This approach can normalize risk, making insecurity appear as a permanent condition, and promote self-surveillance practices, as seen in other securitization processes (Aradau and Van Munster 2007).
When considering these implications, we believe that effective governance should strike a balance between national security imperatives and the development of regulatory frameworks aligned with traditional policymaking. It should also take into account the global risks associated with AI, which—due to their transnational nature—demand a coordinated multilateral response. One possible solution would be to incorporate these risks into conventional policy processes, fostering the integration of AI within existing governance structures. Over the long term, this could enable a shift from a state of risk-based securitizations toward one of politicization.
Nonetheless, this approach entails significant challenges, particularly in the context of geopolitical competition and power asymmetries. For a second-order security policy to be effective, it will be necessary to build trust among international actors, manage risks in a transparent and standardized manner, and adapt regulatory frameworks to keep pace with the rapid development of this technology—without resorting to exceptional measures. In doing so, AI governance could help shift securitizing narratives away from threat-based frameworks toward risk-based ones. These are more compatible with multilateral interests, as they acknowledge the seriousness of the risks without fueling a permanent state of urgency or exception. Ultimately, this would allow such concerns to be integrated into conventional policymaking and support the formation of global agreements.
Conclusions
This article examines how AI securitization is taking shape in the United States through two distinct logics: one grounded in the perception of direct threats, and another focused on long-term, precautionary risk management through regulation. Institutional structures and political orientation play a key role in determining which approach is favored. While some agencies frame AI as an immediate threat—often tied to geopolitical adversaries—others aim to incorporate AI-related risks into conventional regulatory frameworks, thereby expanding the boundaries of what is considered “normal politics.”
The predominance of national security as the primary framework for addressing AI raises specific risks. As highlighted in the literature on macrosecuritization, the race among major powers for AI leadership can intensify international tensions and hinder cooperation on global AI governance. This context suggests that, while national securitization may enable swift and effective action in specific settings, it can also constrain the ability to forge global agreements and to address shared risks in a coordinated manner. In contrast, a macrosecuritization approach centered on a broader referent object—such as humanity—could enable a more collaborative response. However, its effectiveness ultimately depends on consensus and the willingness of major powers to cooperate.
This study suggests the potential for a hybrid approach to AI governance. Integrating AI-related risks into conventional politics and risk management frameworks could support long-term oversight without relying solely on threat-based securitization, thus avoiding the negative consequences of diverting resources from other social priorities. However, this approach presents a challenge for policymakers, who must balance the urgency of action required by this technology with the development of a regulatory framework that ensures long-term safety.
References
✽ This research was supported by the Observatory of Global Catastrophic Risks (ORCG), United States. Both authors contributed to all stages of the article’s development: Guillem Bas Graells was primarily responsible for building the theoretical framework, interpreting the findings, and drafting the discussion sections; Mónica A. Ulloa Ruiz led the systematization of the discourse analysis, the methodological design, corpus processing, and drafting of the results section. The conclusion was co-written and reflects the integrated contributions of both authors. The overall design of the article and the critical review of the manuscript were carried out collaboratively. The authors would like to thank Ivanna Alvarado, Roberto Tinoco, Jaime Sevilla, and Gideon Futerman for their helpful comments and for the discussions held around various versions of the article. All remaining errors are the sole responsibility of the authors.
1 Leopold Aschenbrenner, included in this section as an independent expert, is also the founder of an AI investment firm and a former OpenAI employee. As such, his views may be influenced by his professional interests and previous experience in the sector.
BA in Anthropology and Master in Social Studies of Science from Universidad Nacional de Colombia. Policy Transfer Lead at the Observatory of Global Catastrophic Risks (ORCG), Baltimore, United States. https://orcid.org/0000-0002-7878-041X | mulloar@orcg.info
Specialization in Artificial Intelligence Governance and MA in Security and Technology from the National School of Political and Administrative Studies (SNSPA), Romania. Artificial Intelligence Coordinator at the Observatory of Global Catastrophic Risks (ORCG), Baltimore, United States. https://orcid.org/0009-0003-3541-2208