The Securitization of Artificial Intelligence: An Analysis of its Drivers and Consequences
No. 93 (2025-07-25)

Author(s)
- Mónica A. Ulloa Ruiz, Observatory of Global Catastrophic Risks, Baltimore, United States. ORCID iD: https://orcid.org/0000-0002-7878-041X
- Guillem Bas Graells, Observatory of Global Catastrophic Risks, Baltimore, United States. ORCID iD: https://orcid.org/0009-0003-3541-2208
Abstract
This article examines the securitization of artificial intelligence (AI) in the United States, understood as the discursive framing of this technology as a security issue that justifies exceptional political and regulatory treatment. In a context of growing concern over the risks associated with the development of AI, the article identifies the drivers of securitization and evaluates its consequences for public policymaking. To this end, it applies critical discourse analysis to a corpus of twenty-five speech acts from government agencies, technical-scientific organizations, and the media. The analysis reveals the coexistence of two dominant discursive grammars: one threat-based, which emphasizes concrete adversaries and legitimizes extraordinary measures, and one risk-based, which promotes preventive responses embedded in traditional regulatory frameworks. The article also identifies a tension between national securitization approaches and macrosecuritization processes in which AI is portrayed as a global risk that threatens humanity. These dynamics, shaped by institutional, ideological, and geopolitical factors, lead to divergent and at times contradictory political responses. The study’s main contribution lies in offering an analytical framework that distinguishes between different logics of securitization and clarifies their impact on AI governance. It concludes that an approach based on risk management, rather than threat identification, enables more sustainable and cooperative long-term responses. It further argues that integrating AI into conventional policymaking, without routinely resorting to securitization, is essential for building effective regulatory frameworks aligned with current global challenges.
References
Aguilar Antonio, Juan Manuel. 2024. “Trayectoria y modelo de gobernanza de las políticas de inteligencia artificial (IA) de los países de América del Norte”. Justicia 29 (45). https://doi.org/10.17081/just.29.45.7162
Aguilar Antonio, Juan Manuel. 2025. Tech Leap or Tech Lag: Latin America’s Quest to Keep up with Emerging Technologies. Miami: Steven J. Green School of International & Public Affairs / Florida International University.
Allen, Greg, and Taniel Chan. 2017. Artificial Intelligence and National Security. Cambridge, MA: Belfer Center for Science and International Affairs / Harvard Kennedy School.
Aradau, Claudia, and Rens van Munster. 2007. “Governing Terrorism Through Risk: Taking Precautions, (Un)Knowing the Future”. European Journal of International Relations 13 (1): 89-115. https://doi.org/10.1177/1354066107074290
Aschenbrenner, Leopold. 2024. “Situational Awareness: The Decade Ahead”. Situational Awareness, June. https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf
Biden, Joseph R. 2023. “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”. The White House, October 30. https://bidenwhitehouse.archives.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Bostrom, Nick. 2019. “The Vulnerable World Hypothesis”. Global Policy 10 (4): 455-476. https://doi.org/10.1111/1758-5899.12718
Bull, Hedley. 1977. “How is Order Maintained in World Politics?” In The Anarchical Society: A Study of Order in World Politics, 51-73. London: Palgrave Macmillan.
Buzan, Barry, and Ole Wæver. 2003. Regions and Powers: The Structure of International Security. Cambridge: Cambridge University Press.
Buzan, Barry, and Ole Wæver. 2009. “Macrosecuritisation and Security Constellations: Reconsidering Scale in Securitisation Theory”. Review of International Studies 35 (2): 253-276. https://doi.org/10.1017/S0260210509008511
Buzan, Barry, Ole Wæver, and Jaap de Wilde. 1998. Security: A New Framework for Analysis. Boulder: Lynne Rienner Publishers.
CAIS (Center for AI Safety). 2023. “Mitigating the Risk of Extinction from AI Should Be a Global Priority Alongside Other Societal-Scale Risks Such as Pandemics and Nuclear War”. CAIS. https://www.safe.ai/work/statement-on-ai-risk
Caruso, Jeff. 2024. “Drink the Kool-Aid All You Want, but Don’t Call AI an Existential Threat”. Bulletin of the Atomic Scientists, April 29. https://thebulletin.org/2024/04/drink-the-kool-aid-all-you-want-but-dont-call-ai-an-existential-threat/
Corry, Olaf. 2012. “Securitisation and ‘Riskification’: Second-order Security and the Politics of Climate Change”. Millennium 40 (2): 235-258. https://doi.org/10.1177/0305829811419444
Dans, Paul, and Steven Groves. 2023. Mandate for Leadership: The Conservative Promise. Washington, DC: The Heritage Foundation.
Diez, Thomas, Franziskus von Lucke, and Zehra Wellmann. 2016. The Securitisation of Climate Change: Actors, Processes and Consequences. London: Routledge.
Elander, Ingemar, Mikael Granberg, and Stig Montin. 2022. “Governance and Planning in a ‘Perfect Storm’: Securitising Climate Change, Migration and Covid-19 in Sweden”. Progress in Planning 164: 100634. https://doi.org/10.1016/j.progress.2021.100634
Englund, Mathilda, and Karina Barquet. 2023. “Threatification, Riskification, or Normal Politics? A Review of Swedish Climate Adaptation Policy 2005–2022”. Climate Risk Management 40: 100492. https://doi.org/10.1016/j.crm.2023.100492
Gruetzemacher, Ross, Shahar Avin, James Fox, and Alexander K. Saeri. 2024. “Strategic Insights from Simulation Gaming of AI Race Dynamics”. arXiv preprint. https://doi.org/10.48550/arXiv.2410.03092
Horowitz, Michael, Lauren Kahn, Christian Ruhl, Mary Cummings, Erik Lin-Greenberg, Paul Scharre, et al. 2020. “Policy Roundtable: Artificial Intelligence and International Security”. Texas National Security Review, June 2. https://tnsr.org/roundtable/policy-roundtable-artificial-intelligence-and-international-security/
IDAIS (International Dialogues on AI Safety). 2023. “The International Dialogues on AI Safety-Oxford Statement”. IDAIS, October 31. https://idais.ai/dialogue/idais-oxford/
IDAIS (International Dialogues on AI Safety). 2024a. “The International Dialogues on AI Safety-Beijing Statement”. IDAIS, March 11. https://idais.ai/dialogue/idais-beijing/
IDAIS (International Dialogues on AI Safety). 2024b. “The International Dialogues on AI Safety-Venice Statement”. IDAIS, September 8. https://idais.ai/dialogue/idais-venice/
Leung, Jade. 2019. “Who Will Govern Artificial Intelligence? Learning From the History of Strategic Politics in Emerging Technologies”. Doctoral dissertation, University of Oxford. https://ora.ox.ac.uk/objects/uuid:ea3c7cb8-2464-45f1-a47c-c7b568f27665
NSCAI (National Security Commission on Artificial Intelligence). 2019. National Security Commission on Artificial Intelligence: Interim Report. November 2019. https://digital.library.unt.edu/ark:/67531/metadc1851191/
Rhinard, Mark, Claudia Morsut, Elisabeth Angell, Simon Neby, Mathilda Englund, Karina Barquet et al. 2024. “Understanding Variation in National Climate Change Adaptation: Securitization in Focus”. Environment and Planning C: Politics and Space 42 (4): 676-696. https://doi.org/10.1177/23996544231212730
Santander, Pedro. 2011. “Por qué y cómo hacer análisis de discurso”. Cinta de moebio 41: 207-224. https://doi.org/10.4067/S0717-554X2011000200006
Sayler, Kelley M. 2020. “Artificial Intelligence and National Security”. Congressional Research Service Report R45178, November 10. https://nsarchive.gwu.edu/document/27080-document-226-congressional-research-service-kelley-m-sayler-artificial-intelligence
Science Media Centre. 2023. “Expert Reaction to a Statement on the Existential Threat of AI Published on the Centre for AI Safety Website”. Science Media Centre, May 30. https://www.sciencemediacentre.org/expert-reaction-to-a-statement-on-the-existential-threat-of-ai-published-on-the-centre-for-ai-safety-website/
Sears, Nathan Alexander. 2023. “Great Power Rivalry and Macrosecuritization Failure: Why States Fail to ‘Securitize’ Existential Threats to Humanity”. Doctoral dissertation, University of Toronto. http://hdl.handle.net/1807/126870
Trump, Donald. 2019. “Executive Order on Maintaining American Leadership in Artificial Intelligence”. The White House, February 11. https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/
Trump, Donald. 2020. “Executive Order 13960 of December 3, 2020. Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government”. Federal Register, December 8. https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government
U.S. Department of State. 2024. “Secretary Antony J. Blinken at the Advancing Sustainable Development Through Safe, Secure, and Trustworthy AI Event”. U.S. Department of State, September 23. https://2021-2025.state.gov/secretary-antony-j-blinken-at-the-advancing-sustainable-development-through-safe-secure-and-trustworthy-ai-event/
U.S.-China Economic and Security Review Commission. 2024. 2024 Annual Report to Congress. U.S.-China Economic and Security Review Commission. https://www.uscc.gov/annual-report/2024-annual-report-congress
Van Dijk, Teun A. 2016. “Análisis crítico del discurso”. Revista Austral de Ciencias Sociales 30: 203-222. https://doi.org/10.4206/rev.austral.cienc.soc.2016.n30-10
Wæver, Ole. 1993. Securitization and Desecuritization. Working Papers, vol. 5. Copenhagen: Centre for Peace and Conflict Research.
Wesselink, Anna, Karen S. Buchanan, Yola Georgiadou, and Esther Turnhout. 2013. “Technical Knowledge, Discursive Spaces and Politics at the Science-policy Interface”. Environmental Science & Policy 30: 1-9. https://doi.org/10.1016/j.envsci.2012.12.008
Williams, Michael C. 2003. “Words, Images, Enemies: Securitization and International Politics”. International Studies Quarterly 47 (4): 511-531. https://doi.org/10.1046/j.0020-8833.2003.00277.x
Zeng, Jinghan. 2021. “Securitization of Artificial Intelligence in China”. The Chinese Journal of International Politics 14 (3): 417-445. https://doi.org/10.1093/cjip/poab005
License
Copyright (c) 2025 Mónica A. Ulloa Ruiz, Guillem Bas Graells

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.