June 11, 2024

Catalyzing Crisis

A Primer on Artificial Intelligence, Catastrophes, and National Security

Executive Summary

The arrival of ChatGPT in November 2022 sparked both great excitement and fear around the world about the potential and risks of artificial intelligence (AI). In response, several AI labs, national governments, and international bodies have launched new research and policy efforts to mitigate large-scale AI risks. However, these growing mitigation efforts have also produced a divisive and often confusing debate about how to define, distinguish, and prioritize severe AI hazards. This categorical confusion could complicate policymakers’ efforts to discern the unique features and national security implications of the threats AI poses, and could hinder efforts to address them. Specifically, emerging catastrophic risks with weighty national security implications are often lost between the two dominant strands of AI concern in public discourse: present-day systemic harms from AI related to bias and discrimination on the one hand, and contentious, future-oriented debates about existential risks from AI on the other.

This report aims to:

  • Demonstrate the growing importance of mitigating AI’s catastrophic risks for national security practitioners
  • Clarify what AI’s catastrophic risks are (and are not)
  • Introduce the dimensions of AI safety that will most shape catastrophic risks

Catastrophic AI risks, like all catastrophic risks, demand attention from the national security community as critical threats to the nation’s health, security, and economy. In scientifically advanced societies like the United States, powerful technologies can pose outsized risks of catastrophe, especially in cases such as AI, where the technology is novel, fast-moving, and relatively untested. Given the wide range of potential applications for AI, including in biosecurity, military systems, and other high-risk domains, prudence demands proactive efforts to distinguish, prioritize, and mitigate risks. Indeed, past incidents related to finance, biological and chemical weapons, cybersecurity, and nuclear command and control all hint at possible AI-related catastrophes in the future, including AI-accelerated production of biological weapons of mass destruction (WMD), financial meltdowns triggered by AI trading, or even accidental weapons exchanges initiated by AI-enabled command and control systems. In addition to helping initiate crises, AI tools can also erode states’ abilities to cope with them by degrading their public information ecosystems, potentially making catastrophes more likely and their effects more severe.

Perhaps the most confusing aspect of public discourse about AI risks is the inconsistent and sometimes interchangeable use of the terms “catastrophic risks” and “existential risks”—the latter often provoking strong disagreements among experts. To disentangle these concepts, it is helpful to consider different crises along a spectrum of magnitude, in which the relative ability of a state to respond to a crisis determines its classification. By this definition, a catastrophic event is one that requires the highest levels of state response, with effects that are initially unmanageable or mismanaged—causing large-scale losses of life or economic vitality. Existential risks are even larger in magnitude, threatening to overwhelm all states’ ability to respond, resulting in the irreversible collapse of human civilization or the extinction of humanity. Both differ from smaller-scale crises, such as emergencies and disasters, which initiate local and regional state crisis management responses, respectively. While the prospect of existential risks unsurprisingly provokes pitched disagreements and significant media attention, catastrophic risks are of nearer-term relevance, especially to national security professionals. Not only are catastrophic risks less speculative, but the capabilities that could enable AI catastrophes are also closer to development than those that would be of concern for existential risks. Catastrophic AI risks are also, in many cases, variants on issues that the U.S. government has already identified as high priorities for national security, including possibilities of nuclear escalation, biological attacks, or financial meltdowns.

Despite recent public alarm concerning the catastrophic risks of powerful “deep learning”–based AI tools in particular, the technology’s integration into high-risk domains is still largely nascent, giving the U.S. government and industry the opportunity to help develop the technology with risk mitigation in mind. But accurately predicting the full range of the most likely AI catastrophes and their impacts is challenging for several reasons, particularly because emerging risks will depend on how AI tools are integrated into high-impact domains with the potential to disrupt society. Instead of attempting such predictions, this report distills prior research across a range of fields into four dimensions of AI safety that shape AI’s catastrophic risks. Within each dimension, the report outlines the relevant issues’ dynamics and relevance to catastrophic risk.

Safety Dimension: New capabilities
Question: What dangers arise from new AI-enabled capabilities across different domains?
Issues:
  • Dangerous capabilities
  • Emergent capabilities
  • Latent capabilities

Safety Dimension: Technical safety challenges
Question: In what ways can technical failures in AI-enabled systems escalate risks?
Issues:
  • Alignment, specification gaming
  • Loss of control
  • Robustness
  • Calibration
  • Adversarial attacks
  • Explainability and interpretability

Safety Dimension: Integrating AI into complex systems
Question: How can the integration of AI into high-risk systems disrupt or derail their operations?
Issues:
  • Automation bias
  • Operator trust
  • The lumberjack effect
  • Eroded sensitivity to operations
  • Deskilling, enfeeblement
  • Tight coupling
  • Emergent behavior
  • Release and proliferation

Safety Dimension: Conditions of AI development
Question: How do the conditions under which AI tools are developed influence their safety?
Issues:
  • Corporate and geopolitical competitive pressures
  • Deficient safety cultures
  • Systemic underinvestment in technical safety R&D
  • Social resilience
  • Engineering memory life cycles

Though presented individually, in practice these issues are most likely to lead to catastrophic outcomes when they occur in combination. Taken together, perhaps the most underappreciated feature of emerging catastrophic AI risks to surface from this exploration is the outsized likelihood of AI catastrophes originating from China. There, the Chinese Communist Party’s efforts to accelerate AI development, its track record of authoritarian crisis mismanagement, and its censorship of information on accidents combine to make catastrophic risks related to AI more acute.

To address emerging catastrophic risks associated with AI, this report proposes that:

  • AI companies, government officials, and journalists should be more precise and deliberate in their use of terms around AI risks, particularly in reference to “catastrophic risks” and “existential risks,” clearly differentiating the two.
  • Building on the Biden administration’s 2023 executive order on AI, the Departments of Defense, State, and Homeland Security, along with other relevant government agencies, should more holistically explore the risks of AI integration into high-impact domains such as biosecurity, cybersecurity, finance, nuclear command and control, critical infrastructure, and other high-risk industries.
  • Policymakers should support the further development of testing and evaluation for foundation models’ capabilities.
  • The U.S. government should plan for AI-related catastrophes abroad that might impact the United States, and mitigate those risks by bolstering American resilience.
  • The United States and its allies must proactively establish catastrophe mitigation measures internationally where appropriate, for example by building on their promotion of responsible norms for autonomous weapons and for AI in nuclear command and control.

AI-related catastrophic risks may seem complex and daunting, but they remain manageable. While national security practitioners must appraise these risks soberly, they must also resist the temptation to over-fixate on worst-case scenarios at the expense of pioneering a strategically indispensable, powerful new technology. To this end, efforts to ensure robust national resilience against AI’s catastrophic risks go hand in hand with pursuing the immense benefits of AI for American security and competitiveness.

Introduction

Since ChatGPT was launched in November 2022, artificial intelligence (AI) systems have captured public imagination across the globe. ChatGPT’s record-breaking speed of adoption—logging 100 million users in just two months—gave an unprecedented number of individuals direct, tangible experience with the capabilities of today’s state-of-the-art AI systems. More than any other AI system to date, ChatGPT and subsequent competitor large language models (LLMs) have awakened societies to the promise of AI technologies to revolutionize industries, cultures, and political life. This public recognition follows from a growing awareness in the U.S. government that AI, in the words of the National Security Commission on Artificial Intelligence, “will be the most powerful tool in generations for benefiting humanity,” and an indispensable strategic priority for continued American leadership.

But alongside the excitement surrounding ChatGPT is growing alarm about myriad risks from emerging AI capabilities. These range from systemic bias and discrimination to labor automation, novel biological and chemical weapons, and even, some experts argue, the possibility of human extinction. The sudden explosion of attention to such diverse concerns has ignited fierce debates about how to characterize and prioritize these risks. Leading AI labs and policymakers alike are beginning to devote considerable attention to catastrophic risks stemming from AI specifically: OpenAI launched a purpose-built Preparedness team to address these risks, while Anthropic crafted a Responsible Scaling Policy to “require safety, security, and operational standards appropriate to a model’s potential for catastrophic risk.” In November 2023, 28 countries signed the Bletchley Declaration, a statement resulting from the United Kingdom’s (UK) AI Safety Summit that likewise affirmed AI’s potential to produce “catastrophic” harms.

For national security practitioners, the maelstrom of often-conflicting opinions about the potential harms of AI can obscure emerging catastrophic risks with direct national security implications. Between the attention devoted to the harms AI is already causing through bias, discrimination, and other systemic impacts on the one hand, and the focus on future-oriented debates about existential risks posed by AI on the other, these emerging catastrophic threats can easily be overlooked. That would be a major mistake: progress in AI could enable or contribute to scenarios with debilitating effects on the United States, from enhanced bioterrorism to nationwide financial meltdowns to unintended nuclear exchanges. Given the potential magnitude of these events, policymakers urgently need sober analysis to better understand the emerging risks of AI-enabled catastrophes. Better clarity about the large-scale risks of AI need not inhibit the United States’ competitiveness in developing this strategically indispensable technology in the years ahead, as some fear. To the contrary, a more robust understanding of large-scale risks related to AI may help the United States forge ahead with greater confidence, and avoid incidents that could hamstring development due to public backlash.

This report aims to help policymakers understand catastrophic AI risks and their relevance to national security in three ways. First, it attempts to further clarify AI’s catastrophic risks and distinguish them from other threats such as existential risks that have featured prominently in public discourse. Second, the report explains why catastrophic risks associated with AI development merit close attention from U.S. national security practitioners in the years ahead. Finally, it presents a framework of AI safety dimensions that contribute to catastrophic risks.

Despite recent public alarm concerning the catastrophic risks of AI, the technology’s integration into high-risk domains is still largely nascent, especially for the more powerful AI systems built using the deep learning techniques that took off around 2011 and serve as the foundation for more recent breakthroughs. Indeed, current deep learning–based AI systems do not yet directly alter existing catastrophic risks in any one domain to a significant degree—at least not in any obvious ways. Unanticipated present risks notwithstanding, this reality should offer some reassurance at a time of widespread anxiety about AI risks among Americans, as it gives both the government and industry an opportunity to help guide the technology’s development away from the worst threats. But it should not invite complacency: AI may pose very real catastrophic risks to national security in the years ahead, some of them perhaps soon. The challenge for national security practitioners at this stage is to continuously monitor and anticipate emerging large-scale risks from AI as the technology rapidly evolves, often in unexpected ways, while the United States continues to ambitiously pursue AI’s transformative potential. To support that effort, this report proposes four key dimensions of AI safety—the technology’s novel capabilities, technical faults, integration into complex systems, and the broader conditions of its development—that will shape the risks of AI catastrophes going forward.


  1. Krystal Hu, “ChatGPT Sets Record for Fastest-Growing User Base: Analyst Note,” Reuters, February 2, 2023, https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01.
  2. National Security Commission on Artificial Intelligence, Final Report, March 1, 2021, https://assets.foleon.com/eu-central-1/de-uploads-7e3kk3/48187/nscai_full_report_digital.04d6b124173c.pdf.
  3. “Frontier Risk and Preparedness,” OpenAI blog, October 26, 2023, https://openai.com/blog/frontier-risk-and-preparedness; “Anthropic’s Responsible Scaling Policy,” Anthropic, September 19, 2023, https://www.anthropic.com/news/anthropics-responsible-scaling-policy.
  4. “The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023,” AI Safety Summit, November 1, 2023, https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023.
  5. Anna Tong, “AI Threatens Humanity’s Future, 61% of Americans Say: Reuters/Ipsos Poll,” Reuters, May 17, 2023, https://www.reuters.com/technology/ai-threatens-humanitys-future-61-americans-say-reutersipsos-2023-05-17.

Authors

  • Bill Drexel

    Fellow, Technology and National Security Program

    Bill Drexel is a Fellow for the Technology and National Security Program at CNAS. His work focuses on Sino-American competition, artificial intelligence, and technology as an ...

  • Caleb Withers

    Research Associate, Technology and National Security Program

    Caleb Withers is a Research Associate for the Technology and National Security Program at CNAS, supporting the center’s initiative on artificial intelligence safety and stabil...
