June 11, 2024
New CNAS Report Provides a Primer on Artificial Intelligence, Catastrophes, and National Security
Washington, June 11, 2024 – Today, the Center for a New American Security (CNAS) released a new report, Catalyzing Crisis: A Primer on Artificial Intelligence, Catastrophes, and National Security, by Bill Drexel and Caleb Withers.
The report provides an overview of the emerging catastrophic risks posed by artificial intelligence (AI) and their implications for national security. It discusses the global excitement and concern surrounding AI's rapid advancement and argues that efforts to mitigate large-scale AI risks often lead to confused and fragmented debates, making it difficult for policymakers to prioritize and address the most pressing threats.
The report underscores the necessity for national security professionals to focus in particular on the catastrophic risks of AI. These risks are more probable than existential risks from AI, which have received wider media attention; they demand near-term attention and could result in significant harm if not properly managed. The report identifies several high-risk domains where AI integration could lead to severe outcomes, including biosecurity, cybersecurity, military systems, and critical infrastructure. The report also cautions against excessive fixation on worst-case scenarios, advocating instead for an approach that enables the United States to ambitiously and responsibly pursue the most transformative technology in a generation.
To address emerging catastrophic risks associated with AI, this report proposes that:
- AI companies, government officials, and journalists should be more precise and deliberate in their use of terms around AI risks, particularly "catastrophic risks" and "existential risks," clearly differentiating the two;
- Building on the Biden administration’s 2023 executive order on AI, the departments of Defense, State, Homeland Security, and other relevant government agencies should more holistically explore the risks of AI integration into high-impact domains such as biosecurity, cybersecurity, finance, nuclear command and control, critical infrastructure, and other high-risk industries;
- Policymakers should support enhanced development of testing and evaluation for foundation models’ capabilities;
- The U.S. government should plan for AI-related catastrophes abroad that might impact the United States, and mitigate those risks by bolstering American resilience; and
- The United States and its allies must proactively establish catastrophe mitigation measures internationally where appropriate, for example by building on their promotion of responsible norms around autonomous weapons and AI in nuclear command and control.
The report emphasizes that while AI-related catastrophic risks may seem daunting, they are manageable with proactive and informed policy measures. By addressing these challenges soberly, national security practitioners can leverage AI's immense benefits while safeguarding against its potential dangers.