August 13, 2024
New CNAS Report on AI and Biological National Security Risks
Washington, August 13, 2024 — Today, the Center for a New American Security (CNAS) released a new report, AI and the Evolution of Biological National Security Risks: Capabilities, Thresholds, and Interventions, by Bill Drexel and Caleb Withers.
The report analyzes the evolving intersection of artificial intelligence (AI) and biological national security risks, highlighting growing concern among experts that AI advancements could enable bioterrorism, the creation of unprecedented superviruses, and the development of novel targeted bioweapons. If realized, such advancements could expose the United States to catastrophic threats far exceeding the impact of COVID-19.
The report surveys the history and current state of American biosecurity, emphasizing the diverse ways AI could alter existing risks. AI's potential to optimize bioweapons for targeted effects, such as pathogens tailored to specific genetic groups or geographies, could significantly shift states' incentives to use such weapons for strategic purposes. Moreover, AI tools could soon enable non-state actors, including terrorists and lone wolves, to acquire biological agents more quickly, posing new biosecurity challenges. While these capabilities remain speculative, they would dramatically alter the national security landscape if achieved.
To address these emerging threats, the report proposes several actionable recommendations. It calls for strengthening screening mechanisms for cloud labs and gene synthesis providers, conducting rigorous assessments of foundation models' biological capabilities throughout the bioweapons lifecycle, and investing in technical safety mechanisms to curb threats posed by foundation models. Additionally, the report emphasizes the need to update government investments to prioritize agility and flexibility in biodefense systems, and it considers long-term measures such as a licensing regime for biological design tools with potentially catastrophic capabilities.
While the threat of AI-powered biocatastrophes remains largely speculative, current biological safeguards already need significant updating. Taking proactive measures now can help policymakers and experts stay ahead of evolving risks and keep biosecurity on a trajectory that protects both innovation and public safety. AI-enabled biological catastrophes, though daunting, are far from inevitable.
For more information or to arrange an interview with the report authors, please contact Alexa Whaley at [email protected].