August 13, 2024

AI and the Evolution of Biological National Security Risks

Capabilities, Thresholds, and Interventions

Executive Summary

Not long after COVID-19 gave the world a glimpse of the catastrophic potential of biological events, experts began warning that rapid advancements in artificial intelligence (AI) could augur a world of bioterrorism, unprecedented superviruses, and novel targeted bioweapons. These dire warnings have risen to the highest levels of industry and government, from the CEOs of the world's leading AI labs raising alarms about new technical capabilities for would-be bioterrorists, to Vice President Kamala Harris’s concern that AI-enabled bioweapons “could endanger the very existence of humanity.” If true, such developments would expose the United States to unprecedented catastrophic threats well beyond COVID-19’s scope of destruction. But assessing the degree to which these concerns are warranted—and what to do about them—requires weighing a range of complex factors, including:

  • The history and current state of American biosecurity
  • The diverse ways in which AI could alter existing biosecurity risks
  • Which emerging technical AI capabilities would impact these risks
  • Where interventions today are needed

This report considers these factors to provide policymakers with a broad understanding of the evolving intersection of AI and biotechnology, along with actionable recommendations to curb the worst risks to national security from biological threats.

The sources of catastrophic biological risks are varied. Historically, policymakers have underappreciated the risks posed by the routine activities of well-intentioned scientists, even as the number of high-risk biosecurity labs and the frequency of dangerous incidents—perhaps including COVID-19 itself—continue to grow. State actors have traditionally been a source of considerable biosecurity risk, not least the Soviet Union’s shockingly large bioweapons program. But the unwieldiness and imprecision of bioweapons have meant that states remain unlikely to field large-scale biological attacks in the near term, even though the U.S. State Department expresses concerns about the potential bioweapons capabilities of North Korea, Iran, Russia, and China. On the other hand, nonstate actors—including lone wolves, terrorists, and apocalyptic groups—have an unnerving track record of attempting biological attacks, but with limited success due to the intrinsic complexity of building and wielding such delicate capabilities.

Today, fast-moving advancements in biotechnology—independent of AI developments—are changing many of these risks. A combination of new gene editing techniques, gene sequencing methods, and DNA synthesis tools is opening a new world of possibilities in synthetic biology for greater precision in genetic manipulation and, with it, a new world of risks from the development of powerful bioweapons and biological accidents alike. Cloud labs, which conduct experiments on others’ behalf, could enable nonstate actors by allowing them to outsource some of the experimental expertise that has historically acted as a barrier to dangerous uses. Though most cloud labs screen orders for malicious activity, not all do, and the constellation of existing bioweapons norms, conventions, and safeguards leaves open a range of pathways for bad actors to make significant progress in acquiring viable bioweapons.

But experts’ opinions on the overall state of U.S. biosecurity range widely, especially with regard to fears of nonstate actors fielding bioweapons. Those less concerned contend that even if viable paths to building bioweapons exist, the practicalities of constructing, storing, and disseminating them are far more complex than most realize, with numerous potential points of failure that concerned parties either fail to recognize or underemphasize. They also point to the absence of major bioattacks in recent decades, despite chronic warnings. A more pessimistic camp points to experiments that have demonstrated the seeming ease of constructing powerful viruses using commercially available inputs, and to seemingly diminishing barriers to the knowledge and technical capabilities needed to create bioweapons. Less controversial is the insufficiency of U.S. biodefenses to adequately address large-scale biological threats, whether naturally occurring, accidental, or deliberate. Despite COVID-19’s demonstration of the U.S. government’s inability to contain the effects of a major outbreak, the nation has made limited progress in mitigating the likelihood and potential harm of another, more dangerous biological catastrophe.

New AI capabilities may reshape the risk landscape for biothreats in several ways. AI is enabling new capabilities that might, in theory, allow advanced actors to optimize bioweapons for more precise effects, such as targeting specific genetic groups or geographies. Though such capabilities remain speculative, if realized they would dramatically alter states’ incentives to use bioweapons for strategic ends. Instead of risking their own militaries’ or populations’ health with unwieldy weapons, states could sabotage other nations’ food security or incapacitate enemies with public health crises from which they would be unlikely to rebound. Relatedly, the same techniques could create superviruses optimized for transmissibility and lethality, which may considerably expand the destructive potential of bioweapons. Tempering these fears, however, are several technical challenges that scientists would need to solve—if they can be solved at all.

The most pressing concern for AI-related biological risks stems from tools that may soon accelerate nonstate actors’ procurement of biological agents. Recent studies suggest that foundation models may soon help bad actors acquire weaponizable biological agents, even if the assistance these tools can currently provide remains marginal. Of particular concern are AI systems’ budding abilities to help troubleshoot where experiments have gone wrong, speeding the design-build-test-learn feedback loop that is essential to developing working biological agents. If made more effective, emerging AI tools could provide a boon to would-be bioweapons creators by more dynamically providing some of the knowledge needed to produce and use bioweapons, though such actors would still face other significant hurdles to bioweapons development that are often underappreciated.

AI could also impact biological risks in other ways. Technical flaws in AI tools could leave foundation models free to relay hazardous biological information to potential bad actors, or could inadvertently encourage researchers to pursue promising medicinal agents with unexpected negative side effects. Using AI to create more advanced automated labs could expose these labs to many of the risks of automation that have historically plagued other complex automated systems, and could make it easier for nonspecialists to concoct biological agents (depending upon the safety mechanisms that automated labs institute). Finally, heavy investment in companies and nations seeking to capitalize on AI’s potential for biotechnology could be creating competition dynamics that prioritize speed over safety. These risks are particularly acute in relation to China, where a variety of other factors shaping the country’s biotech ecosystem further escalate the risk of costly accidents.

Attempting to predict exactly how and when catastrophic risks at the intersection of biotechnology and AI will develop in the years ahead is a fool’s errand, given the inherent uncertainty about the scientific progress of both disciplines. Instead, this report identifies four areas of capabilities for experts and policymakers to monitor that will have the greatest impact on catastrophic risks related to AI:

  1. Foundation models’ ability to effectively provide experimental instructions for advanced biological applications
  2. Cloud labs’ and lab automation’s progress in diminishing the demands of experimental expertise in biotechnology
  3. Dual-use progress in research on host genetic susceptibility to infectious diseases
  4. Dual-use progress in precision engineering of viral pathogens

Careful attention to these capabilities will help experts and policymakers stay ahead of evolving risks in the years to come.

For now, the following measures should be taken to curb emerging risks at the intersection of AI and biosecurity:

  • Further strengthen screening mechanisms for cloud labs and other genetic synthesis providers
  • Engage in regular, rigorous assessments of the biological capabilities of foundation models for the full bioweapons lifecycle
  • Invest in technical safety mechanisms that can curb the threats of foundation models, especially enhanced guardrails for cloud-based access to AI tools, “unlearning” capabilities, and novel approaches to “information hazards” in model training
  • Update government investment to further prioritize agility and flexibility in biodefense systems
  • Long term, consider a licensing regime for a narrow set of biological design tools with potentially catastrophic capabilities, if such capabilities begin to materialize

Introduction

In 2020, COVID-19 brought the world to its knees, with nearly 29 million estimated deaths, acute social and political disruptions, and vast economic fallout. However, the event’s impact could have been far worse if the virus had been more lethal, more transmissible, or both. For decades, experts have warned that humanity is entering an era of potential catastrophic pandemics that would make COVID-19 appear mild in comparison. History is well acquainted with such instances, not least the 1918 Spanish Flu, the Black Death, and the Plague of Justinian—each of which would have dwarfed COVID-19’s deaths if scaled to today’s populations.

Equally concerning, many experts have sounded alarms about possible deliberate bioattacks in the years ahead. There is some precedent: in the weeks following 9/11, letters containing deadly anthrax spores were mailed to U.S. lawmakers and media outlets, and the attack could have been considerably worse had the perpetrator devised a more effective dispersion mechanism for the anthrax. The episode could portend a future in which more widely available biological capabilities enable malicious individuals and small groups to devastate governments and societies through strategic biological attacks. Jeff Alstott, former director for technology and national security at the National Security Council, warned in September 2023 that the classified record contained “fairly recent close-ish calls” of nonstate actors attempting to use biological weapons with “strategic scale.”

Accurately weighing just how credible such dire warnings are can feel next to impossible, and requires clear judgment in the face of opaque counterfactuals, alarmism, denialism, and horrific possibilities. But regardless of their likelihood, the destructive potential of biological catastrophes is undeniably enormous: history is littered with examples of societies straining and even collapsing under the weight of diseases—from ancient Athens’s ruinous contagion during the Peloponnesian War, to the bubonic plague that crippled the Eastern Roman Empire in the 6th century, to the cataclysmic salmonella outbreak in the Aztec empire in the 16th century. It is essential that U.S. leaders soberly address the risks of biological catastrophe—which many claim will change dramatically in the age of artificial intelligence.

Government and industry leaders have expressed grave concerns about the potential for AI to dramatically heighten the risks of catastrophic events in general, and biological catastrophes in particular. In a July 2023 congressional hearing, Dario Amodei, CEO of leading AI lab Anthropic, stated that within two to three years, there was a “substantial risk” that AI tools would “greatly widen the range of actors with the technical capability to conduct a large-scale biological attack.” Former United Kingdom (UK) Prime Minister Rishi Sunak similarly expressed urgent concern that there may only be a “small window” of time before AI enables a step change in bioterrorist capabilities. U.S. Vice President Kamala Harris warned of the threat of “AI-formulated bio-weapons that could endanger the lives of millions . . . [and] could endanger the very existence of humanity.” These are serious claims. If true, they represent a significant increase in bioterrorism risks. But are they true?

This report aims to clearly assess AI’s impact on the risks of biocatastrophe. It first considers the history and existing risk landscape of American biosecurity independent of AI disruptions. Drawing on a sister report, Catalyzing Crisis: A Primer on Artificial Intelligence, Catastrophes, and National Security, this study then considers how AI is impacting biorisks across four dimensions of AI safety: new capabilities, technical challenges, integration into complex systems, and conditions of AI development. Building on this analysis, the report identifies areas of future capability development worth monitoring as the technology continues to evolve, because they may substantially alter the risks of large-scale biological catastrophes. Finally, the report recommends actionable steps for policymakers to address current and near-term risks of biocatastrophes.

While the theoretical potential for AI to expand the likelihood and impact of biological catastrophes is very large, to date AI’s impacts on biological risks have been marginal. There is no way to know for certain if or when more severe risks will ultimately materialize, but careful monitoring of several capabilities at the nexus of AI and biotechnology can provide useful indications, including the effectiveness of experimental instructions from foundation models, changing demands of tacit knowledge as lab automation increases, and dual-use AI-powered research into host genetic susceptibility to infectious diseases and precision pathogen engineering. Lest they be caught off guard, policymakers should act now to shore up America’s biodefenses for the age of AI by strengthening screening mechanisms for gene synthesis providers, regularly assessing the bioweapons capabilities of foundation models, investing in a range of technical AI safety mechanisms, and preparing to institute licensing requirements for sophisticated biological design tools if they begin to approach potentially catastrophic capabilities.


  1. Kamala Harris, “Remarks by Vice President Harris on the Future of Artificial Intelligence” (U.S. Embassy, London, November 1, 2023), https://www.whitehouse.gov/briefing-room/speeches-remarks/2023/11/01/remarks-by-vice-president-harris-on-the-future-of-artificial-intelligence-london-united-kingdom.
  2. Tejal Patwardhan et al., “Building an Early Warning System for LLM-Aided Biological Threat Creation,” OpenAI, January 31, 2024, https://openai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creation; Christopher A. Mouton, Caleb Lucas, and Ella Guest, The Operational Risks of AI in Large-Scale Biological Attacks: Results of a Red-Team Study (RAND Corporation, January 25, 2024), http://www.rand.org/pubs/research_reports/RRA2977-2.html; Anthropic, “Frontier Threats Red Teaming for AI Safety,” July 26, 2023, https://www.anthropic.com/index/frontier-threats-red-teaming-for-ai-safety; Emily H. Soice et al., “Can Large Language Models Democratize Access to Dual-Use Biotechnology?” (arXiv, June 6, 2023), https://doi.org/10.48550/arXiv.2306.03809.
  3. “The Pandemic’s True Death Toll,” The Economist, accessed April 15, 2024, https://www.economist.com/graphic-detail/coronavirus-excess-deaths-estimates.
  4. The 1918 Spanish Flu killed approximately 1 to 2 percent of the world’s population—equivalent to 70 to 150 million people today: Alain Gagnon et al., “Age-Specific Mortality during the 1918 Influenza Pandemic: Unravelling the Mystery of High Young Adult Mortality,” PLoS ONE 8, no. 8 (August 5, 2013): e69586, https://doi.org/10.1371/journal.pone.0069586. The Black Death killed about half of Europe’s population over a few years in the mid-1300s: Ole J. Benedictow, The Black Death 1346–1353: The Complete History (Woodbridge: Boydell Press, 2006).
  5. Advanced Technology: Examining Threats to National Security: Hearing before the Subcommittee on Emerging Threats and Spending Oversight of the Senate Committee on Homeland Security and Governmental Affairs, 118th Cong. (2023) (testimony of Jeff Alstott, senior information scientist, RAND Corporation), https://www.hsgac.senate.gov/subcommittees/etso/hearings/advanced-technology-examining-threats-to-national-security.
  6. Melissa De Witte, “How Pandemics Catalyze Social and Economic Change,” Stanford News, April 30, 2020, https://news.stanford.edu/2020/04/30/pandemics-catalyze-social-economic-change; Ewen Callaway, “Collapse of Aztec Society Linked to Catastrophic Salmonella Outbreak,” Nature 542 (February 23, 2017): 404, https://doi.org/10.1038/nature.2017.21485; Williamson Murray, “On Plagues and Their Long-Term Effects,” Hoover Institution, April 24, 2020, https://www.hoover.org/research/plagues-and-their-long-term-effects.
  7. “Pause Giant AI Experiments: An Open Letter,” Future of Life Institute, March 22, 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments; “Statement on AI Risk,” Center for AI Safety, May 30, 2023, https://www.safe.ai/statement-on-ai-risk#signatories.
  8. Oversight of A.I.: Principles for Regulation: Hearing before the Subcommittee on Privacy, Technology, and the Law of the Senate Judiciary Committee, 118th Cong. (2023) (statement of Dario Amodei, CEO, Anthropic), https://www.judiciary.senate.gov/imo/media/doc/2023-07-26_-_testimony_-_amodei.pdf.
  9. James Titcomb, “Britain Has One Year to Prevent AI Running Out of Control, Sunak Fears,” The Telegraph, September 25, 2023, https://www.telegraph.co.uk/business/2023/09/25/artificial-intelligence-create-bioweapons-warning.
  10. Harris, “Remarks by Vice President Harris on the Future of Artificial Intelligence.”
  11. Bill Drexel and Caleb Withers, Catalyzing Crisis: A Primer on Artificial Intelligence, Catastrophes, and National Security (Center for a New American Security, June 2024), https://www.cnas.org/publications/reports/catalyzing-crisis.

Authors

  • Bill Drexel

    Fellow, Technology and National Security Program


  • Caleb Withers

    Research Assistant, Technology and National Security Program

