Artificial Intelligence Safety and Stability
Nations around the world are investing in artificial intelligence (AI) to improve their military, intelligence, and other national security capabilities. Yet AI technology at present has significant safety and security vulnerabilities. AI systems can fail, potentially in unexpected ways, from a variety of causes. Moreover, the interactive nature of military competition means that one nation’s actions affect others, including in ways that may undermine mutual stability. There is an urgent need to explore actions that can mitigate these risks, such as improved processes for AI assurance, norms and best practices for responsible AI adoption, and confidence-building measures that improve stability among nations.
The Center for a New American Security (CNAS) Artificial Intelligence Safety and Stability project aims to better understand AI risks and specific steps that can be taken to improve AI safety and stability in national security applications. Major lines of effort include:
- Compute Governance: Compute is emerging as a key lever for AI governance as a consequence of technological and geopolitical trends. Cutting-edge machine learning systems continue to rely on ever-larger models, datasets, and amounts of training compute. This project seeks to better understand and shape opportunities for compute governance, including identifying which government policies, private-sector actions, and technological developments would increase or decrease the controllability of compute in the future.
- Mitigating Catastrophic AI Failures: As AI becomes integrated into more industries, it will inevitably be used in high-consequence applications where failures could have significant ramifications. This project focuses on anticipating, preventing, and mitigating catastrophic AI failures, including both accidents and unauthorized use.
- U.S. Department of Defense (DoD) AI Test and Evaluation: Military AI systems must be reliable, secure, and trusted by end users. Mistakes on the battlefield can have dire consequences. How can DoD create test, evaluation, verification, and validation (TEVV) practices that are appropriate for AI and machine learning systems? What new DoD standards and metrics are needed for AI? This project focuses primarily on direct engagement with the defense community, including military and civilian DoD officials, scientists at DoD research labs, and the defense industry.
- Understanding Chinese Decision-making on AI: U.S. policymakers are highly concerned about Chinese advances in AI, especially advances by China’s military, the People’s Liberation Army. Yet there remains a significant gap in U.S. understanding of how China might employ AI for military purposes. This project aims to increase U.S. policymakers’ understanding of how Chinese developments in AI might affect stability between the United States and China, and to develop recommendations for mitigating stability risks.
- Understanding Russian Decision-making on AI: Russia has demonstrated a marked willingness to implement automation and autonomy within its military forces, often despite shortfalls in performance. This project aims to increase U.S. policymakers’ understanding of Russian decision-making on AI and stability, particularly how Russia might employ AI in its military forces and how such uses might affect U.S.-Russian stability.
This cross-program effort includes the CNAS Technology and National Security, Defense, Indo-Pacific Security, Transatlantic Security, and Energy, Economics, and Security programs. CNAS experts will share their findings in public reports and policy briefs with recommendations for policymakers.
This project is made possible with the generous support of Open Philanthropy.
CNAS Experts
- Paul Scharre, Executive Vice President and Director of Studies
- Stacie Pettyjohn, Senior Fellow and Director, Defense Program
- Andrea Kendall-Taylor, Senior Fellow and Director, Transatlantic Security Program
- Emily Kilcrease, Senior Fellow and Director, Energy, Economics and Security Program
- Jacob Stokes, Senior Fellow and Deputy Director, Indo-Pacific Security Program
- Janet Egan, Senior Fellow, Technology and National Security Program
- Bill Drexel, Fellow, Technology and National Security Program
- Josh Wallin, Fellow, Defense Program
- Michael Depp, Research Associate, AI Safety and Stability Project
- Noah Greene, Research Assistant, AI Safety and Stability Project
- Caleb Withers, Research Associate, Technology and National Security Program
- Tim Fist, Senior Adjunct Fellow, Technology and National Security Program