November 07, 2024
Controlling the danger: managing the risks of AI-enabled nuclear systems
Recent developments in artificial intelligence (AI) have accelerated the debate among US and European policymakers about the opportunities and risks of military AI. In many cases, this debate is an attempt to answer four simple questions with murky answers: What are the military advantages of AI? What are its risks? How can we balance the risks and opportunities? And how do we prevent these risks from escalating into a greater crisis? This paper deals with one of these risks: the further integration of AI and autonomous tools into the nuclear command, control, and communications (NC3) networks of nuclear-armed powers. I start by outlining some of the ways AI has previously been used in NC3 systems, then examine the risks that these systems pose, and finally offer some solutions that the international community can deploy.
Read the full article from the NATO Defense College.