September 27, 2018

Future of Life Institute: AI and Nuclear Weapons – Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz

In 1983, Soviet military officer Stanislav Petrov prevented what could have been a devastating nuclear war by trusting his gut instinct that his early-warning system's report of incoming missiles was a false alarm. In this case, we praise Petrov for choosing human judgment over the automated system in front of him. But what will happen as the AI algorithms deployed in the nuclear sphere become more advanced, more accurate, and more difficult to understand? Will the next officer in Petrov's position be more likely to trust the "smart" machine in front of him?

On this month's podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official and the author of Army of None: Autonomous Weapons in the Future of War. Mike is a professor of political science at the University of Pennsylvania and the author of The Diffusion of Military Power: Causes and Consequences for International Politics.

Listen to the full conversation here.
