November 02, 2023

Time to Act: Building the Technical and Institutional Foundations for AI Assurance

In March 2022, an Indian BrahMos missile launched from Sirsa landed in a sparsely populated area near Mian Channu, Pakistan. The episode, which India described as an accident, fortunately caused no casualties, but it raised fears that Pakistan might respond in kind, fueling escalation. A few months later, similar uncertainty surrounding the origin of a missile that exploded in Przewodów, Poland, during a particularly acute period of Russian bombardment of Ukrainian energy facilities triggered fears that NATO might find itself at war. And perhaps the most famous close call was the nuclear false alarm of 1983, when a Soviet early warning system reported the launch of five U.S. intercontinental ballistic missiles. Stanislav Petrov’s decision to disregard that warning is credited with averting nuclear war. The integration of AI tools into targeting and engagement, as well as early warning and decision support functions, could make such close calls more common. In light of this heightened risk, efforts to build confidence that AI-enabled systems behave as designed become all the more important.

Assurance for AI systems presents unique challenges.

While the challenges facing policymakers in the realm of AI governance are significant, they are not insurmountable.

Over the last several years, analyses of potential future conflicts have stressed the growing role that autonomous systems will play across domains and functions. As these systems are integrated into militaries around the world, it is increasingly likely that accidents involving improperly tested and evaluated platforms could lead to escalation, whether inadvertent or deliberate. Collaboration with both allies and adversaries on testing and evaluation has the potential to reduce such accidents and the consequent escalation of conflicts, while strengthening compliance with international law. Establishing international standards and norms for the employment of AI in safety-critical contexts is the prudent way forward for this collaboration.

Read the full article from Lawfare.
