April 23, 2018

The promise and peril of military applications of artificial intelligence

Artificial intelligence (AI) is having a moment in the national security space. While the public may still equate the notion of artificial intelligence in the military context with the humanoid robots of the Terminator franchise, there has been significant growth in discussions about the national security consequences of artificial intelligence. These discussions span academia, business, and governments, from Oxford philosopher Nick Bostrom’s concern about the existential risk to humanity posed by artificial intelligence, to Tesla founder Elon Musk’s concern that artificial intelligence could trigger World War III, to Vladimir Putin’s statement that leadership in AI will be essential to global power in the 21st century.

What does this really mean, especially when you move beyond the rhetoric of revolutionary change and consider the real-world consequences of potential applications of artificial intelligence to militaries? Artificial intelligence is not a weapon. Instead, artificial intelligence, from a military perspective, is an enabler, much like electricity and the combustion engine. Thus, the effect of artificial intelligence on military power and international conflict will depend on particular applications of AI by militaries and policymakers. What follows are key issues for thinking about the military consequences of artificial intelligence, including principles for evaluating what artificial intelligence “is” and how it compares to technological changes of the past, what militaries might use artificial intelligence for, potential limitations on the use of artificial intelligence, and the impact of AI military applications on international politics.

Read the full article at The Bulletin of the Atomic Scientists
