March 04, 2022

Putting Principles into Practice: How the U.S. Defense Department is Approaching AI

The U.S. Defense Department (DoD) is wrestling with how to institutionalize the concept of “Responsible AI” (RAI) – the belief that artificial intelligence (AI) systems should be developed and deployed safely, securely, ethically, and responsibly. The idea of RAI is built upon a years-long effort by the DoD to articulate and implement policies and principles around the appropriate and ethical use of AI capabilities.

RAI is the next step in this progression. In a May 2021 memo, Deputy Secretary of Defense Kathleen Hicks explained that it was critical for the DoD to create a “trusted ecosystem” for AI that “not only enhances our military capabilities, but also builds confidence with end-users, warfighters, and the American public.” The memo tasked the Joint Artificial Intelligence Center (JAIC) – a central body that seeks to synchronize AI activity across the DoD – with coordinating the development and implementation of RAI policies and guidance.

While the application of RAI is still at a nascent stage, the DoD’s continued messaging and prioritization of safe and ethical AI is important, and shows that the Pentagon’s interest is not waning.

The Defense Department, and the JAIC in particular, will have to keep momentum strong, working to ensure RAI principles and practices are ultimately digestible, actionable, and repeatable across the DoD’s myriad components.

Defense Department’s Progress on AI

The concept of RAI is the result of nearly four years of effort by the Defense Department to define its AI strategy and priorities. The DoD first laid the groundwork in June 2018 with the creation of the JAIC, and released its first AI strategy eight months later, which called for the adoption of “human-centered” AI. The strategy also promised U.S. leadership in the “responsible use and development of AI” by articulating a set of guiding principles.

Those guiding principles were developed by the Defense Innovation Board (DIB) – a federal advisory committee of technology experts – in October 2019, and were adopted by the Defense Department three months later. The five ethical principles – Responsible, Equitable, Traceable, Reliable, and Governable – were meant to serve as foundational guidance for the Defense Department’s approach toward AI.

Read the full article from SRiS.
