January 25, 2023

How ‘Killer Robots’ Can Help Us Learn from Mistakes Made in AI Policies

The use of lethal robots for law enforcement has turned from a science fiction concept to news snippets, thanks to recent high-profile debates in San Francisco and Oakland, Calif., as well as their actual use in Dallas. The San Francisco Board of Supervisors voted 8-3 to grant police the ability to use ground-based robots for lethal force “when risk of loss of life to members of the public or officers is imminent and officers cannot subdue the threat after using alternative force options or other de-escalation tactics.” Following immediate public outcry, the board reversed course a week later and unanimously voted to ban the lethal use of robots. Oakland underwent a less public but similar process, and in January the Dallas Police Department used a robot to end a standoff.

While recent events may have sparked a public outcry over the dangers of “killer robots,” we should not lose sight of the danger that poor processes create when deploying AI systems.

All of these events illustrate major pitfalls with the way that police currently use or plan to use lethal robots. Processes are rushed or nonexistent, conducted haphazardly, do not involve the public or civil society, and fail to create adequate oversight. These problems must be fixed in future processes that authorize artificial intelligence (AI) use in order to avoid controversy, collateral damage and even international destabilization.

The chief sin that a process can commit is to move too quickly. Decisions about how to use AI systems require careful deliberation and informed discussion, especially with something as high-stakes as the use of lethal force. A counterexample here is Department of Defense (DOD) Directive 3000.09, which covers the development and deployment of lethal autonomous systems. Because this decade-old policy lacks clarity on new technology and terminology, it is undergoing a lengthy, but deliberate, update.

Read the full article from The Hill.
