November 15, 2017
Remarks by Paul Scharre to the United Nations Group of Governmental Experts on Lethal Autonomous Weapon Systems
Geneva, Switzerland
As we have heard this week, artificial intelligence and autonomy are rapidly advancing. While no nation has said it will build autonomous weapons, the technology will make such systems possible; indeed, they are already possible today for simple missions.
What would be the consequences if we were to delegate targeting decisions to machines? Would it make war more precise and humane, saving lives? Or would it lead to more accidents and less human responsibility?
A major challenge in answering these questions is that the technology is constantly changing. Eighteen months ago, when this body last met in 2016, the AI research company DeepMind had just unveiled AlphaGo, a computer program that defeated one of the world's top Go players. To accomplish that feat, DeepMind trained AlphaGo on 30 million moves from human games so that it could learn how to play.
Last month, DeepMind released a new version, AlphaGo Zero, that learned how to play Go on its own, without any human training data at all. After a mere three days of self-play, it was good enough to defeat the 2016 version 100 games to none.
With technology moving forward at this pace, what will be possible ten or even five years from now?
If we agree to forswear some technology, we could end up giving up uses of automation that would make war more humane. On the other hand, a headlong rush into a future of increasing autonomy, with no discussion of where it is taking us, is not in humanity’s interests either. We should control our destiny.
Instead, we should ask: “What role do we want humans to have in lethal decision-making in war?”
It is important to understand the technology, but to answer this question we need to focus on the human. The technology changes, but the human stays the same.
What decisions in war require uniquely human judgment? If we had all of the technology we could imagine, what decisions would we still want people to make in war, and why?
This concept has been formulated in many ways, and many states have expressed the importance of meaningful, appropriate, or necessary human judgment or control. The specific term is less important. What matters is that states continue to explore the meaning behind these terms in order to better understand the legal, moral, operational, and strategic rationale for human involvement in the use of force.
This perspective – focusing on the human – can be our guiding light for navigating our way through this period of technological change.
Paul Scharre (@paul_scharre) is a senior fellow at the Center for a New American Security and author of the forthcoming book Army of None: Autonomous Weapons and the Future of War, to be published in April 2018.
The remarks are available online.