July 29, 2015
CNAS Press Note: The Open Letter on the Dangers of Autonomous Weapons
Washington, July 29 – On news that many high-profile scientists and academics, including Stephen Hawking and Elon Musk, had signed an open letter warning of the dangers of autonomous weapons, Center for a New American Security (CNAS) Ethical Autonomy Project Director Paul Scharre has written a new press note, “The Open Letter on the Dangers of Autonomous Weapons.” In the press note, he lays out the important questions raised by the letter and how governments can better answer them.
The full press note is below:
This week, a number of science and technology visionaries, including Stephen Hawking and Elon Musk, launched an open letter warning of the dangers of autonomous weapons – weapons that would select and engage targets on their own.
They join a growing chorus of voices sounding the alarm about possible future autonomous weapons, including the International Committee of the Red Cross, the UN special rapporteur for extrajudicial killings, and a consortium of over 50 non-governmental organizations as part of a “Campaign to Stop Killer Robots.” Even the U.S. Department of Defense has put in place strict guidelines on the use of autonomy in weapon systems.
The letter calls for a ban on “offensive autonomous weapons beyond meaningful human control,” but what does that mean? Most countries have departments or ministries of “defense,” not offense. How does one distinguish between offensive and defensive weapons? How does one define an “autonomous weapon”? And what does “meaningful human control” entail?
These are important issues that need further development. This letter is a valuable step forward in the ongoing dialogue on autonomous weapons, bringing in the perspectives of engineers, roboticists, and scientists working at the cutting edge of artificial intelligence (AI). As military professionals, lawyers, ethicists, and peace activists grapple with the appropriate role of autonomy and human control in weapon systems, they should do so informed by an understanding of how AI research is unfolding.
Scharre is available for interviews on the letter. To arrange an interview, please contact Neal Urwitz at [email protected], or call 202-457-9409.