November 21, 2012
Rage Against the Machines
Human Rights Watch recently put out a
report
demanding a ban on fully autonomous weapons systems and more scrutiny, as well
as additional legal controls, to regulate the development and proliferation of
robotic weapons. Human Rights Watch wants an international treaty prohibiting
both weapons that “deliver force under the oversight of a human operator who
can override the robot’s actions” and “robots that are capable of selecting
targets and delivering force without any human input or interaction.” In a
report meant to advance policy against indiscriminate warfare, they instead
perpetuate serious misperceptions about the way militaries operate and engage
in needless fear-mongering about fully autonomous weapons, which are highly
unlikely to ever exist.
The first major problem
is that HRW even has a category of “human-out-of-the-loop” weapons, which are
supposedly going to enter modern warfare. This is, needless to say, a logically
ludicrous concept. No weapon can fully take humans out of the loop unless it
constitutes its own command structure. The distinction HRW draws between
“human-on-the-loop” and “human-out-of-the-loop” weapons is totally arbitrary,
particularly since one key criterion HRW uses is that “human-on-the-loop”
weapons have human oversight that can override and veto their actions, while
“human-out-of-the-loop” weapons do not.
Unless the U.S. were
to design a weapon that, upon activation, simply began doing whatever its
programming told it to do and nothing else, a “human-out-of-the-loop” weapon
does not and will not exist. Furthermore, there is absolutely no reason for a
human to want to deploy such a weapon. Correct me if I’m wrong, but what
commanding officer wants to swap out his subordinates for a machine that will
be less responsive to his orders?
Weapons fall into a
command structure. Every weapon, regardless of its level of autonomy, will
conduct missions designed by humans and carried out under human orders,
supervised by humans with superior power over it. Indeed, compared to human
subordinates, a “human-on-the-loop” weapon gives a commander more opportunities to
micromanage combat performance. If anything, a commander has fewer
opportunities to scapegoat subordinates for the actions of an autonomous
system.
HRW worries that victims
of “fully autonomous” weapons could not confront those who have wronged them in
court, as if this somehow obviated accountability through the chain of command.
Supposedly, the identity of the entity pulling the trigger is essential to the
prevention and prosecution of war crimes. But in this sense, robots do not
change much of anything. Artillery gunners and their commanding officers, for example,
anything. Artillery gunners and their commanding officers, for example,
frequently lack the information necessary to assess whether their fire mission
is fully lawful or ethical. They are dependent on the wisdom of the people
calling in and ordering the strike. The pilot of an F-16 flying at hundreds of
miles an hour frequently lacks adequate ability to judge whether his targets,
particularly infrastructure targets, are legitimate. He relies, as a robotic
aircraft would, on the wisdom of those who collected the intelligence on
his targets and on those with eyes on them from the ground who, if necessary,
can correct how he deploys his munitions.
There is nothing
inherently indiscriminate about an autonomous weapon, even if we assume it will
remain permanently unable to assess every criterion of discriminate force as
well as a human infantryman can. An autonomous weapon using conventional
munitions ought to be assessed contextually. A weapon or munition
that is discriminate for destroying a tank battalion in the open is probably
not discriminate for clearing snipers out of a populated urban center.
Some weapons are so
indiscriminate in a range of normal military contexts, and so lacking in
redeeming strategic efficacy that might justify them as proportional
instruments, that banning them is relatively effective and prudent. There is
very little discrimination possible with a chemical weapon, whose physical
nature makes selecting individual targets nigh impossible, or a biological
weapon, which, once deployed, will continue operating fully autonomously with
no possible human input. Not only that, but these weapons were so frequently
operationally or strategically useless - and indeed, so dangerous to one’s
own side - that it was entirely reasonable to put in place an outright
prohibition. Even then, militaries frequently violate chemical weapons
protocols when it seems tactically prudent, employing less-lethal agents such
as C-series gases and white phosphorus that have legitimate uses.
The attempt to
impose a blanket ban on autonomous weapons relies on a blanket presumption that
they will fail to discriminate, one that ignores the way militaries operate. A
commanding officer deploying autonomous weapons should know the limits of his
system. An unmanned aerial system that can evade and engage hostile targets
should not be allowed to select target types such as civilian vehicles or
groups of individuals, nor should an autonomous weapon that cannot distinguish
between civilians and soldiers with high enough reliability be emplaced in a
city, just as we would not permit a jet attack aircraft to select and engage
its own ground targets in a similarly populated area.
Autonomous weapons
receive orders and can be programmed with rules of engagement. That these
safeguards will occasionally fail is not a particularly convincing argument for
a ban. After all, look at the record of U.S. attempts to enforce roadblocks in
Iraq. An infantryman may, seeing a civilian vehicle speeding toward his
checkpoint, kill civilians in error because he is tired or concerned with
protecting his own life and those of his fellow soldiers. Humans disobey orders and
make judgment calls about ROE or commander’s intent all the time, whether they
are in or out of the loop of their CO. Indeed, we could imagine any number of
scenarios in which a robot that has no fear for its own life, and no
programmed ability to refuse or deviate from orders, would be more willing to
enforce a strict ROE to the letter. This is not to impugn human combatants or
to praise robots, but to note that autonomous weapons, like all weapons, will have limits and advantages.
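To make that point concrete, here is a minimal sketch of what “programmed with
rules of engagement” could look like in software. To be clear, everything in
it - the names, the checks, the thresholds - is a hypothetical of mine, not a
description of any fielded system:

```python
# A purely illustrative sketch of rules of engagement as a software gate.
# Every name, field, and threshold below is an assumption for the sake of
# argument, not a description of any real or fielded weapons system.
from dataclasses import dataclass

@dataclass
class Target:
    category: str                     # e.g. "armor", "civilian_vehicle"
    id_confidence: float              # 0.0-1.0, from the identification pipeline
    in_authorized_zone: bool          # inside the commander-designated area

@dataclass
class RulesOfEngagement:
    permitted_categories: frozenset   # target types the mission order allows
    min_confidence: float             # reliability threshold set by the commander
    weapons_free: bool                # standing human release authorization

def may_engage(target: Target, roe: RulesOfEngagement,
               human_veto: bool) -> bool:
    """Engage only if every programmed constraint is satisfied."""
    if human_veto:                    # an on-the-loop override always wins
        return False
    if not roe.weapons_free:          # no release authorization, no fire
        return False
    if target.category not in roe.permitted_categories:
        return False                  # e.g. civilian vehicles never engageable
    if target.id_confidence < roe.min_confidence:
        return False                  # identification too unreliable to fire
    return target.in_authorized_zone  # and only inside the designated area
```

The point is not that this particular design is right, but that each constraint
is an explicit, auditable veto that the system cannot be tired, frightened, or
creatively insubordinate about.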
So what if a
commander cannot discipline or punish a robot? He can do things to it that he
cannot do to a human deviating from orders. He can override its actions. Even
if that mechanism fails, he can remotely trigger its self-destruct or destroy
it outright in the midst of its commission of war crimes. A robot
Calley is, in many ways, easier to deal with than a human war criminal. It is
much less ethically difficult to deactivate or destroy a malfunctioning robot
than to kill one of your men or women. Not only that, but the information
collected and stored by a robotic weapon would prove much more useful in the
prosecution of a war crime committed by a robot-operating unit than the
testimony of soldiers who must grapple with the limits of human sense, psyche,
and loyalty to each other.
HRW’s argument,
then, seems so overbroad that it is likely to be utterly ineffectual. Much as
when Britain tried, during the Washington Naval Treaty negotiations, to ban
submarines as inherently indiscriminate and criminal because of the specific
role they played in World War I, no power with the capability to take advantage
of the huge military benefits of these weapons is likely to forgo them for a
blanket treaty, if it even buys into such a treaty at all.
One might justify
HRW’s piece as starting a conversation - and I join that conversation by saying
the specifics of their proposal and the view they adopt of autonomous
technology are utterly ill-considered. There can and should be limitations on
the way weapons can be used. But for the great majority of weapons in the human
arsenal, these need to be thought of contextually rather than rigidly. Like it
or not, autonomous weapons are already present, and states are going to use
weapons that offer considerable military advantages. But measuring the legality
of autonomous weapons against highly specific scenarios, by standards that very
often seem to ignore how militaries and the human beings in them behave on the
battlefield, is the wrong way to start this debate, and certainly not a sound
foundation for credible regulation of their use. There are many reasons to
start a debate. What HRW appears to aim to do is strangle it in the crib on the
basis of hyperbolic supposition. I strain to think of an arms control effort
that began from such a premise and had a lasting or beneficial effect.