Parsing the DOD’s Directive on Autonomous Weapons

27 November 2012, 1701 EST

Yes. Only two days after Human Rights Watch launched its “preemptive call” to ban the development and deployment of fully autonomous weapons, the US Defense Department doubled down with a document (shorter version here) that claims:

“Autonomous and semi-autonomous weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

Comforting? On the surface, this sounds like a nod to the idea that human judgment is crucial in lethal targeting decisions in war and cannot be outsourced to machines. Spencer Ackerman seems to think so:

The Pentagon wants to make perfectly clear that every time one of its flying robots releases its lethal payload, it’s the result of a decision made by an accountable human being in a lawful chain of command. Human rights groups and nervous citizens fear that technological advances in autonomy will slowly lead to the day when robots make that critical decision for themselves. But according to a new policy directive issued by a top Pentagon official, there shall be no SkyNet, thank you very much.

But I’m not sure the document is so clear-cut. The term “appropriate levels of human judgment” leaves a lot open to interpretation, especially since a debate continues to rage about whether humans are in fact inferior judges of when to pull the trigger.

Consider this: by the directive’s own definition, such weapons will include not only “human-supervised autonomous weapons” that could be shut down by a human in the event of a weapon system failure, but also unsupervised “autonomous weapons” that would implicitly lack such safeguards. In other words, the DOD typology imagines a level of autonomy beyond the “human out of the loop” weapons problematized in Human Rights Watch’s new report. The difference between the two typologies is that HRW bases its degrees of autonomy on how much capacity humans have to select targets, with “human out of the loop” marking the point at which machines can select targets without human input. In the DOD document, by contrast, degrees of autonomy appear to be based on how much power humans have to intervene: the ability to intervene at all is reserved for “human-supervised” systems, with no requirement that all autonomous systems include such a fail-safe. (Which makes this DefenseNews press release badly titled, to say the least.)

On the other hand, according to the directive, such weapons presumably could not be used for lethal force; and even human-supervised autonomous systems – the ones HRW considers “fully autonomous” – can only be turned on other weapons systems, not used to target humans. Mark Gubrud of the International Committee on Robot Arms Control, an association of scientists and philosophers with varying views on how to achieve regulation of autonomous weaponry, writes at the ICRAC blog:

This policy is basically, “Let the machines target other machines; Let men target men.” Since the most compelling arms-race pressures will arise from machine-vs.-machine confrontation, this solution is a thin blanket, but it suggests some level of sensitivity to the issue of robots targeting humans without being able to exercise “human judgment” — a phrase that appears repeatedly in the DoD Directive. This approach seems calculated to preempt the main thrust of HRW’s report, that robots cannot satisfy the principles of distinction and proportionality as required by international humanitarian law, therefore AWS should never be allowed.

But it seems to me that this document aims to provide more ambiguity and flexibility than structure. Proposed limitations and distinctions can be overridden by the proper authorities, namely several under-secretaries of defense, prior to the development stage. And while in theory the policy appears to state that machines will target only other machines, in reality it’s not that simple if the goal is to get around the discrimination/proportionality debate: target a generator and you may kill dialysis patients; target any particular military objective and you may hit civilians in the area. Since this will be obvious to humanitarian law types, if this document was indeed created to defuse emerging global concern that such systems by their nature may not meet war-law standards, I suspect it is unlikely to succeed. Indeed, I’m not so sure that is the purpose of the document, since probably the strongest and least ambiguous policy statement is the injunction on p. 11 that:

“Legal reviews of autonomous and semi-autonomous weapon systems [must be] conducted… [to] ensure consistency with all applicable domestic and international law and, in particular, the law of war.”

The law of war, of course, specifies in Article 36 of Additional Protocol I to the Geneva Conventions, entitled “New Weapons,” that:

“In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.”

and, in Article 51(4)(c), that indiscriminate attacks are prohibited and include:

“those which employ a method or means of combat the effects of which cannot be limited…”

Which means that the question of such weapons’ ability to be used discriminately turns on whether their effects can be limited, and that this must be addressed in such reviews if the US is to be in compliance with “the law of war.” I think a strong case could be made that weapons meeting the DOD’s definition of “autonomous weapons,” as stated this week, would fall outside those bounds; quite possibly the authors of the document understand this.

Now there’s a catch: the US is not a party to Additional Protocol I. So a very interesting conversation is now going to begin. At first, it will be about whether the US is nonetheless bound by that rule under customary law, or whether it binds only governments that have signed it. If the US rests its case on its non-party status, that will strengthen legal arguments that such a prohibition is implicit for states parties, strengthening the push for a non-use norm (historically the US has tended to follow such norms even when not legally bound). If the US does not so rest its case, then a second interesting conversation will begin, whether the US is obligated to engage in it or not: whether any of the weapons systems described as “autonomous” in the directive could by their nature be limited in the way described once released from human control with no ability for humans to override; and whether in fact human-supervised autonomous systems would meet this criterion, given that they would undoubtedly cause at least incidental human casualties.

In other words, if the DoD takes its own directive seriously, it will have no choice but to address the arguments being put forth by human rights groups. And given the new certainty that these systems are going to be science fact, and are envisioned to be highly autonomous, the US has probably strengthened the “killer robot” ban movement’s hand. On the other hand, by suggesting a level of autonomy far beyond what was already feared, it is also shifting the goal posts as the conversation is getting started.

Stay tuned…