Now that this academic year’s loose ends are wrapped up, it is time to refocus attention on the important topic of norm development around autonomous weapons. Fortunately for my case study, much is developing on this front, as well as in adjacent but (I argue) distinct issue areas like drones.
Anderson and Waxman’s discussion of distinction focuses only on whether such weapons could someday distinguish between civilians and combatants, not on the other element of the law on “indiscriminate attacks”: whether a given weapon is “controllable.” While codified law uses the language of “limiting” a weapon’s effects, this provision has been interpreted in customary terms as relating to the broader concept of “control,” suggesting a normative aversion to “uncontrollable” weapons.
The second quibble is that Anderson and Waxman seriously downplay the potential for a treaty-making process on this issue:
Limitations on autonomous military technologies, although quite likely to find wide superficial acceptance among non-fighting states and some nongovernmental groups and actors, will have little traction. Even states and groups inclined to support treaty prohibitions or limitations will find it difficult to reach agreement on scope or definitions because lethal autonomy will be introduced incrementally – as battlefield machines become smarter and faster, and the real-time human role in controlling them gradually recedes, agreeing on what constitutes a prohibited autonomous weapon will be unattainable. Moreover, there are serious humanitarian risks to prohibition, given the possibility that autonomous weapons systems could in the long run be more discriminating and ethically preferable to alternatives; blanket prohibition precludes the possibility of such benefits. And, of course, there are the general challenges of compliance – the collective action problems of failure and defection that afflict all such treaty regimes.
I find this argument by assertion unconvincing. First, the likelihood that a ban on AWS will gain traction with states seems no weaker, logically, than that of the bans on blinding lasers or landmines – both weapons with military utility around which states were nonetheless ultimately convinced by NGOs to establish treaty rules. The point about definitional ambiguities is well taken, but this kind of problem afflicts all treaty-making processes and does not necessarily doom them. In fact, anti-AWS advocates are already grappling with precisely where and how they would draw that red line; we can expect norm development in this area to occur as they work out the messaging. The third point, about the humanitarian costs of norm development, is a normative statement, not a prediction about the likelihood of norm development. Indeed, as Dick Price pointed out long ago, the history of weapons bans does not in all cases correspond to objective humanitarian reasoning. As for the challenges of compliance, these cannot be invoked to predict that no treaty will emerge: all treaties suffer from this problem, yet some get negotiated anyway.
As for such constraints being “likely” to find support among NGOs: it is not merely likely, it is already happening. More on that to follow.