Since 2014 the international community has considered the issue of autonomy in weapons systems under the framework of the United Nations (UN) Convention on Certain Conventional Weapons (CCW). Despite hopes that 2022 would bring some kind of breakthrough, the eleven guiding principles adopted in 2019 remain the only international agreement regarding lethal autonomous weapons systems (LAWS or AWS). Many believe that action is long overdue.
The Campaign to Stop Killer Robots argues that:
With ongoing uncertainties around technological change and instabilities in international security, a principled international legal framework would provide the necessary durability and certainty around use of autonomy in weapons systems to overcome the risk of their widespread proliferation and use around the world.
Many governments agree. According to the Austrian delegation, “it is time to turn up the speed” as the world is in “a race against undesirable and unpredictable consequences” of technological progress.
Unlike previous humanitarian disarmament campaigns (with the exception of the ban on blinding laser weapons), there is no statistical evidence of widespread harm, photographs of victims, or testimonies of survivors to document the problem. On the contrary, both the precise nature and degree of threat posed by LAWS are highly uncertain.
How is it possible to build robust norms under such conditions? Rather than an inherent property of things and contexts, (un)certainty is a social construct that needs to be continuously stabilized or enacted through discursive and material practices. Uncertainty is simultaneously a limit to and an object of governance. It can therefore either drive or undermine the creation of anticipatory norms.
Broadly speaking, uncertainty concerns the limits of knowledge and understanding. In the context of future problems, uncertainty takes at least two forms. Ontological uncertainty denotes the degree of empirical knowledge that we have or lack about a problem. Is there such a thing as AWS? And how will these systems affect shared expectations of appropriateness in the conduct of war? Epistemic uncertainty, in turn, concerns the process of knowledge production, i.e. the limits of what is methodologically knowable. Here the key question is: how can we know about a weapon that is not yet operational on a larger scale?
How can we know about AWS?
Autonomous weapons systems seem to elude conventional ways of knowing, but campaigners have found a number of ways to address epistemic uncertainty. One of them is through the use of analogies that relate AWS to existing technologies or weapons. For example, observers sometimes compare AWS to weapons systems that already exhibit increasing degrees of autonomy or automation, as these already highlight some of the problems that emerge from human-machine interaction.
Simulations provide another way to translate an abstract and unclear future into something concrete and tangible. The United Nations Institute for Disarmament Research (UNIDIR) has run a number of tabletop exercises with government representatives to consider which scenarios of AWS use are problematic and which are not.
uncertainty… has provided a pretext for inaction
Simulations and modeling will also play a vital role in the testing and evaluation (T&E) of AWS, particularly those equipped with Artificial Intelligence (AI) components. The main purpose of T&E is to troubleshoot problems and quantify risks associated with the use of a future weapon. U.S. officials have repeatedly claimed that T&E can help ensure that AWS will operate as intended and in compliance with existing international humanitarian law. In other words, T&E will make it possible to reduce, if not eliminate, uncertainty about the unintended consequences of AWS.
While T&E attempts to reduce uncertainty by making the potential risks of AWS calculable, activists have also harnessed the power of imaginative ways of knowing. Consider the very detailed depictions of the catastrophic consequences of ‘not stopping killer robots’ in fictional films such as Slaughterbots or The Threat of Fully Autonomous Weapons. The ease with which such harmful consequences can be brought to mind can increase the perceived likelihood of a problem and thereby heighten its urgency.
Do AWS exist?
Whether AWS are conceived of as a distant development or a very near-term problem affects efforts to build norms about their design and use. Are they an urgent threat, one which states must address preemptively? Are they still a remote possibility, one that does not require immediate attention, especially given many other pressing threats?
To date, uncertainty about the timeline for the development of AWS has provided a pretext for inaction. Similarly, the insistence by some states that negotiations cannot proceed until all parties agree on a definition of AWS looks like an attempt to inject uncertainty in order to delay any agreement. Indeed, some governments have set the bar for what constitutes autonomy in weapons systems so high that they are effectively declaring AWS non-existent, not only now but for the foreseeable future.
campaigners have… shifted… to a[n]… argument about the very nature of AWS.
It is therefore not surprising that discussions in the CCW and the media have disproportionately focused on the far end of the autonomy spectrum: Terminator-like scenarios. The heavy influence of science fiction tropes has in no small measure been nourished by the anti-AWS movement itself, which took the position that AWS generally do not currently exist. This has proved counterproductive in the long run. While certainly helpful in drawing attention to the issue, the “killer robots” meme reinforces the impression that AWS belong to a distant future, playing into the hands of those who dismiss efforts to regulate AWS as premature.
To offset this futuristic vibe, pro-ban advocates have made some efforts to “de-science fictionalize” the issue. They point out that autonomy in weapons systems is a process that is well underway, with major states already developing weapons with significant autonomy in the critical functions of selecting and attacking targets, including loitering weapons, certain air defense systems, and robotic sentry guns.
Recent events, activists contend, indeed move us closer to the age of killer robots. As first described in a UN report, the world may have witnessed the first use of a fully autonomous weapon, the Turkish-made Kargu-2 drone, in Libya. While, technically speaking, it remains an open question whether the drone was actually used in an autonomous fashion, what matters is that proponents of a ban can leverage the incident as a practical demonstration that the future is already here.
What are the consequences of using AWS?
With emerging technologies like AWS, we also seem to lack knowledge about the undesirable (and unintended) versus the desirable (and intended) consequences of their use. Yet, rather than being intrinsic to AWS, normative uncertainty (or ambiguity) has been strategically leveraged by skeptics of a ban to avert “hasty decisions.” They demand further data and research before action is taken because technological progress could ensure that AWS are used in compliance with international humanitarian law.
What currently hampers progress… is the contested nature of uncertainty
In response, campaigners have gradually shifted away from an argument about the likely consequences of autonomous weapons use to a deontological argument about the very nature of AWS. Even if AWS were capable of distinguishing between civilians and combatants, killing humans based on data collected by sensors and processed by algorithms would still be fundamentally wrong. It would further dehumanize warfare by reducing humans, whether civilians or not, to “stereotypes, labels, objects.”
For those who want AWS to be abolished before it is too late to stop them, uncertainty seems to be a major obstacle: we simply do not know enough about their potentially harmful consequences to warrant preemptive action. And yet, the contingent and socially constructed nature of uncertainty means that anticipatory norm-building is not mission impossible. It is not the intrinsic features of an issue (in terms of future/present or uncertain/certain) that explain why norm emergence succeeds or fails; what matters is presenting an issue as if it were certain so as to elevate its salience in the eyes of decision-makers (or norm addressees, more generally).
What currently hampers progress towards preemptive regulation in the CCW, which has traditionally relied on consensus voting, is the contested nature of uncertainty. While pro-ban advocates portray AWS as unambiguously and intrinsically wrong, opponents of a ban point to the need for more research and data (“we don’t know yet”). The latter prefer continuing talks within the CCW, where they can block substantial outcomes, leaving the door to AWS wide open. The degree of epistemic closure of an issue, and ultimately the success of a norm, thus results from the interaction between norm entrepreneurs and antipreneurs. Both the removal of uncertainty and its active construction are intermediate steps towards normative change (or stalemate).
It’s been clear for decades that with the advance of computers and AI the arms race would move in the dangerous direction of removing humans from real-time accountable control of weapons, because computers can react faster and endure more extreme conditions.
A decade has been lost in the CCW, much of it filled with philosophical noodling about defining autonomy. AWS and AI-driven warfare are now happening in the real world.
Exploring and creating ambiguity is not helpful. We construct arms control by clarifying distinctions, not by relentlessly trying to prove that we’re clever enough to find holes in them.