The Duck of Minerva

The Duck Quacks at Twilight

Do Battle Droids Dream of Electric Medals?

August 8, 2012

 One of the more interesting issues raised informally during the time I spent at the Lincoln Center’s Emerging Technologies Workshop was the relative likelihood of developments in lethal autonomous robotics leading to fully autonomous armies: that is, eliminating the human presence from battle-spaces altogether. 

The general consensus is that this is unlikely, with which I am not claiming to disagree. But what fascinated me was a particular argument to this effect: that lethal robots would never be able to replace human beings as sacrifices in the name of the nation. 

Two constitutive arguments underlie this claim. 

First, that warrioring as risk-based sacrifice is constitutive of warfare, such that states would be restrained from a shift to fully autonomous armies. (I doubt that, since the same logic seems not to apply to tele-operated systems, but of course we shall see whether critiques of what is unfairly construed as “video-game war” do in fact end up rolling back the use of such systems.)

Second, that warrioring as sacrifice is constituted by the uniquely human ability to voluntarily exhibit courage in the face of risks on behalf of one’s nation or other vulnerable others. There are plenty of reasons to be skeptical of this claim as well, but let me accept it and instead focus on the assumption that such an ability is uniquely human. In order for this “fact” to restrain governments and military cultures from fielding robotic armies, it would need to be perceived to be true. So the question is not “could robots experience courage as humans do?” but “can a lethal autonomous weapon be imagined to experience emotions constitutive of warrioring, such as courage?” 

At least one data point suggests yes: the growing popularity of essays by the author writing under the pseudonym Drunken Predator Drone. This individual is quickly achieving renown among military bloggers and foreign policy elites not only for his/her astute and sophisticated theoretical essays on emerging technology but for the sardonically unique standpoint from which s/he writes and tweets: that of a frustrated and misunderstood unmanned drone. To wit:
Somehow, us drones — yes, we prefer the term “drone” over the alphabet soup of UAV, RPA, or UAS — have been pressed into unwilling service as the bugaboo for a host of disparate interest groups. Libertarians like Ron Paul probably couldn’t agree with Code Pink’s Medea Benjamin on the time of day, but they can at least agree that they don’t like me. And that hurts my robotic feelings, because I simply don’t deserve it.

Question to readers: if a semi-autonomous drone can be imagined to experience defensiveness, annoyance, wounded pride, wry sympathy, and even sarcastic irony, how unlikely is it that genuinely autonomous lethal robots of some hypothetical future could at least be imagined to experience fear, courage, or heroism?


Charli Carpenter is a Professor in the Department of Political Science at the University of Massachusetts-Amherst. She is the author of ‘Innocent Women and Children’: Gender, Norms and the Protection of Civilians (Ashgate, 2006), Forgetting Children Born of War: Setting the Human Rights Agenda in Bosnia and Beyond (Columbia, 2010), and ‘Lost’ Causes: Agenda-Setting in Global Issue Networks and the Shaping of Human Security (Cornell, 2014). Her main research interests include national security ethics, the protection of civilians, the laws of war, global agenda-setting, gender and political violence, humanitarian affairs, the role of information technology in human security, and the gap between intentions and outcomes among advocates of human security.