One of the more interesting issues raised informally during the time I spent at the Lincoln Center’s Emerging Technologies Workshop was the relative likelihood of developments in lethal autonomous robotics leading to fully autonomous armies: that is, eliminating the human presence from battle-spaces altogether.
The general consensus is that this is unlikely, and I am not claiming otherwise. But what fascinated me was a particular argument to this effect: that lethal robots could never replace human beings as sacrifices in the name of the nation.
Two constitutive arguments underlie this claim.
First, that warrioring as risk-based sacrifice is constitutive of warfare, such that states would be restrained from a shift to fully autonomous armies. (I doubt that, since the same logic seems not to apply to tele-operated systems; but of course we shall see whether critiques of what is unfairly construed as “video-game war” do in fact end up rolling back the use of such systems.)
Second, that warrioring as sacrifice is constituted by the uniquely human ability to voluntarily exhibit courage, in the face of risk, on behalf of one’s nation or of vulnerable others. There are plenty of reasons to be skeptical of this claim as well, but let me accept it and instead focus on the assumption that such an ability is uniquely human. For this “fact” to restrain governments and military cultures from fielding robotic armies, it would need to be perceived as true. So the question is not “could robots experience courage as humans do?” but “can a lethal autonomous weapon be imagined to experience the emotions constitutive of warrioring, such as courage?”
Question to readers: if a semi-autonomous drone can be imagined to experience defensiveness, annoyance, wounded pride, wry sympathy, and even sarcastic irony, how unlikely is it that the genuinely autonomous lethal robots of some hypothetical future could at least be imagined to experience fear, courage, or heroism?