The Duck of Minerva

The Duck Quacks at Twilight

Drones, Decapitation, ISA and Impossible Strategies

February 20, 2015

Yesterday at ISA, I participated in a panel on technology and international security. One of the topics addressed was the "success" of the Obama administration's decapitation/targeted killing strategy against terrorist leaders through unmanned aerial vehicles, or "drones." The question of success, however, got me thinking. Success was described as the military effectiveness of the strikes, but this seems rather wrongheaded to me. For if something is militarily effective, then it is so in relation to a military objective.

What is a military objective? In short, those objects that "by their nature, location, purpose or use make an effective contribution to the military action and whose partial or total destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage." One may only target legitimate military objectives with permissible means. But even this requires knowing what the military advantage will be, and as such, requires a clear and identifiable strategy.

It should come as no surprise that for something to be successful, one needs to rate "success" in relation to something. The strategy tells you what that "thing" is. It states the desired goal and outlines, in broad brushstrokes, how to get there. Yet what, in relation to targeted killing, is the goal? Is it ending "terrorism"? Is it defeating al-Qaeda and its "affiliates"?

If it is the former, it is unachievable. One can never defeat an ideology or a tactic (however one wants to define terrorism). If it is the latter, we fare no better. Who we target and how the groups are identified is fluid, so this too is an unachievable goal. Why does this matter?

Well, if we want to talk about the permissibility, at least morally speaking, of undertaking a war, even a "war on terror" that seems prima facie legitimate, we must look to the classic just war principles of jus ad bellum. These include just cause, last resort, legitimate authority, proportionality and probability of success. Many would admit that fighting terrorists is a "just cause" because it is defensive – in defense of self and others. Some might claim that the United States is a legitimate authority because it is a sovereign state. One might say, too, that there is no other option, and so last resort is satisfied (though this is highly debatable). Where we get into trouble, however, is on probability of success and proportionality.

For if we have a goal that is completely unachievable, then the likelihood of success is zero. We have no reasonable chance of defending self or others from terrorism because it is impossible to end it. If that is so, then it seems to me that the proportionality calculation can never favor going to war. The harms will outweigh the benefits, and this is purely a fact of the math: in a never-ending war, the harms accrue indefinitely while the benefits decline in tandem.

Indeed, the very means by which we fight such a war, i.e. drones with Hellfire missiles, don't even enter the equation when the targeted killing/decapitation strategy is evaluated in the way I have done here. For the strategy and the goal are necessarily unachievable, and thus violate two of the five ad bellum principles. What is more, this means that such a strategy/war is impermissible, because all the ad bellum principles must stand together: if one fails to satisfy even one precept, the enterprise is morally corrupt. One might think this counterintuitive, as it seems a just thing to wage war against terrorists. But if one thinks of it purely in terms of means, ends, objectives, strategies and success, one sees that the strategy is necessarily unethical because it is necessarily unachievable. Drones are merely window dressing.


Senior Research Analyst at The Johns Hopkins University Applied Physics Laboratory. Fellow at Brookings Institution and Associate Fellow at University of Cambridge Leverhulme Centre for the Future of Intelligence