
Kill Webs: The Wicked Problem of Future Warfighting

The common understanding in military circles is that the more data one has, the more information one possesses. More information leads to better intelligence, and better intelligence produces greater situational awareness. Sun Tzu rightly understood this cycle two millennia ago: “Intelligence is the essence in warfare—it is what the armies depend upon in their every move.” Of course, for him, intelligence could only come from people, not from various types of sensor data, such as radar signatures or a ship’s pings.

Pursuing the data-information-intelligence chain is the intuition behind the newly espoused “Kill Web” concept.  Unfortunately, however, there is scant discussion about what the Kill Web actually is or entails.  We have glimpses of the technologies that will comprise it, such as integrating sensors and weapons systems, but we do not know how it will function or the scope of its vulnerabilities.



Drones, targeted killings and the limitations of international law

Last month’s announcement that a Royal Air Force drone was used to kill two British citizens in Syria has reignited debates about the legality of targeted killings, but there is always a danger that something gets lost within this legal frame. Questions about the geographical boundaries of contemporary conflict and the legal status of those being targeted are clearly important and should not be ignored but we should also be aware that other equally important issues are being pushed to the margins of debate. As I argue in my recent article for International Political Sociology, the rather dry, disembodied and technical language of international law tends to ignore the pain and suffering experienced by those targeted and the detrimental effects drone operations are having on the communities living below. As such, these legal debates have failed to contest the notion that this technology provides a more efficient, more effective and more humane way of waging war.

One of the reasons that this incident has caused such a stir is that it is the first time that the British have used a drone to carry out an extra-judicial killing. In a statement to the House of Commons last month, David Cameron confirmed that a British drone had been used to carry out a deadly attack in Syria despite the fact that MPs had previously voted against military operations against Bashar al-Assad. The victims – Reyaad Khan, Ruhul Amin and a third unidentified man – were killed when their car was hit as it travelled through the northern city of Raqqa. They were targeted, Cameron argued, because Khan was plotting a series of ‘barbaric attacks against the West’ and ‘actively recruiting ISIL sympathisers’ to carry them out. The Secretary of State for Defence, Michael Fallon, provided some additional details the following day, telling the BBC that months of ‘meticulous planning [and] careful surveillance’ had gone into this attack and that the government ‘wouldn’t hesitate to do it again’. Indeed, he went on to suggest that the British might adopt a US-style hit list, prompting a fierce rebuke from human rights groups.



Drones, Decapitation, ISA and Impossible Strategies

Yesterday at ISA, I participated on a panel on technology and international security. One of the topics addressed was the “successfulness” of the Obama administration’s decapitation/targeted killing strategy of terrorist leaders through unmanned aerial vehicles or “drones.” The question of success, however, got me thinking. Success was described as the military effectiveness of the strikes, but this to me seems rather wrongheaded. For if something is militarily effective, then it is so in relation to a military objective.

What is a military objective? In short, those objects that “by their nature, location, purpose or use make an effective contribution to the military action and whose partial or total destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage.” One may only target legitimate military objectives with permissible means. But even this requires knowing what the military advantage will be and, as such, requires a clear and identifiable strategy.



U.S. Options Limited Due to Will and Not Lack of Drones

Today, Kate Brannen’s piece in Foreign Policy sent mixed messages with regard to the US-led coalition fighting the Islamic State (IS). She reports that the US is balancing demands “for intelligence, surveillance, and reconnaissance (ISR) assets across Iraq and Syria with keeping an eye on Afghanistan”. The implication, reinforced by the title of her piece, is that if the US just had more “drones” over Syria, it would be able to fight IS more adeptly. The problem, however, is that her argument is not only misleading, it is also dismissive of the Arab allies’ human intelligence contributions.

While Brannen is right to note that the US has many of its unmanned assets in Afghanistan and that this will certainly change with the upcoming troop drawdown there, it is not at all clear why moving those assets to Syria will yield any better advantage against IS. Remotely piloted aircraft (RPA) are only useful in permissive air environments, that is, environments where one’s air assets will not face any obstructions or attacks. The US’s recent experience with drone operations abroad has been mostly in permissive environments, and as such, it has been able to fly ISR missions – and combat ones as well – without interference from an adversary. The fight against IS, however, is not a permissive environment. It may range from non-permissive to hostile, depending upon the area and the capabilities of IS at the time. We know that IS has air defense capabilities, and so these may interfere with operations. What is more, we also know that RPAs are highly vulnerable to air defense systems and are inappropriate for hostile and contested air spaces. NATO recently published a report outlining the details of this fact. Thus before we claim that more “drones” will help the fight against IS, we ought to look very carefully at their operational appropriateness.

A secondary, but equally important, point in Brannen’s argument concerns the exportation of unmanned technology. She writes,

“According to the senior Defense Department official, members of the coalition against the Islamic State are making small contributions in terms of ISR capabilities, but it’s going to take time to get them more fully integrated. U.S. export policy is partly to blame for the limits on coalition members when it comes to airborne surveillance, Scharre said. ‘The U.S. has been very reluctant to export its unmanned aircraft, even with close allies.’ ‘There are countries we will export the Joint Strike Fighter to, but that we will not sell an armed Reaper to,’ [Scharre] said.”

The shift from discussing ISR capabilities to the exportation of armed unmanned systems may go unnoticed by many, but it is a very important point. We might bemoan the fact that the US’s Arab partners are making “small [ISR] contributions” to the fight against IS, but providing them with unarmed, let alone armed, unmanned platforms may not fix the situation. As I noted above, those platforms may be shot down if flown in inappropriate circumstances. Moreover, if the US wants to remain dominant in the unmanned systems arena, then it will want to be very selective about exporting that technology. Drone proliferation is already occurring, with the majority of the world’s countries in possession of some type of unmanned system. While those states may not possess medium- or high-altitude armed systems, there is worry that it is only a matter of time until they do. Arming the Kurds with Global Hawks or Reapers, for example, will not fix this situation, and may only upset an already delicate balance between the allies.

Proliferation and technological superiority remain constant concerns for the US, which is why, taken in conjunction with the known limitations of existing unmanned platforms, there has not been a rush either to export drones or to move the remaining fleet in Afghanistan to Syria and Iraq. IS is a different enemy than the Taliban in Afghanistan or the “terrorists” in Yemen, Pakistan or Somalia. IS possesses US military hardware, is battle hardened, has a will to fight and die, and is capable of tactical and operational strategizing. Engaging it will require forces up close and on the ground, and supporting that kind of fighting from the air is better done with close air support. Thus it is telling that the US is sending in Apache helicopters to aid the fight but not moving more drones.

ISR is of course a necessity. No one denies this. However, to claim that this can only be achieved from 60,000 feet is misleading. ISR comes from a range of sources, from human agents to satellite imagery. Implying that our Arab allies are merely contributing a “small amount” to ISR dismisses their well-placed intelligence capabilities. Jordan, for example, can provide better on-the-ground assessment than the US can, as the US lacks the will to put “boots on the ground” to gather those sources. Such claims also send a message to these states that their efforts and lives are not enough, when in fact the US is relying just as heavily on their boots as they are relying on its ISR.



Tobias Gibson Reviews The Thistle and the Drone

Editor’s Note: This is a guest post by Tobias Gibson of Westminster College.

In recent days, there have been reports of U.S. drone strikes in North Waziristan, Pakistan. According to a New York Times article, these strikes killed at least two people. This remote area of Pakistan has long been subject to U.S. drone strikes.

The Times also reports that U.S. anti-terrorism efforts are shifting theaters from Afghanistan and Pakistan to Africa. This shift includes the expansion of the use of surveillance drones in Mali, flown from a new drone base in Niger. According to the story, the U.S. is partnering with France “to track fighters affiliated with Al Qaeda and other militants” (my emphasis). One of the points of the article is that the U.S. needs to acquire knowledge about local conditions. According to Michael R. Shurkin, a former CIA analyst who is now at RAND, “Effective responses… require excellent knowledge about local populations and their politics, the sort of understanding that too often eludes the U.S. government and military.” Without understanding local conditions, the author contends, the introduction of drones “runs the risk of creating the type of backlash that has undermined American efforts in Pakistan.”

In a post this week, Charli Carpenter discusses evidence that the civilian death count from drones has been drastically underestimated. She argues that if the death counts are higher than publicly estimated, any humanitarian argument about the use of drones as “precision” weapons “goes out the window.” (Side note: those interested in drones and the continued mechanization of war and security should read her (gated) article “Beware the Killer Robots.”)

All of these recent stories should lead to a more profound appreciation of Akbar Ahmed’s recent book The Thistle and the Drone. Ahmed has a simple, yet profound thesis: “it is the conflict between the center and the periphery and the involvement of the United States that has fueled the war on terror.” According to Ahmed, this conflict has played itself out for centuries, as evidenced by European efforts to “civilize” tribes throughout their colonies, U.S. efforts in the American West to pacify and relocate indigenous tribes, and current efforts by Russia to end separatist violence in Chechnya… and, Ahmed would argue, those discussed above in Pakistan and Mali. The drone is merely the newest weapon in the center’s arsenal.



What We Know, Don’t Know, Can’t Know and Need to Know About the DOD’s Classified Study on Drone Deaths

Just before Independence Day, an analyst for a defense research agency stated in a media interview that a classified DoD study shows that drones are likelier to cause civilian harm than attacks from manned fighters. Lawrence Lewis, a researcher for the Center for Naval Analyses, says these findings resulted from a statistical analysis he conducted using classified data from Afghanistan (mid-2010 to mid-2011) as part of a project funded by DoD’s Joint Center for Operational Analysis.

If true, this would dramatically shift the discussion about the humanitarian impact and value of armed drones. There are all kinds of human security arguments against drones – they make war likelier, they kill too many civilians, they weaken the rule of law; and all kinds of national security arguments in favor of them – they decapitate terror networks, they prevent attacks, they keep troops out of harm’s way. But there is also a human security argument in favor of drones: that when used lawfully they save foreign civilian lives relative to other kinds of strikes because they are precision weapons.

But if drones are likelier to harm civilians than manned attacks (the explanation is that drone pilots lack the training in humanitarian law and civilian protection that manned pilots have) then that goes out the window.

So, is the finding valid?


The “Fear” Factor in “Killer Robot” Campaigning

One of the more specious criticisms of the “stopkillerrobots” campaign is that it is using sensationalist language and imagery to whip up a climate of fear around autonomous weapons. So the argument goes, by referring to autonomous weapons as “killer robots” and treating them as a threat to “human” security, campaigners manipulate an unwitting public with robo-apocalyptic metaphors ill-suited to a rational debate about the pace and ethical limitations of emerging technologies.

For example, in the run-up to the campaign launch last spring, Gregory McNeal at Forbes opined:

HRW’s approach to this issue is premised on using scare tactics to simplify and amplify messages when the “legal, moral, and technological issues at stake are highly complex.” The killer robots meme is central to their campaign and their expert analysis.

McNeal is right that the issues are complex, and of course it’s true that in press releases and sound bites campaigners articulate this complexity in ways designed to resonate outside of legal and military circles (as all good campaigns do), saving more detailed and nuanced arguments for in-depth reporting. But McNeal’s argument about this being a “scare tactic” only makes sense if people are likelier to feel afraid of autonomous weapons when they are referred to as “killer robots.”

Is that true?


Obama, Drones, and the Matter of Definitions

Editor’s Note: This is a guest post by Tobias T. Gibson, an associate professor of political science and security studies at Westminster College in Fulton, Mo. 

In the buildup to President Obama’s speech at National Defense University on May 23, the administration suggested that the speech would clarify US policy on the use of drones in targeted killing. Although the president took pains to describe the limitations set forth by his administration, the speech provided little genuine clarity.

The working definitions of three very important words play a key role in undermining the putative “transparency” provided by the speech.  In a key passage, the President states that

Beyond the Afghan theater, we only target al Qaeda and its associated forces. Even then, the use of drones is heavily constrained. America does not take strikes when we have the ability to capture individual terrorists – our preference is always to detain, interrogate, and prosecute them. America cannot take strikes wherever we choose – our actions are bound by consultations with partners, and respect for state sovereignty. America does not take strikes to punish individuals – we act against terrorists who pose a continuing and imminent threat to the American people, and when there are no other governments capable of effectively addressing the threat. And before any strike is taken, there must be near-certainty that no civilians will be killed or injured – the highest standard we can set. [emphasis mine]

These three key constraints on the administration may amount to very little in the way of genuine barriers to the use of drone strikes.



What’s Wrong With This Picture?


This graph comes from an article on the politics of the drone campaign published this week in International Studies Perspectives. I haven’t yet read the full piece so cannot comment on it substantively or theoretically. Nor have I looked closely at the authors’ code-book. However, based on the abstract, the analysis appears to rest on the empirical evidence of a newly coded dataset (the latest of many out there presuming to calculate the percentage of civilians – v. non-civilians – killed in drone strikes) to make claims about the justifiability of such attacks – presumably by weighing civilian harms against military effectiveness. My reaction here pertains solely to this graph, and what strikes me is the disjuncture between the authors’ coding of “civilians” and the actual definition of civilians in the 1977 First Additional Protocol to the Geneva Conventions.



Field Reports

I spent last week doing “field research” – that is, participant-observation in one of the several communities of practice whose work I’m following as part of my new book project on global norm development. In this case, the norm in question is governance over developments in lethal autonomous robotics, and the community of practice is individuals loosely associated with the Consortium on Emerging Technologies, Military Operations and National Security. CETMONS is an epistemic network comprised of six ethics centers whose Autonomous Weapons Thrust Group collaboratively published an important paper on the subject last year and whose members regularly get together in subsets to debate legal and ethical questions relating to emerging military tech. This particular event was sponsored by the Lincoln Center on Applied Ethics, which heads CETMONS, and was held at the Chautauqua Institution in New York.

There among the sailboats (once a game-changing military technology themselves), smart minds from law, philosophy and engineering debated trends in cyber-warfare, military robotics, non-lethal weaponry and human augmentation. Chatham House rules apply so I can’t and won’t attribute comments to anyone in particular, and my own human subjects procedures prevent me from doing more than reporting in the broadest strokes about the discussions that took place in my research, foreign policy writing or blog posts. Nor does my research methodology allow me to say what I personally think on the specific issue of whether or not autonomous lethal weapons should be banned entirely, which is the position taken by the International Committee on Robot Arms Control and Article 36.org, or simply regulated somehow, which seems to be the open question on the CETMONS-AWTG agenda, or promoted as a form of uber-humanitarian warfare, which is a position put forward by Ronald Arkin.* 

However, Chatham House rules do allow me to speak in generalities about what I took away from the event, and my methodology allows me to ruminate on what I’m learning as I observe new norms percolating in ways that don’t bleed too far into advocacy for one side or the other. I can also dance with some of the policy debates adjacent to the specific norm I’m studying. And I can play with the wider questions regarding law, armed conflict and emerging technologies that arise in contexts like this. 

My posts this week will likely be of one or the other variety. 
*Not, at least, until my case study is completed. For now, regarding that debate itself, I’m “observing” rather than staking out prescriptive positions. My “participation” – in meetings like these or in the blogosphere or anywhere else these issues are discussed – is limited to posing questions, playing devil’s advocate, writing empirically about the nature of ethical argument in this area, exploring empirical arguments underlying ethical claims on both sides of that debate, clarifying the applicable law as a set of social facts, and reporting objectively on various advocacy efforts.

“Invisible” Wars?

World Politics Review has a feature section in this issue on the “invisibility” of contemporary US wars, fought through covert ops, drone strikes and cyber attack rather than on conventional battlespaces. The issue is a thought-provoking read: Thomas Barnett aims a verbal fusillade at Obama’s “one-night-stand” foreign policy; scalding expositions on the illegality and perverse side effects of drone strikes come from Michael Cohen and Micah Zenko, respectively; and Steven Metz confirms the new “invisibility” of US military strategy.

Naturally, my contribution unpacks the whole notion of “invisible” war, putting it into its socio-political context:

Much digital ink has been spilled over how cyber and unmanned technologies are changing the nature of war, allowing it to be fought more secretly, more subversively and with greater discretion. But the single biggest shift in the sociology of war in the past quarter-century has been not in the way it is fought, but in the relationship between its grim realities and the perceptions of those on the home front. Indeed, it is precisely the increasing visibility of ordinary warfare due to communications technology that is driving U.S. efforts to redefine the rules of engagement. And ironically, this is resulting in an unraveling of old normative understandings about how to achieve human security.

Check out the whole set of essays here.


China’s Counter to the Asian ‘Pivot’ (2): ‘Swarms’ in the Pacific

Part one is here, where I noted China’s growing fear of encirclement (I get a lot of Chinese students who talk about this). So, in the role of China, I argued for an Indian charm offensive to prevent encirclement, and suggested how China might buy off Korea from the US camp by abandoning North Korea. Here are some more ‘B-Team’ style ideas for pushing back on US local dominance, including swarming the US navy in the western Pacific with cheap drones and missiles:

3. Build missiles and drones; don’t bother with a navy.

I’m not a big hardware guy, but it should be pretty obvious that trying to ‘out-ship’ the Americans one-to-one in the western Pacific (as the Kaiser tried to do against Britain before WWI) would be a ridiculously costly fool’s errand. Japan’s failed effort to dominate the vast Pacific in the 1940s is a good object lesson in how hard that is and how the Americans will fight tooth-and-nail to prevent it. It makes far more sense to pursue an ‘access-denial’ approach in the medium term, and China, unlike Iran in the Strait of Hormuz, actually has the money and technology to attempt this. China should pursue regional (East Asian) dominance first (as Mearsheimer has argued for a decade), and then tangle with the Americans over the much larger game of the Pacific later.

So access-denial – making it harder and harder for US and allied navies to operate west of Guam (the so-called second island chain strategy) – is a good first step. Throwing swarms of cheap rockets and drones against hugely expensive, slow-moving US carriers is vastly cheaper, fights asymmetrically where the US hegemon is weak, looks less threatening (defensive balancing), and can be marketed as defending Asia against US interventionism. And stick with robots and missiles. They’re getting very cheap and increasingly outclass human platforms. Planes that don’t carry pilots can stay aloft longer and project further, hovering over the battlespace for long hours. I just reviewed an essay for an SSCI journal on whether carriers in the Pacific will be obsolete in two decades (the author’s answer was probably). So let the Americans go on buying fewer and ever more expensive ships and planes costing mountains of money – and then ‘swarm’ them with masses of super-cheap missiles and drones. (On the issue of America’s tendency to buy few and expensive platforms instead of many and cheap, try this.)

4. Buy European debt.

Unless China switches to internal consumption soon, it will continue to rack up currency reserves from OECD states. Buying OECD sovereign debt is a great way to get leverage over those economies. And buying Euros is especially useful.

First, it pressures the US by reminding Americans of China’s leverage over the US budget. It reminds Americans that China can take its money elsewhere, and that a nasty US budget crunch would follow a real rupture with China. Nothing fuels American hysteria so much as the idea that China ‘owns’ the US or something. Buying Euro-debt drives up US interest rates and keeps America fretting that it needs to be nice to its ‘banker’ and all that. Conversely, if China keeps putting its savings into nothing other than US T-bills, expect the US to play tougher.

Second, buying Euro-debt helps keep the Europeans out of any tangles between China and the US camp in Asia. Just as European dependence on Russian oil threatens to neutralize the EU in the long struggle over Eastern Europe’s post-Cold War course, so dependence on Chinese finance is a way to handicap NATO grandstanding about Asia. Besides, what else should China do with the money? Buy even more debt from the US? At some point, getting so invested in US T-bills threatens China, because so much of its wealth is in one place – its possible strategic competitor.

5. Keep propping up troublemakers like Sudan and Zimbabwe.

Nothing distracts American policy-makers like upstart little countries that have the nerve – the nerve! – to stick their finger in the eye of the US. Don’t they recognize American exceptionalism! Witness Iran, Iraq, Venezuela, Cuba… And nothing convinces the US to waste mountains of money on unnecessary defense procurement and pointless conflicts like these guys. So if you’re China, propping up local baddies is a great tool. Yes, it makes you look like you’ll support anyone (which is true, of course, because you’re nasty communist oligarchs after all who couldn’t care less about the Darfuris). But the benefits – wasteful military spending plus American hysteria and imperial overreaction, leading to consequent global unease with American power – more than outweigh the costs. Anything to keep the Americans saying crazy, patently ridiculous stuff like ‘Iran is a mortal threat to the US,’ reckless talk that scares the whole planet and alienates the developing world where 4/5 of the world’s population lives. Encourage the US to dissipate its energies in the periphery while the rest of the world worries that otherwise good ideas like the ‘responsibility to protect,’ e.g., are really neocolonialism, because America just can’t help itself. If America comes off as a revisionist hegemon that can’t help but pursue rogue states, China looks restrained by comparison.

Cross-posted at Asian Security Blog.


Targeting…targeting: What are reasonable expectations?

Blue moon, you targeted me standing alone…

Yesterday Charli wrote a post on whether or not those opposed to the use of drones should use the concept of “atrocity law” instead of “war crimes” or human rights violations.

I wonder if others who generally oppose “targeted killings” think the concept of “atrocity law” might be a more useful way of framing this problem publicly than talking about “war crimes” or “human rights” specifically – concepts that by their nature draw the listener’s attention to a legal regime that only partially bears on the activity in question and invites contrasting legal views drawn from contrasting legal regimes.

Charli asks this question given that:

I think there is significant and mounting evidence of normative opposition to the targeted killings campaign (regardless of arguments some may make about its technical legality under different legal traditions), and according to even the most conservative estimates it meets the other criteria of a significant number of victims and large-scale damage. No one can doubt its highly orchestrated character.

I’m going to go with “no” on these questions. First, unlike Charli, I’m not certain there is “mounting evidence of normative opposition to the targeted killings campaign” in anything other than the protests of a relatively insular group of legal-academics-activists (Phil Alston et al.) who tend to be critical of these kinds of things anyway. In previous posts I have raised doubts about whether or not we can determine if targeted killing is effective, and about how some activities have challenged and changed the legal framework for the War on Terror. However, if anything, I think there is growing consensus within the Obama administration that the program works, that it is effective, and that it is popular.

Additionally, I do not see how invoking the term “atrocity” will get us beyond many of the political problems involved in invoking other terms like “human rights law” or “war crimes”. If anything, “atrocity” seems to be an even less precise, more political term.

However, I think this conversation points to a third, larger issue that Charli is mostly concerned with – civilian death in armed conflict. Or, to put it another way – What expectations may we reasonably seek to place on our states when they carry out military actions? Those who write, research and teach on international law typically anchor their discussions in the legal principles of proportionality, necessity and distinction. However, these are notoriously vague terms. And, as such, when it comes to drones, many argue that these legal principles are being undermined.

In thinking about this question, I’ve been reminded of the recent controversy over the decision of the International Criminal Tribunal for the former Yugoslavia in the Gotovina Case. In it, the Court ruled that a 4% error rate in targeting in a complex military operation was tantamount to a war crime. Four percent.

Was this a reasonable conclusion for the ICTY to make? Are militaries (and the military in question here was not a Western military with high-tech equipment) really expected to do better than a 96% accuracy rate when it comes to targeting? And if so, on what grounds can we (or the Court) say this is the case? And, bringing this back to Charli’s post, would we benefit from thinking about a 4% error rate in terms of “atrocity”?

There are two very good summaries at Lawfare and IntLawGrrls providing more background on the case. Some concerned former military professionals (many of whom are now professors) – admittedly, another insular group of legal-academics-activists of a very different sort – have put together an Amicus Brief for the Gotovina Appeal which is well worth reading.

However, immediate questions of legality aside, I think this raises a larger question as to what we can reasonably expect from military campaigns, especially regarding levels of accuracy. Are all civilian deaths “atrocity”? Historically, the laws of war have said no – proportionality may sometimes render them permissible (if no less regrettable). And I believe that all but the most ardent activists would agree with this historically rooted position. But it is clear that our perceptions of reasonable death rates have changed since the Second World War. So the question is what governs our ideas about proportionality and civilian deaths in an age of instant satellite imagery, night vision and precision-guided weaponry? Unfortunately, I’m not sure the drone debate has given us any useful answers or the basis to produce them.

I appreciate that there are important differences here – the military is, in theory, a hierarchical chain of command that is obliged to follow the laws of war. The CIA (which carries out the drone program) is staffed by civilians who are not bound by these expectations, and their status in law is questionable. But status here is not the issue (at least for this blog post and how it relates to Charli’s concerns). Instead, it is whether and at what point civilian deaths may be considered “atrocity”, on what basis we can and should make that decision and whether that language would make any useful or practical difference.

There is no doubt that the recent move to a “zero-civilian death” standard, or the expectation of very few casualties, has been rapid. Certainly it is at least partly a product of the increased legal activity by governments, IGOs and NGOs in the realms of international law and the laws of war. However, I think it is also the result of a false promise that better technology can allow us to have “clean” wars. It is a promise that is made by governments to their populations, but one that has also clearly influenced activists in terms of their expectations – whether they are set in terms of laws, rights or atrocity.


Blind Men, Elephants and Drones

There are many good reasons to read David Scheffer’s All the Missing Souls: its insider story of the war crimes trials, written by one of their architects, the in-depth case studies on Bosnia, Rwanda and Cambodia, and the impassioned defense of international justice as a moral idea.

But the part I found most useful is in the post-script, where Scheffer re-articulates his theory that we need a new concept to describe the types of crimes dealt with by the emerging law of war crimes tribunals, because these types of “mega-crimes” fall at the intersection of a patchwork of broader legal regimes, leading to conceptual confusion:

“Diplomats, jurists, lawyers and journalists describe the law governing atrocities as international criminal law, international humanitarian law, international human rights law, or the laws and customs of war.

Yet none of these long-standing fields of international law suffice to describe comprehensively the legal framework of the war crimes tribunals. No term describes how the crimes being prosecuted should be described collectively. Where is the term that easily and accurately describes the totality of these crimes so that both dialogue and action can coexist?

Why care? Because clarity and simplicity in law promote public understanding and support for the pursuit of justice… if we can discover such a term, it might help avoid the paralysis that overtakes bureaucracies mired in definition hunting rather than effective responses to evil.” – Scheffer, p. 425

I like this Venn diagram because it’s a useful teaching tool for distinguishing legal traditions that often confuse students. But I’ve also been thinking about whether Scheffer’s insight – meant to help analytically with a set of crimes that are already being prosecuted – can also help us think about public debates about whether certain actions constitute potentially judiciable crimes.

For example, how might this come to bear on the central problem in the targeted killings debate: the fact that no single legal regime holistically covers the kind of international legal problem posed by the US’ assassination campaign abroad? For law of armed conflict types, the question is whether the weapons themselves violate the law (I think not) and/or are being used by the appropriate actors (it depends), consistently with the principles of proportionality (it depends). From a human rights law perspective, these are summary executions plain and simple, but it’s tricky because if war law applies (I think not) then some of these provisions are arguably murkier. In terms of humanitarian law, which also outlaws summary executions of both civilians and noncombatant soldiers even in time of war, a question is whether the targets are non-combatants when targeted (I think generally yes). Since commentators can bicker over which of these regimes applies, the question of international criminal culpability, either of specific pilots or of national leaders, rarely comes up.

How would the concept of “atrocity law” help? Does the US’ “targeted killings” campaign constitute the type of international crime that could be construed as falling unquestionably at this intersection? I don’t know but I’ll punt this to readers: have a look at Scheffer’s definition (pp 429-430):*

“These are high-impact crimes of severe gravity and a highly orchestrated character, that shock the conscience of humankind, result in a significant number of victims or large-scale property damage, and merit an international response to hold at least the top war criminals accountable under the law.”

The definition is vague and subjective of course, but that is part of its utility, at least as a means of popularizing the core of this emerging legal nexus in the media and popular discourse (a nexus that lacks a single codified body of law itself). A piece of that core is the notion (which has precedent in the Martens Clause of the 1899 Hague Convention) that a measure of what makes certain things unlawful when their technical legality is contested is the extent to which they “shock the collective conscience.”

I think there is significant and mounting evidence of normative opposition to the targeted killings campaign (regardless of arguments some may make about its technical legality under different legal traditions), and according to even the most conservative estimates it meets the other criteria of a significant number of victims and large-scale damage. No one can doubt its highly orchestrated character.

I wonder if others who generally oppose “targeted killings”** think the concept of “atrocity law” might be a more useful way of framing this problem publicly than talking about “war crimes” or “human rights” specifically – concepts that by their nature draw the listener’s attention to a legal regime that only partially bears on the activity in question and invite contrasting legal views drawn from contrasting legal regimes. Perhaps we can use Scheffer’s concept to combat the “blind men and the elephant” problem with respect to the drone debate. Thoughts?

*As a disclaimer I should say I am less enthused about Scheffer’s more specific criteria for this concept, which in a number of ways don’t mesh well with this general definition and defeat his goal of simplifying by introducing unhelpful redundancies.

**I recognize that not all readers of this post will fall into this category.


Robotic Planes: Harbinger of Robotic Weapons?

LA Times‘ latest article on drones raises the spectre of “robot weapons” in relation to the X-47B, Northrop Grumman’s new drone prototype with the ability to fly solo – part of an ongoing force restructuring as the US military cuts back significantly on human personnel.

While one might well ask whether a robotic plane (i.e. one that can fly autonomously) constitutes a robotic weapon if a human is in the loop for any targeting decisions, what’s notable about this narrative in press coverage is that the increasing autonomy of non-lethal systems is certainly being constructed as a harbinger of a slippery slope to a world of fully autonomous weapons systems (AWS). Anti-AWS campaigner Noel Sharkey is quoted in the article:

“Lethal actions should have a clear chain of accountability,” said Noel Sharkey, a computer scientist and robotics expert. “This is difficult with a robot weapon. The robot cannot be held accountable. So is it the commander who used it? The politician who authorized it? The military’s acquisition process? The manufacturer, for faulty equipment?”

And this is the first press coverage I’ve seen that invokes the evolving position of the ICRC on the topic:

“The deployment of such systems would reflect … a major qualitative change in the conduct of hostilities,” committee President Jakob Kellenberger said at a recent conference. “The capacity to discriminate, as required by [international humanitarian law], will depend entirely on the quality and variety of sensors and programming employed within the system.”

Indeed, ICRC President Jakob Kellenberger‘s keynote address during last year’s ICRC meeting on new weapons technologies in San Remo suggests that legal issues pertaining to autonomous weapons are at least percolating on the organization’s internal agenda now, as opposed to previously. Thinking ahead to norm development in this area – the interest of a key player in the arms control regime signals an emerging trend in that direction – it’s worth having a look at the entire relevant text from that speech by Kellenberger:

Automated weapon systems – robots in common parlance – go a step further than remote-controlled systems. They are not remotely controlled but function in a self-contained and independent manner once deployed. Examples of such systems include automated sentry guns, sensor-fused munitions and certain anti-vehicle landmines. Although deployed by humans, such systems will independently verify or detect a particular type of target object and then fire or detonate. An automated sentry gun, for instance, may fire, or not, following voice verification of a potential intruder based on a password.

The central challenge with automated systems is to ensure that they are indeed capable of the level of discrimination required by IHL. The capacity to discriminate, as required by IHL, will depend entirely on the quality and variety of sensors and programming employed within the system. Up to now, it is unclear how such systems would differentiate a civilian from a combatant or a wounded or incapacitated combatant from an able combatant. Also, it is not clear how these weapons could assess the incidental loss of civilian lives, injury to civilians or damage to civilian objects, and comply with the principle of proportionality.

An even further step would consist in the deployment of autonomous weapon systems, that is weapon systems that can learn or adapt their functioning in response to changing circumstances. A truly autonomous system would have artificial intelligence that would have to be capable of implementing IHL. While there is considerable interest and funding for research in this area, such systems have not yet been weaponised. Their development represents a monumental programming challenge that may well prove impossible. The deployment of such systems would reflect a paradigm shift and a major qualitative change in the conduct of hostilities. It would also raise a range of fundamental legal, ethical and societal issues which need to be considered before such systems are developed or deployed. A robot could be programmed to behave more ethically and far more cautiously on the battlefield than a human being. But what if it is technically impossible to reliably program an autonomous weapon system so as to ensure that it functions in accordance with IHL under battlefield conditions?

When we discuss these new technologies, let us also look at their possible advantages in contributing to greater protection. Respect for the principles of distinction and proportionality means that certain precautions in attack, provided for in article 57 of Additional Protocol I, must be taken. This includes the obligation of an attacker to take all feasible precautions in the choice of means and methods of attack with a view to avoiding, and in any event to minimizing, incidental civilian casualties and damages. In certain cases cyber operations or the deployment of remote-controlled weapons or robots might cause fewer incidental civilian casualties and less incidental civilian damage compared to the use of conventional weapons. Greater precautions might also be feasible in practice, simply because these weapons are deployed from a safe distance, often with time to choose one’s target carefully and to choose the moment of attack in order to minimise civilian casualties and damage. It may be argued that in such circumstances this rule would require that a commander consider whether he or she can achieve the same military advantage by using such means and methods of warfare, if practicable.

Three initial reactions, more later as I follow this issue for my book-manuscript-in-progress this Spring:

First, a distinction is being drawn in the legal discourse between “automated” and “autonomous” weapons, suggesting to me that the ICRC sees a soft and hard line here, one that is being obscured in the media and popular discourse. How this will play out in efforts to apply humanitarian law to these new systems will be interesting to see.

Second, Kellenberger acknowledges the counter-claim that autonomous systems might have advantages from a war law perspective (this argument being put forth most famously by Georgia Tech’s Ronald Arkin). This suggests that the ICRC is far from taking a stance on whether or not these weapons should be pre-emptively banned, as some claim, and as blinding lasers were previously. Instead they are still listening and observing. It will be interesting to see how this debate develops among humanitarian law elites.

Third, I’m glad to see Kellenberger focusing on the question of discrimination, but it should be pointed out that the concept of discrimination in IHL is about more than simply whether distinction between civilians and combatants is possible; it also concerns whether a system is controllable by humans once deployed – whether its effects can be limited. Anti-AWS advocates are certainly making the case that they may not be, and existing humanitarian law provides them some legal leverage to develop that argument if they choose – even if it is shown that such weapons are highly discriminate.


Bad Predator?

Peter Singer has an op-ed in the Times which carefully makes the case against drones, putting forth the proposition that their use undermines democracy:

What troubles me, though, is how a new technology is short-circuiting the decision-making process for what used to be the most important choice a democracy could make… We must now accept that technologies that remove humans from the battlefield, from unmanned systems like the Predator to cyberweapons like the Stuxnet computer worm, are becoming the new normal in war. And like it or not, the new standard we’ve established for them is that presidents need to seek approval only for operations that send people into harm’s way — not for those that involve waging war by other means… WITHOUT any actual political debate, we have set an enormous precedent, blurring the civilian and military roles in war and circumventing the Constitution’s mandate for authorizing it.

Well, at least this is a better argument than the other barbs against drones – the ones that focus on the weapons themselves as somehow uniquely offensive in terms of war law. (Last year, Lina Shakhouni and I bombed that set of arguments back to the stone age.)

But Singer zeroes in on a different thread in this debate: that certain weapons are a game-changer not because they are useful, but because of how the conditions under which they are used affect our sense of how war is to be conducted, what it is, and who decides. It’s an interesting set of arguments.

But is it any better in terms of the causal claims on which it rests? Dissenting views are rolling in. The Atlantic’s Joshua Foust writes:

We should be criticizing Congress, not remote-controlled airplanes, for limitless militarism. Congress ceding all authority on lethal operations to the president is indeed a grave threat to democracy, but drones are only one tool the president uses to kill people. The bigger problem is that he was given the authority to do that.

Indeed, at Wings Over Iraq, Starbuck points out that the argument is older than the weapons system – the claim that remote-control weaponry facilitates devil-may-care foreign policy is at least as old as the Tomahawk missile:

Though much ink has been spilled on “drone ethics“, these strikes are little removed from 1990s-era “Tomahawk diplomacy”. Though modern drones can loiter over the battlefield for hours–even days–at a time, and can hit small, mobile targets, they’re just another precision stand-off weapon. P.W. Singer’s op-ed might specifically target drones, he’s making a broader point that standoff weapons–missiles, drones, even computer viruses–might make warfare more common in the 21st Century.

As I’ve already written, I agree with Foust and Burke that “drones” are not problems in themselves but have become a synecdoche for a broader tension between the current security environment and the legal frameworks through which we’re accustomed to thinking about and legitimizing war. And I also agree with Singer that that tension is genuine and needs to be addressed (for example, by updating the War Powers Act – something within Congressional control).

But is this mismatch between norms and policy bringing about the specific political outcomes he (and others) claim – especially the idea that drones cause a democratic deficit? As a social scientist I remain unconvinced, and want to see more than rhetorical arguments. In fact besides the claim Burke identifies above, I think Singer posits a number of additional causal claims about the political impact of stand-off weapons in his piece, all plausible but insufficiently backed up. He also posits some perhaps unsustainable claims about the relationship of democracy to war (though one might hope that democracy might stand on its own as a value to be preserved)… and especially, I think it’s a little fuzzy in his argument what aspect of “democracy” is really most at stake here and why.

Let’s think through the claims, and I’ll return in future posts to assessing them:

1) Proposition #1: Stand-off weapons make armed conflict easier and therefore likelier. For one thing this is a different dependent variable – war may be a public bad, but more war doesn’t by itself undermine democracy. Also, I want someone to show me that on balance the number of militarized interstate disputes is increasing as a result of stand-off technologies, or that countries with access to these technologies are likelier to be involved in MIDs, controlling for other factors.

2) Proposition #2: Stand-off weapons are likelier to be used in ways that lead to a blurring of civilian/military roles. Civilian supremacy is a cornerstone of democracy as we know it, and certainly there has been some fudging of the civ-mil divide in recent decades, and certainly the use of CIA drone operatives is a good example of that, but can the blame really be laid at the doorstep of these weapon systems? And how would we know? Among other things, Singer’s own earlier writing on private military firms suggests this problem is not limited to stand-off weaponry…

3) Proposition #3: The availability of stand-off weapons increases the likelihood that democratic leaders will circumvent democratic deliberation about the use of armed force. It does seem to be happening in this case, but again is the technology causing this problem or simply making an old trend especially obvious?

4) Proposition #4: Democratic deliberation reduces the likelihood of militarized interstate disputes. Again, please, let’s treat this as a hypothesis rather than an assumption. I will have more to say about how convincing it is after I revisit the more recent democratic peace literature with my doctoral students this term but my sense as a political scientist is that this was always simplistic at best and has been problematized further by some new studies.

5) Proposition #5: Citizens’ and policymakers’ estimates of physical risk from war to the nation’s own citizens are a moderating influence over war initiation decisions. Makes intuitive sense, but I know too much about the strategies nations use to trick citizens into war to take this at face value. How true is this proposition in broad terms? If I wanted to find out, I’d probably compare democracies that did and did not have a conscription policy to see whether institutionalized risks of war to citizens lead countries to be less war-prone internationally, other things being equal. But I find myself doubting it, since historically war has declined along with conscription as a practice, so I wonder how this is presumed to work…

Point is, none of these propositions are obviously true or uncontroversial. I expect to explore several of them in more detail on the basis of the empirical studies I dig up in the next weeks. Readers: can you suggest sources, studies or other ways of testing these hypotheses to guide me as I dig? Or other testable propositions underlying the drone debate?


Anwar al-Awlaki and Targeted Killing: A quick, first, and uneasy reaction

*post written with comments from fellow duck Ben O’Loughlin

The world media is reporting that Anwar al-Awlaki has been killed in Yemen – although details are very sketchy at this point.

It is very clear to me that Awlaki was not a particularly nice person – he advocated some rather terrible things (even before 9/11 supposedly radicalised him). His followers have certainly been linked to terrorism, including the Fort Hood shooting.

However, I must admit that I am somewhat troubled by this turn of events. Earlier this year I suggested that the targeted killing of bin Laden was acceptable under international law. He had been linked to the financing and organising of terrorist attacks around the world, and this was well established before his death.

But I have yet to see any reports suggesting that Awlaki was tied to material support for terrorist attacks. I think this changes the legal game substantially. It essentially suggests that *we* (whoever that is) are now targeting people for their ideas rather than for what they are actually doing. Pushed to its logical extreme, a person might unintentionally inspire others to commit violent acts. Should they be eliminated?

I’m no fan of Awlaki and I will certainly not mourn his passing (really – he seems like a total jerk), but this raises serious questions about the targeted killing program: who is being targeted and why. Presumably, in the case of targeted killing, it’s important that there is evidence BEFORE the killing, rather than a scramble afterwards to piece together a case.

I hope there is evidence that he actually materially supported terrorism.

Edit: Will McCants has linked to an article at Foreign Policy from November 2010 which argues the case for taking out Awlaki. I still have mixed feelings about this. I will feel better if there is a case/dossier of evidence that can be brought forward – and I still maintain that this case should have been made before striking out at him. 


Crunching Drone Death Numbers

The Monkey Cage has published a detailed guest post by Christine Fair on the drone casualty debate. Fair takes leading drone-casualty-counters (Bergen and Tiedeman’s New America Foundation database and new numbers out from the Bureau of Investigative Journalism) to task for their methodology, in particular focusing on their sourcing:

While these methodologies at first blush appear robust, they don’t account for a simple fact that non-Pakistani reports are all drawing from the same sources: Pakistani media accounts. How can they not when journalists, especially foreign journalists, cannot enter Pakistan’s tribal areas? Unfortunately, Pakistani media reports are not likely to be accurate in any measure and subject to manipulation and outright planting of accounts by the ISI (Pakistan’s intelligence agency) and the Pakistani Taliban and affiliated militant outfits.

Pakistani journalists have readily conceded to this author that perhaps as many as one in three journalists are on the payroll of the ISI. In fact, the ISI has a Media Management Wing which manages domestic media and monitors foreign media coverage of Pakistan. Even a prominent establishment journalist, Ejaz Haider, has questioned “What right does this wing have to invite journalists for ‘tea’ or ask anyone to file a story or file a retraction? The inquiry commission [to investigate the death of slain journalist Shehzad Saleem] should also look into the mandate of this wing and put it out to pasture.”

Pakistani journalists have explained to this author that, with respect to drone strikes, either the Pakistani Taliban call in the “victim count” or the ISI plants the stories with compliant media in print and television—or some combination of both. In turn, the western media outlets pick up these varied accounts. Of course the victim counts vary to give the illusion of authenticity, but they generally include exaggerated counts of innocents, including women and children. Of course as recent suicide bombings by females suggest, women should not be assumed innocent by virtue of their gender.

Thus, these reports mobilized by NAF and BIJ, despite the claims of both teams of investigators, cannot be independently verified. At best, their efforts reflect circular reporting of Pakistani counts of dubious veracity.

I think this is a really important analysis and share Fair’s concerns about the reliability and validity of these methods. I haven’t looked closely at BIJ’s dataset, but I’ve written previously about not only the sourcing problem, but also coding anomalies and conceptual problems with the NAF methods. The Jamestown Foundation, which has another drone-casualty dataset that Fair doesn’t address, has its own problems.

All that said, having made a clear case that we can’t really verify the numbers, I think it’s very strange that Fair arrives at the conclusion, by the end of her article, that drones must therefore be a pro-civilian technology:

U.S. officials interviewed as well as Pakistani military and civilian officials have confirmed to this author that drones kill very few “innocent civilians.” Indeed, it was these interviews that led me to revise my opinion about the drone program: I had been a drone opponent until 2008. I now believe that they are the best option.

It’s hard to argue with her claims that drones might be more discriminate than ‘regular airstrikes,’ an argument that largely rests on her observation that the drone program is more highly regulated and this would be obvious to the public if the CIA didn’t have a variety of incentives to keep mum about the details. But in the absence of good data comparing the kill ratios – which we really don’t have for non-drone-strikes either – it’s hard to make this case definitively. Also, relative to what? A law-enforcement approach that involved capturing and trying terrorists rather than obliterating them might or might not be more ‘pro-civilian’ – though it would certainly be more costly in terms of military life and assets. We simply don’t know.

Regarding how we might know, I also don’t buy Fair’s argument that attempts to verify the civilian status of victims through interviews would be fallacious. This view is based on the assumption that the important question is whether a victim was actually a terrorist. But since extrajudicial executions of suspected criminals without due process are a violation of human rights law, that’s not the right question to ask. From a war law perspective, the right question is whether the individual was directly engaged in hostilities at the time of the attack.

Now, Stephanie may well jump in with some more nuance on percolating developments in war law, the notion that direct participation is being expanded in some ways to include off-duty terror leaders, etc; I agree this is a complex grey area in the law but it’s beyond the scope of this post. My point here is that the messiness of this debate is directly related to the conceptual muddiness introduced by the shift toward thinking about the combatant/civilian divide in terms suspects/non-suspects rather than participation/non-participation. Fair’s analysis is only another example of that.

I am not anti-drone. As I wrote earlier this year, I think there’s a fair amount of unwarranted hype about them, and that the real problems are how they’re used, not the nature of the weapons themselves. But this tendency to use the drone discussion to legitimize a reconceptualizing of the very definition of ‘legitimate target’ is extremely problematic. And while I support Fair’s argument that some independent mechanism should be established for determining casualty counts and disaggregating the civilian from the combatant dead, I do not share her belief that this is impossible or that it hinges on the distinction between ‘innocent’ and ‘guilty’ – concepts that require due process to determine. If such fact-finding were done – and it should be, as the Oxford Research Group argues – I would support a coding method that reflects actual humanitarian law, not current US policy.

Here is another perspective for those following the issue. The Wall Street Journal also has a piece out today on the topic.


Targeting Targeted Killing

I was asked to step in at the last minute to write a chapter on targeted killing for a textbook on issues in the War on Terror. Given the recent OBL killing and debate about raids, etc., I was surprisingly excited at the prospect of engaging with the issue.

Although my chapter is almost done (no really, Richard, it’s on its way!) I’ve noticed some problems with researching the topic and trying to draw general conclusions as to whether or not it is a good or a bad policy.

1. What are you people talking about?

When talking about “targeted killing”, everyone means something different. Some are talking about assassination (Michael Gross for example); some specifically are talking about the Israeli policy used against alleged Palestinian militants post-November 2002 (such as Steven David); some are talking about the targeting of terrorist leaders generally (decapitation in Audrey Kurth Cronin’s book How Terrorism Ends). Nils Melzer, on the other hand, seems to be talking about every kind of state killing in and out of warfare, from the CIA in Vietnam, to US tactics against Gaddafi in the 1980s, to Israel-Palestine post-2000.

And yet all of these things are radically different policies from each other. While decapitation refers to the removal of the leadership of a group, Israel’s policy targeted anyone who was seen as part of the upper-to-middle management of terrorist organizations. It’s not just the leadership that was targeted, but the bomb-makers, planners, etc. The US drone policy seems to target “militants” generally and is done in the context of ongoing armed conflict (although I concede this is up for debate). Whereas the OBL raid was clearly targeting just OBL.

Yet many (like Dershowitz in this post here or Byman here) conflate ALL of these kinds of killing where it is convenient for their arguments. For example, shorter Dershowitz: the US has killed Osama, ergo Israel’s tactics are legitimate. Leaving the legitimacy issue aside for a moment, these operations were two INCREDIBLY different things. You simply can’t compare one to the other – which leads me to my next point…

2. Israel-Palestine is crazy sui generis

To put it mildly, the Israeli-Palestinian situation is unlike any other situation in the world. Basically, you have a well-armed democratic country in a state of confused hostilities with an internationally recognized movement (with some branches that engage in politically violent acts) directly beside it that is engaged in a struggle for independence. This is pretty much the opposite of the United States’ drone tactics in the Af-Pak region, where drones are being controlled from far away (military bases or mainland USA) against territories that are also far away to combat a threat that is, again, far away.

To draw conclusions from one and to apply it to the other simply does not make any sense. The policies are carried out in very different ways, justified very differently (Israel has a process involving courts, political figures, etc; the US president seems to be the sole authorizing force on many of the attacks against militants/terrorists). Comparing targeted killing apples and drone oranges doesn’t really seem to work.

And yet, almost all of the work on targeted killing from which assessments are drawn has been based on Israel's policy in Palestine. The three major studies I can find are Kaplan et al. (2005), Hafez and Hatfield (2006), and Mannes (2008).

The one exception I have found is Cronin's book How Terrorism Ends, where she also looks at policies of targeting and killing militants in the Philippines and Russia. As a popular-ish book, it doesn't go into much methodological detail; it simply states what happened to various movements/organisations after their leaders were killed. (Cronin is also sceptical that the tactic works, though she does concede that the Israeli policy may have saved some Israeli lives.)

So while it might be the only model we have decent statistics on, I don't think the Israeli policy of targeted killing is an appropriate one for building a comprehensive argument about targeting leaders generally.

3. Assessment of effectiveness requires counterfactual history

Many of the studies above conclude that the Israeli-Palestinian policy has basically no effect whatsoever. Statistics don't lie, I suppose. But I can't help feeling that something is missing here. While these studies don't show a significant decrease in attacks, they don't show a significant increase either. Who knows what would have happened without the policy: there could have been more attacks, there could have been fewer, or the number could have stayed the same. The problem both defenders and detractors of targeted killing encounter is that we don't really know what would have happened otherwise. Drawing conclusions about success or failure necessarily involves guessing at a counterfactual when in reality we don't know, and have to rely on assumptions and guesswork.

In summary, it seems to me that 1) there is a dearth of evidence from which to draw reasonable conclusions, and 2) the policies are so different that comparison is impossible, as is extending the lessons of one case study to another.

Given all this, I wonder whether such policies should instead be justified (David, 2003) or denounced (Stein, 2003; Gross, 2003 and 2006) on a normative basis. For example, David justifies the policy as fulfilling a need for revenge (which he sees as morally justifiable), while Gross argues against it because the use of collaborators in gathering the necessary intelligence is immoral.

This isn't to say that quantitative studies on the issue are useless; on the contrary, we desperately need more information. But this seems to me a case where a discussion of morality may be more productive than debating an effectiveness that is almost impossible to measure, at least for the immediate future.

I would be most grateful for any suggestions of further qual/quant studies on the topic from Duck readers. (I see that CATO has a special issue out on the US and targeted killing. However, as it does not appear that it will be fully uploaded until 13 June, I'm kind of out of luck for my chapter and this post.)
