Earlier this week I posted about the classified DOD study showing a high rate of civilian harm from drone attacks. Until the study is declassified, analysts can say little more about this finding, so instead let me discuss at some length the one other thing we do know: the media has badly misreported this story.
The headline should not be “New Study: Drones Kill Ten Times More Civilians” as so many articles are claiming. It should read “DOD Analyst: Classified Study Shows Drones Likelier to Cause Civilian Harm.”
What are the differences? There are three. The first is the difference between reporting a fact that can be verified and reporting what someone said about an otherwise unverifiable fact. The second is the difference between civilian deaths and civilian harms, between absolute numbers and probabilities. And the third is the failure to foreground the bigger story here: government secrecy about the death toll from drones. Let’s walk through these one by one.
Reporting a Finding versus Reporting a Mention of a Finding
The story broke in a carefully worded article in The Guardian. Spencer Ackerman accurately reported a discussion with an analyst who described a classified study. But when other news outlets picked the story up, it was simplified and transformed, apparently without much fact-checking by reporters. The result is that much of the news coverage appears to report on the findings of a “new study” itself.
But the “new study” being cited and linked to by most newspapers is not the study Lewis described to Ackerman and does not itself include any findings or statistics on civilian casualties from drone attacks. That “study” is a policy paper in the journal Prism, published by National Defense University’s Center for Complex Operations (I will call this the “Prism Report”). The Prism Report is an overview of the civilian protection gap in US military operations, not a study of the comparative incidence of civilian harm from manned/unmanned craft.
The Prism Report does assert that civilian casualties are higher from drone attacks, presumably based on knowledge one of its authors had, but it supports this assertion with no footnote and provides no data that might be analyzed to verify the claim. So any reporters using this link as a source for the actual finding are misleading the public. Indeed, the article contains only two citations: one to an op-ed, and one to a very significant, well-documented September 2012 study on civilian casualties from drones published by the Campaign for Innocent Victims in Conflict in collaboration with Columbia Law School’s Human Rights Clinic (I will call this the “CIVIC Report”). CIVIC is an NGO I respect highly, and the 88-page CIVIC Report is a masterful analysis of civilian harms from drones, US military culture in Afghanistan, and what it would take to change military doctrine to better protect civilians. However, the CIVIC Report also contains no analysis I could find of how drone attacks compare to manned air attacks.
So a skeptical reader might click on those links, find no substantiating evidence for the claim being made, and either assume the press is mistaken or the story is cooked up. Or a non-discriminating reader might click on those links, see an assertion without support, and take it on faith. Either tendency is bad for democracy, and the media has a responsibility to do better.
The real source for the statistic, as I think reporters have a duty to make clear, is not either of those two reports you can read, but rather an interview with Dr. Lawrence Lewis, co-author of the Prism Report. Lewis told the Guardian, and vaguely alluded to in the Prism piece, that he had carried out a separate study for a defense research firm using classified data. I will call this the “Classified DOD Study.” And this study, according to Dr. Lewis, found that drones are ten times likelier to cause civilian harm than manned air attacks.
Likelihood of Harms versus Magnitude of Deaths
This leads me to the second important set of errors in the media reporting of this story, errors not of sourcing but of substance.
First: the finding (as described by a CIVIC press release) is that drones are “more likely to harm civilians than manned aircraft.” This is being reported as “drones kill more civilians than jet fire.” But the reported finding is about civilian harm, not simply death tolls. Civilians can be “harmed” in many ways in war. They can be killed. They can be injured or maimed. They can lose family members and property. We have no idea how the Classified DOD Study measured civilian “harms,” but it is reasonable to think that of those harmed civilians, dead civilians are a smaller number than the total.
Second, according to CIVIC’s press release on the subject, Lewis reported a finding about the likelihood of civilian harm, not the total number of civilians harmed. This is a critical difference: a tenfold higher likelihood of harm does not mean ten times more dead bodies. The statistic tells us nothing about how many civilians in total are harmed, much less killed, in either type of attack – indeed, the executive summary of the report states that total numbers of casualties are actually about the same for drones and manned attacks. What has been claimed is that the likelihood of some civilian harm in a specific incident is, on average, greater with drones.
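To see how a tenfold likelihood gap can coexist with roughly equal total casualties, here is a toy calculation. Every number in it is hypothetical and invented purely for illustration; none comes from the classified study or its executive summary.

```python
# Entirely hypothetical numbers: NOT from the classified study.
# They show how drone strikes can be ten times likelier to cause
# civilian harm per engagement while total casualties come out equal.

drone_engagements, drone_incidents = 100, 10     # 10% of engagements harm civilians
manned_engagements, manned_incidents = 1000, 10  # 1% of engagements harm civilians

casualties_per_incident = 5  # assume equal severity per incident for both

drone_rate = drone_incidents / drone_engagements     # 0.1
manned_rate = manned_incidents / manned_engagements  # 0.01

print(drone_rate / manned_rate)                   # tenfold likelihood gap
print(drone_incidents * casualties_per_incident,  # 50 total casualties
      manned_incidents * casualties_per_incident) # 50 total casualties
```

With these made-up figures the per-engagement likelihood of harm differs by a factor of ten, yet each platform produces the same fifty total casualties, because the (hypothetical) manned fleet flies ten times as many engagements.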
The Real Story: Government Secrecy
If so, that’s pretty important. How valid is this finding? How was it generated? As I wrote before, we don’t know and can’t know anything about any of this because it is all classified. Not only is the data classified, so is the entire report (though you can read the executive summary here). To verify and assess that finding, to incorporate it into our understanding of the humanitarian value of drones, the public needs the US Government to declassify the study itself at a minimum, and preferably the data itself for replication purposes.
The government’s failure to release the information is the real story here – one not getting a lot of play in the media discussion. (Bloggers are doing better.)
On the one hand, it is good news that the Obama Administration is actually analyzing the civilian impact of its drone operations (though it needs to replicate this study outside of Afghanistan to see if the finding is generalizable). But if the government is sitting on a data-set of body counts with relevance to the debate over its drone program, it is important that it allow experts and the public access to its findings at a minimum, if not the data itself, in the service of a more informed public debate.
Unfortunately, ‘body count’ data (i.e. information on the illegal killing of persons outside the United States) will only be revealed by a Wikileaks-type organization. Does anybody recall Iraq anymore? ‘We don’t do body counts.’ Enter Wikileaks. What is worrying about the political science/IR debate on these questions is the willingness to critique, and throw to the wolves, people like Assange and Snowden while still demanding data like this be released, based on a totally groundless faith in U.S. politics.
I’m not actually demanding the data be released. I think there are often very good reasons for keeping data itself confidential. Researchers like me do this all the time with sensitive human subjects data, for example. But what I think should be released is the report, along with a general description of what the data looks like, how it was gathered, coded and analyzed, and how the findings were generated, so people can evaluate them. All that is possible without releasing any sensitive data itself.
Is ‘the classified study’ the study for which we have the executive summary? I thought it was for some reason, I guess because Larry Lewis is involved in both.
Correct. The executive summary is unclassified; the study itself, including the explanation of methods, is classified.
Two clarifications from the unclassified summary:
First, it’s definitely casualties, not harms (I don’t know why CIVIC went with “harm”); losing a relative or property, as you suggest might be the case above, wouldn’t have been recorded.
Second, the per-“engagement” rate is the proportion of engagements in which civilian casualties are recorded; the per-“incident” rate is the number of civilian casualties when an engagement with any civilian casualties occurs. So the summary essentially says that when something happens, it’s about as bad for both regular pilots and drone pilots; but it happens much more often with drones.
What we don’t know from the unclassified summary is the overall rate for either.