Policymakers Just Don’t Understand

4 January 2011, 1401 EST

Erik Voeten, one of my colleagues at Georgetown, writes at the Monkey Cage that:

International relations, and especially (inter)national security, is the subfield of political science where the gap between policy makers and academics is most frequently decried. This is not because political science research on security is less policy relevant than in other subfields. Quite the contrary, it is because political science rather than law or economics is the dominant discipline in which policy makers have traditionally been trained. In short: there is more at stake.

Erik takes as his point of departure an exchange between Justin Logan and Paul Pillar at the National Interest (itself riffing on a forum surrounding Michael Mosser’s “Puzzles versus Problems: The Alleged Disconnect between Academics and Military Practitioners”).

I understand complaints that much IR scholarship does not seem relevant to the kind of questions policy-makers are struggling with. Yet, incessant complaints about the rigor or difficulty of scholarly work reveal more about policy-makers than about academia. IR theory is for the most part not very hard to understand for a reasonably well-trained individual. The possible exception is game-theoretical work, which constitutes only a small percentage of IR scholarship. My bigger worry is that foreign policy decision makers are avoiding any research using quantitative methods even when it is relevant to their policy area. There is a real issue with training here. My employer, Georgetown’s school of foreign service, at least requires one quantitative methods class for master’s students (none for undergrads). Many other schools have no methods requirement at all. By comparison, Georgetown’s public policy school requires three methods classes. It is not obvious to me why those involved in foreign policy-making require less methods training for their daily work. The consequence is, however, that we have a foreign policy establishment that is ill-equipped to analyze the daily stream of quantitative data (e.g. polls, risk ratings), evaluate the impact of policy initiatives, and scrutinize academic research.
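To make concrete the kind of quantitative literacy at stake here, consider the arithmetic behind a typical headline poll. The sketch below (in Python, with entirely invented numbers) computes the standard 95 percent margin of error for a simple random sample; nothing in it goes beyond an introductory methods class, which is rather the point.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a simple random sample.

    p -- observed proportion (e.g. 0.52 for 52% support)
    n -- sample size
    z -- critical value (1.96 corresponds to 95% confidence)
    """
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 52% support among 600 respondents.
moe = margin_of_error(0.52, 600)
print(f"52% +/- {moe:.1%}")  # prints: 52% +/- 4.0%

# A second (equally hypothetical) poll showing 48% would not be
# evidence of a real shift: the two confidence intervals overlap.
```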

I agree with Erik that policy students lack sufficient methodological training, but disagree with his sole focus on quantitative training. Policymakers are poorly equipped to deal with the daily flood of qualitative data they confront, including data best described as ethnographic, discourse-analytic, and narrative in character. They also need to better understand key social-scientific concepts, particularly those involving cultural phenomena.

Once we move beyond the relatively easy case that foreign-affairs students need more comprehensive methodological training, we confront a more basic disconnect: many IR scholars, whether in security studies or International Political Economy (IPE), don’t adequately understand the difference between “policy implications” and “policy relevance.”

Showing that, for example, trade interdependence lowers the chances of war has clear policy implications, but it isn’t all that relevant to the specific challenges policymakers face. A statistical regularity across many dyads says little about what to do in any particular case: no careful US decision-maker would ignore Chinese military power (or vice versa) simply because two states with market economies are less likely, on average, to go to war with one another. Or, to take a different kind of example, Erik’s path-breaking account of how UN Security Council approval enhances the legitimacy of the use of force also has policy implications, but isn’t all that directly useful to policymakers.
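To see why in miniature, here is a toy calculation (all numbers invented for illustration, not drawn from any actual study). Even a large relative effect of interdependence on conflict leaves the absolute risk in any single dyad-year nearly unchanged, which is why it cannot settle a case-specific decision:

```python
# Invented numbers, purely illustrative.
base_rate = 0.005   # suppose: 0.5% chance of dispute onset per dyad-year
risk_ratio = 0.5    # suppose: interdependence halves that risk

interdependent_rate = base_rate * risk_ratio
print(f"non-interdependent dyad: {base_rate:.2%} per year")            # 0.50%
print(f"interdependent dyad:     {interdependent_rate:.2%} per year")  # 0.25%

# A 50% relative reduction amounts to a quarter of a percentage point
# in absolute terms. A planner sizing up a specific rival's capabilities
# gets no license from a favorable population-level average.
```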

Indeed, we need to be careful about the tenor of these arguments, which represent something of a spillover from the journalists-need-to-listen-to-Americanists genre so popular over the last two years. Journalists, and others charged with “making sense” of current events for the public, would do well to pay more attention to political science, sociology, history, and other disciplines. Much of what passes for journalistic accounts of, say, electoral outcomes amounts to repeating back knowledge gained from privileged access to elite conversations (and, to circle back to the need for better methodological training, from polls they don’t know how to interpret). But those conversations themselves usually constitute flawed “standard stories” (PDF) about political causation.

Or, even worse, they rely on intellectually flabby pundits and “deep thinkers” better skilled at crafting pithy phrases, marketing themselves, and appearing on television than at providing coherent analysis.

Policymakers, of course, also benefit from this kind of “making sense,” and academic knowledge can, and should, play a larger role in that process. But let’s not kid ourselves about the policy relevance of much of academic international studies. And in this context I personally worry more about the opposite danger: the flawed study that makes it through peer review (but that would never, ever, ever happen, right?) and goes on to influence policy debates. As an academic based in DC, and one with a small amount of policy experience, I’ve seen firsthand how the lure of “making a splash” via “policy relevant” research distorts the production of academic knowledge. It isn’t a pretty sight for anyone involved.

Finally, I think academics underestimate the degree to which the policy apparatus already has, more or less, in-house academics (in the intelligence community, for example) who do the range of stuff we do, only with access to classified information. More connections between these de facto academics and de jure academics would probably benefit policymaking, insofar as they would broaden the range of methods in use, improve how those methods are implemented, and diversify the findings that shape the production of state analytic knowledge.