Some Rambling Thoughts on the Qual/Quant Pseudo-Divide

3 June 2009, 1720 EDT

While perusing Drew Conway’s excellent blog Zero Intelligence Agents in response to his comment on a previous post, I came across this post of his reacting to Joseph Nye and Daniel Drezner’s recent bloggingheads diavlog on the theory/policy debate.

You can watch the relevant portion above, though Conway has summarized a key point:

Drezner notes that quantitative scholars tend to have an ‘imperialistic attitude’ about their work, brushing off the work of more traditional qualitative research.

To be exact, by “quantitative scholars” Drezner was referring to those who use “statistical methods and formal models” and by “traditional qualitative research” he meant specifically “more historical / deep background knowledge that’s necessary to the policymaker.” Conway goes on to concur:

In some respect I agree. As a student in a department that covets rational choice and high-tech quantitative methods, I can assure you none of my training was dedicated to learning the classics of political science philosophy. On the other hand, what is stressed here—and in many other “quant departments”—is the importance of research design. This training requires a deep appreciation of qualitative work. If we are producing relevant work, we must ask ourselves: “How does this model/analysis apply to reality? What is the story I am telling with this model/analysis?”

I’d been wanting to put in my two cents since I saw this particular bloggingheads, so I’ll just do so now. I think there are three unnecessary conflations here.

First, between qualitative and quantitative methodologies as broad approaches and the specific methods within each. Drezner is comparing large-N statistical studies to historical case studies. But case study research is only one type of qualitative work – and not all of the other types are any more useful to policymakers than large-N statistical studies.

Second, I see a confusion here between qualitative methods as an approach to doing social science and interpretivism as a form of theory (and, for that matter, between large-N empirical studies and abstract formal modeling). In his post, Conway equates qualitative methods not with historical descriptive work but with political theory (or, as he puts it, political philosophy) and interpretivism. There is a wide continuum of qualitative methods, some much more scientifically rigorous – that is, focused on description and explanation rather than interpretation or prescription – than others. I also think there is a similar difference between large-N statistical studies and formal modeling – one relies on data to test theories, while the other relies on abstract math and logic and is largely divorced from real-world evidence.

In both cases, I think the imperialism being described above (if any) is really the imperialism of empirical science over pure theory. Any imperialism of quantitative methods over qualitative methods must therefore be judged, if it exists, only against those qualitative approaches that are actually designed to be scientific. Within that context, you may be surprised how much respect these scholars have for one another’s work – though perhaps that’s just based on my good experiences collaborating and communicating with quantoids, experiences others may not share.

Third and finally, I think researchers and their methods are being conflated here. Bloggingheads.tv is perhaps most guilty of this by labeling the clip “quals v. quants,” as if these methods were mutually exclusive and as if scholars were defined by the methods they use. (And in fact, I just noticed I did it myself in the previous paragraph with the term “quantoids.”) But most of the doctoral dissertations I see coming out today use mixed methods – that is, some combination of case studies and statistics. And much qualitative work, including much of my own, is actually quantitative as well. It’s qualitative insofar as I’m studying text data and using grounded theory to generate analytical categories. But it’s quantitative in the sense that I convert those categories (codes) into frequency distributions that tell us something about the objective properties in the text, and in the sense that I use mathematical inter-rater reliability measures to report just how objective those properties are.
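(For the methods-curious, here is a minimal, purely illustrative sketch in Python of what I mean by turning codes into frequency distributions and checking them with an inter-rater reliability measure. The coders, the codes, and the choice of Cohen’s kappa as the reliability statistic are all hypothetical stand-ins for illustration, not a description of my actual workflow or tools.)

```python
from collections import Counter

def code_frequencies(codes):
    """Frequency distribution of the analytical codes applied to text segments."""
    return Counter(codes)

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders who each assign one code per text segment."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: share of segments where the two coders chose the same code
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement, estimated from each coder's marginal code frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of the same five text segments by two coders
coder_a = ["threat", "identity", "threat", "economy", "identity"]
coder_b = ["threat", "identity", "economy", "economy", "identity"]

print(code_frequencies(coder_a))                 # Counter({'threat': 2, 'identity': 2, 'economy': 1})
print(round(cohens_kappa(coder_a, coder_b), 2))  # 0.71
```

The point of a statistic like kappa is that it corrects raw percentage agreement for the agreement two coders would reach by chance, which is what makes it a defensible way to report how objective the coded properties actually are.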

Anyway, as a self-identified qualitative scholar whose work varies between interpretivism and rigorous social science studies of text (and who therefore is quite conscious of the difference), but who is also quite open to collaborating with quantitative researchers depending on the nature of the problem I’m working on, I hate to buy into a discourse that pigeonholes IR scholars as one thing or another.

Ultimately, I think the distinction Nye and Drezner are really talking about here is not methodological. Rather, it’s between those scholars capable of translating their findings (through whatever method) into language accessible to policymakers, and those who refuse to learn those skills. As I argued once before, perhaps this process of translation is a “methodology” of its own that we, as a discipline, should be incorporating into our doctoral curricula.