Tag: qualitative methods

Is “Camp Qual” really our best option?

This is a guest response to Simon Frankel Pratt’s musing on methods. Lucas Dolan is a PhD Candidate at American University’s School of International Service.

In a recent contribution, Simon Frankel Pratt offers an incisive conceptual dismantling of the quantitative v. qualitative dichotomy in social science research. Pratt points out that while “quantitative” refers to a clear community of practice centered around statistically facilitated inductive causal inference, “qualitative” lumps together several distinctive research communities. Though not all named in the post, this implicitly includes interpretivists, relational and practice turn scholars, feminists, and critical theorists of all varieties. Importantly, “qualitative” also includes small-N positivists, who share a logic of inquiry with the “quantitative” camp but prefer to express their knowledge claims through ordinary language. Clearly, then, “qualitative” research communities differ substantially from one another in scientific ontology and in the logics of inquiry they utilize, but many of them nonetheless share certain affinities as a result of being outsiders in the field.

I agree wholeheartedly with Pratt’s analyses—both regarding the incoherence of the dichotomy and the work it performs as an expression of disciplinary power relations. It is because of this that I was so confused by Pratt’s conclusion on the “what is to be done?” side of this question.


Has regression analysis shrunk our imaginations?

I realize this is a weird thing for me to ask, since the vast majority of my publications–as well as a few of my works in progress–have relied on regression. But I was wondering this recently based on my own and others’ responses to a new project.

I was presenting qualitative research recently that tried to make the case for ideas mattering in a conventional security studies topic (I’m being intentionally vague). I had a lot of evidence that they did, but the way they mattered was a bit more nuanced than the way material factors mattered. An audience member took issue not with my evidence but with my interpretation; they argued that this seemed to show ideas don’t really matter at all. And I had similar thoughts while writing the paper: not that ideas didn’t matter, but that their effect fell short of the type we were used to seeing. So I had to decide how defensive I wanted to be in discussing my results.


The Value Alignment Problem’s Problem

Having recently attended a workshop and conference on beneficial artificial intelligence (AI), I found that one of the overriding concerns is how to design beneficial AI. To do this, the AI needs to be aligned with human values; this requirement is known, following Stuart Russell, as the “Value Alignment Problem.” It is a “problem” in the sense that, given the way one has to specify a value function to a machine, however one creates an AI, it may try to maximize that value to the detriment of other socially useful or even noninstrumental values.



Put a DA-RT in it

If you have been living under a rock, as I apparently have, then like me you may be unaware of the DA-RT controversy that is brewing in the American Political Science Association.* Turns out that some of our colleagues have been pushing for some time to write a new set of rules for qualitative scholarship that, among other things, will require “cited data [be] available at the time of publication through a trusted digital repository” [this is from the Journal Editors’ Transparency Statement, which is what is being implemented Jan. 15]. The goal, I gather, is to enhance transparency and reproducibility. A number of journal editors have signed on, although Jeffrey Isaac, editor at Perspectives on Politics, has refused to sign onto the DA-RT agenda.

There are a number of reasons to doubt that the DA-RT agenda will solve whatever problem it aims to address. Many of them are detailed in a petition (which I have signed) to delay implementation of the DA-RT protocol, currently set for January 15, 2016. To explore how posting data is largely a matter of optics that does little to enhance transparency or reproducibility, I want to run through a hypothetical scenario for interviews, arguably the qualitative method most prone to suspicion.

Regardless of the subject, IRBs nearly always insist on the anonymity of interviewees. This means that, in addition to names and identifying markers being scrubbed, recordings of interviews cannot be made public (if they even exist, which many IRB decisions preclude). Therein lies the central problem: meaningful transparency is impossible, and as a result reproducibility as DA-RT envisions it is deeply impaired. Even if someone were interested in reproducing a study relying on interviews, doing so would be hindered by the fact that s/he would not be able to interview the same people as the person(s) who undertook the study (this neglects, of course, that the reproduction interviews could not be collected at the same time, introducing the possibility of contingency effects). Given this very simple and nearly universal IRB requirement, there is fundamentally nothing to stop a nefarious, ne’er-do-well academic poser from completely fabricating the interview data that gets posted to the digital database DA-RT requires, because there is no way to verify it (e.g., call up the person who gave the interview and ask if they really said that?!).


Call for Participants: Interpretive and Relational Research Methodologies

“Interpretive and Relational Research Methodologies”
A One-Day Graduate Student Workshop
Sponsored by the International Studies Association-Northeast Region
9 November, 2013 • Providence, Rhode Island

International Studies has always been interdisciplinary, with scholars drawing on a variety of qualitative and quantitative techniques of data collection and data analysis as they seek to produce knowledge about global politics. Recent debates about epistemology and ontology have advanced the methodological openness of the field, albeit mainly at a meta-theoretical level. And while interest has been sparked in techniques falling outside well-established comparative and statistical modes of inference, opportunities for scholars to discuss and flesh out the operational requirements of these alternative routes to knowledge remain relatively infrequent.

This ninth annual workshop aims to address this lacuna, bringing together faculty and graduate students in a pedagogical environment. The workshop will focus broadly on research approaches that differ in various ways from statistical and comparative methodologies: interpretive methodologies, which highlight the grounding of analysis in actors’ lived experiences and thus produce knowledge phenomenologically and hermeneutically; holistic case studies and forms of process-tracing that do not reduce to the measurement of intervening variables; and relational methodologies, which concentrate on how social networks and intersubjective discursive processes concatenate to generate outcomes.



Afternoon Miscellany: Latour, Podcasts, and Big Data

This post would be much more interesting if it concerned the nexus of its three subjects. Sadly, it does not.

  1. I’m working on a forum piece with Vincent Pouliot on Actor-Network Theory (ANT) — one written from the explicit perspective of outsiders. We’ve been puzzled by the apparent lack of theorization of “the body” in Latour. For example, if social relations must be ‘fixed’ by physical objects, why isn’t the human body one such object? If any of our readers are able to weigh in, I’d appreciate it.
  2. I’ve been considering discontinuing the m4a versions of the Duck of Minerva podcast. They take much more time to produce than the mp3 versions; most people seem to listen to the mp3 versions anyway. Is there a constituency in favor of retaining the m4a variants, i.e., the ones with chapter markers and static images?
  3. Henry Farrell tweeted a paper by Gary King on setting up quantitative social-science centers. Henry highlights the section on the end of the quantitative-qualitative divide. I’m sympathetic to it: I certainly feel the pull of teaming up with computationally savvy colleagues to do interesting things with “big data,” and I often find myself thinking about how it would be neat to use particular data to uncover interesting relationships. But it also strikes me as a bit cavalier about the importance of questions — and forms of empirical analysis — that don’t fit cleanly within that rubric. Nonetheless, it is right about the direction in which sociological and economic forces are driving social-scientific research.



Winecoff vs. Nexon Cage Match!

Kindred Winecoff has a pretty sweet rebuttal to my ill-tempered rant of late March. A lot of it makes sense, and I appreciate reading a graduate student’s perspective on things.

Some of his post amounts to a reiteration of my points: (over)professionalization is a rational response to market pressure, learning advanced methods that use lots of mathematical symbols is a good thing, and so forth.

On the one hand, I hope that one day Kindred will sit on a hiring committee (because I’d like to see him land a job). On the other hand, I’m a bit saddened by the prospect because his view of the academic job market is just so, well, earnest.  I hate to think what he’ll make of it when he sees how the sausage actually gets made.

I do have one quibble:

While different journals (naturally) tend to publish different types of work, it’s not clear whether that is because authors are submitting strategically, editors are dedicated to advancing their preferred research paradigms, both, or neither. There are so many journals that any discussion of them as doing any one thing — or privileging any one type of work — seems like painting with much too wide a brush.

Well, sure. I’m not critical enough to publish in Alternatives, Kindred’s not likely to storm the gates of International Political Sociology, and I doubt you’ll see me in the Journal of Conflict Resolution in the near future. But while some of my comments are applicable to all journals, regardless of orientation, others are pretty clearly geared toward the “prestige” journals that occupy a central place in academic certification in the United States.

But mostly, this kind of breaks my heart:

I’ve taken more methods classes in my graduate education than substantive classes. I don’t regret that. I’ve come to believe that the majority of coursework in a graduate education in most disciplines should be learning methods of inquiry. Theory-development should be a smaller percentage of classes and (most importantly) come from time spent working with your advisor and dissertation committee. While there are strategic reasons for this — signaling to hiring committees, etc. — there are also good practical reasons for it. The time I spent on my first few substantive classes was little more than wasted; I had no way to evaluate the quality of the work. I had no ability to question whether the theoretical and empirical assumptions the authors were making were valid. I did not even have the ability to locate what assumptions were being made, and why it was important to know what those are.

Of course, most of what we do in graduate school should be about learning methods of inquiry, albeit understood in the broadest terms. The idea that one does this only in designated methods classes, though, is a major part of the problem that I’ve complained about. As is the apparent bifurcation of “substantive” classes and “methods of inquiry.” And if you didn’t get anything useful out of your “substantive” classes because you hadn’t yet had your coursework in stochastic modeling… well, something just isn’t right there. I won’t tackle what Kindred means by “theory-development,” as I’m not sure we’re talking about precisely the same thing, but I will note that getting a better grasp of theory and theorization is not the same thing as “theory-development.”

Anyway, I’ll spot a TKO to Kindred on most of the issues.


Challenges to Qualitative Research in the Age Of Big Data

Technically, “because I didn’t have observational data.” Working with experimental data requires only calculating means and reading a table. Also, this may be the most condescending comic strip about statistics ever produced.

The excellent Silbey at the Edge of the American West is stunned by the torrents of data that future historians will be able to deal with. He predicts that the petabytes of data being captured by government organizations such as the Air Force will be a major boon for historians of the future —

(and I can’t be the only person who says “Of the future!” in a sort of breathless “better-living-through-chemistry” voice)

 — but also predicts that this torrent of data means that it will take vastly longer for historians to sort through the historical record.

He is wrong. It means precisely the opposite. It means that history is on the verge of becoming a quantified academic discipline, for two reasons. The first is that statistics is, very literally, the art of discerning patterns within data. The second is that the history academics practice in the coming age of Big Data will not be the same discipline that contemporary historians are creating.

The sensations Silbey is feeling have already been captured by an earlier historian, Henry Adams, who wrote of his visit to the Great Exposition of Paris:

He [Adams] cared little about his experiments and less about his statesmen, who seemed to him quite as ignorant as himself and, as a rule, no more honest; but he insisted on a relation of sequence. And if he could not reach it by one method, he would try as many methods as science knew. Satisfied that the sequence of men led to nothing and that the sequence of their society could lead no further, while the mere sequence of time was artificial, and the sequence of thought was chaos, he turned at last to the sequence of force; and thus it happened that, after ten years’ pursuit, he found himself lying in the Gallery of Machines at the Great Exposition of 1900, his historical neck broken by the sudden irruption of forces totally new.

Because it is strictly impossible for the human brain to cope with large amounts of data, in the age of big data we will have to turn to the tools we’ve devised to solve exactly that problem. And those tools are statistics.

It will not be human brains that directly run through each of the petabytes of data the US Air Force collects. It will be statistical software routines. And the historical record that the modal historian of the future confronts will be one that is mediated by statistical distributions, simply because such distributions will allow historians to confront the data that appears in vast torrents with tools that are appropriate to that problem.

Onset of menarche plotted against years for Norway. In all seriousness, this is the sort of data that should be analyzed by historians but which many are content to abandon to the economists by default. Yet learning how to analyze demographic data is not all that hard, and the returns are immense. And no amount of reading documents, without quantifying them, could produce this sort of information.

This will, in one sense, be a real gift to scholarship. Although I’m not an expert in Hitler historiography, for instance, I would place a very real bet with the universe that the statistical analysis in King et al. (2008), “Ordinary Economic Voting Behavior in the Extraordinary Election of Adolf Hitler,” tells us something very real and important about why Hitler came to power that simply cannot be deduced from the documentary record alone. The same could be said for an example closer to (my) home, Chay and Munshi (2011), “Slavery’s Legacy: Black Mobilization in the Antebellum South,” which identifies previously unexplored channels for how variations in slavery affected the post-war ability of blacks to mobilize politically.

In a certain sense, then, what I’m describing is a return of one facet of the Annales school on steroids. You want an exploration of the daily rhythms of life? Then you want quantification. Plain and simple.

By this point, most readers of the Duck have probably reached the limits of their tolerance for such statistical imperialism. And since I am a member in good standing of the qualitative and multi-method research section of APSA (which I know is probably not much better for many Duck readers!), who has, moreover, just returned from spending weeks looking in archives, let me say that I do not think that the elimination of narrativist approaches is desirable or possible. First, without qualitative knowledge, quantitative approaches are hopelessly naive. Second, there are some problems that can only practically be investigated with qualitative data.

But if narrativist approaches will not be eliminated they may nevertheless lose large swathes of their habitat as the invasive species of Big Data historians emerges. Social history should be fundamentally transformed; so too should mass-level political history, or what’s left of it, since the availability of public opinion data, convincing theories of voter choice, and cheap analysis means that investigating the courses of campaigns using documents alone is pretty much professional malpractice.

The dilemma for historians is no different from the challenge that qualitative researchers in other fields have faced for some time. The first symptom, I predict, will be the retronym-ing of “qualitative” historians, in much the same way that the emergence of mobile phones created the retronym “landline.” The next symptom will be that academic conferences will in fact be dominated by the pedantic jerks who only want to talk about the benefits of different approaches to handling heteroscedasticity. But the wrong reaction to these and other pains would be a kneejerk refusal to consider the benefits of quantitative methods.


It’s All My Fault

One should not blog in anger. In an effort to make my points, I think I overstated my case and offended some people, which I did not intend to do. Wait. Isn’t that what blogging is all about? Maybe I did intend to do that.

Seriously, if I were to amend this, I would make a number of changes.

First, it is not that secondary sources are bad and inherently inferior to primary sources. They are absolutely necessary. There is a bit of a division of labor between historians and us, as we are more often looking at the forest rather than the trees, or, to mix ecological metaphors, trying not to get lost in the weeds. We can’t do as thorough a job as they do, especially if we take on broad subjects like Dan does. But to some degree, we need to chew our own food, especially when we are investigating micro-processes. That’s what this book was supposed to be doing, but didn’t. My point about hearsay is not about whether we do better interpretations of primary documents. It was more like a Xerox argument: the photocopy of a photocopy is worse than the original. As information becomes recycled, it loses its original meaning. And I use ‘primary documents’ liberally, not necessarily to convey the image of dusty archives. For instance, I expect Dan to have read the Edict of Nantes. And I have read the UN Charter.

Second, the book in question isn’t really the problem. I see this kind of sloppy qualitative work everywhere I look. I am very, very rarely impressed by the depth of empirical research in this business. It is always an afterthought to the theory. Books win prizes based on their first chapter, not chapters 3-7. But that’s a problem, isn’t it?

Third, 2×2 tables, when wielded by sure hands, are fine. I have come to this conclusion after remembering that I have one in a recent piece I did…..

Fourth, you have nothing to fear from me, Stephanie. I take bribes. I am self-righteous but also corrupt. Please forward your bank info to this address in Nairobi……


It’s All Our Fault

I’ve had it. Recently I was asked to review a book that will not be named for a journal that will not be mentioned. It was by an author with a pretty good reputation with an excellent press on a subject that I am well informed on. (I won’t mention names. Dan can take him or her to the woodshed later.) I thought I would be doing the field a service and forcing myself to read what I thought would be an entertaining book that I might not otherwise have the time to read. The problem: it is a f&ck*ng mess.

Actually, that isn’t the real issue. The real problem is that I am absolutely positive that this book will receive great reviews and probably win a prize. It has glowing blurbs on the back from luminaries in the field, blurbs that are entirely unjustified and indicate either 1) they have not read the book, or 2) they have read the book but are friends with the author, or 3) they have not read the book, are not friends with the author, but have all suffered major brain injuries within the last year. But it is the kind of thing that passes for good qualitative research in international relations right now. And that sucks.

I will be more specific, presenting what I see as the faults of this book, but which really characterize many if not most books in this vein. I will offer them in a positive light, as admonitions for young scholars to do better work, with enough profanity to capture my indignant rage and serious intent.

1) Do not be a bad historian. If you are going to do macro-historical work that relies on comparative case studies, be ready to read at least one goddamn primary document. I am really, really tired of seeing book after book that relies on secondary sources. This is academic hearsay. It is not admissible. And do not, under any circumstances, quote some historian’s conclusion as evidence for your argument. Get off your ass. Do the work. Historians’ work has all the problems that ours does. They are not the Pope.

If you write a book in international relations on a subject where the country’s official secrecy act is no longer in effect and you do not use primary sources, you have no excuse. And even if there is such a law, that probably means it is a relatively contemporary subject. People do have mouths. They can be interviewed. So unless your subject is how it feels to be part of a mass genocide or the politics of public policy towards the deaf and mute, this rule applies to you.

2) Do not be a statistician, much less a bad one. Show causality. The whole point of doing qualitative work, as opposed to statistical work, is usually to trace a process. So get out your pencil and trace it. Don’t simply engage in some kind of half-assed correlative argument that one factor is present when another factor is present, so you are right. We want to see not just the smoking gun, but the casings, the bullet, the body, and the hand on the trigger. This will probably require some reference to primary documents. See #1. If you don’t do that, you are just a statistician with a small N and no math skills.

3) Do not fall in love with your own ideas. This way your theory and evidence will match. Almost every book or article starts with an idea that is interesting to its originator, and the problem is that an initial idea is a hard one to break up with. Almost any initial hunch is wrong in some way, even if there is also probably something to it. My first book looked for a common partisan alignment on foreign policy across countries. Didn’t exist. My second book looked for the role that identity played in U.S. multilateralism. None.

But it is very clear when you read a lot of academic work that that love never dies, and authors will do anything to maintain the relationship. They will twist the truth, ignore obvious inconsistencies, or make excuses for their argument. Don’t do that. Marry. Get divorced. Marry a new trophy spouse. Let the initial idea take you into uncharted waters, because that is inevitably somewhere new, but also more interesting.

4) Do a proper literature review. Make sure you have exhausted all the different ways that someone might go about explaining your explanandum and deal with them. Decisively. Do not pretend they are not there. It is rude and also lacks academic integrity.

5) Avoid the two-by-two table. It is a common joke at academic talks that all the great arguments involve two-by-two tables. I am instantly skeptical of every piece I ever read with a 2×2. 90% of the time they are terrible. I think that qualitatively-oriented academics are sensitive about the criticisms they get for lacking parsimony and generalizability and seek to armor themselves by creating simplistic typologies instead of learning math. That is stupid. Embrace context or go to stats camp.

I do both quantitative and qualitative work, but my best work is the latter. We can complain all we want, and I have, about the dominance in the field of certain ‘isms’ and methodologies, but we have to bear part of the blame. They have a point about our fuzzy conclusions and lack of rigor. We do a lot of really bad work, and we have to get better.

This has to be a personal code. The fact that I am reviewing a book with one of the best presses in the business that makes all of these mistakes indicates that there is no professional incentive to do any of this. Only Dan checks people’s footnotes. It must come from your own sense of personal integrity. But I will be watching…..


Visualizing the Human Rights Issue Agenda

Part of the research project that is keeping me too busy to blog involves capturing, coding and visualizing the issue agenda for various transnational networks. Here is a visualization of the core human rights network on the World Wide Web, circa 2008.* (Nodes represent organizational websites and ties represent hyperlinks between them. Node size corresponds to in-degree centrality within the network.) You can click for a larger view.
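For readers curious about the mechanics, the node-sizing step can be sketched in a few lines: treat each hyperlink as a directed edge and size each node by its in-degree centrality, the share of other sites that link to it. The site names below are hypothetical stand-ins, not the actual 2008 crawl data.

```python
# Sketch of sizing nodes by in-degree centrality in a hyperlink
# network. The sites and links are invented examples.
from collections import defaultdict

# Each edge (a, b) means "site a hyperlinks to site b".
hyperlinks = [
    ("hrw.org", "amnesty.org"),
    ("oxfam.org", "amnesty.org"),
    ("careintl.org", "amnesty.org"),
    ("careintl.org", "oxfam.org"),
    ("amnesty.org", "hrw.org"),
]

nodes = {site for edge in hyperlinks for site in edge}
in_degree = defaultdict(int)
for _, target in hyperlinks:
    in_degree[target] += 1

# In-degree centrality: fraction of the other nodes linking in.
centrality = {n: in_degree[n] / (len(nodes) - 1) for n in nodes}

# Scale centrality into a node size for plotting.
node_size = {n: 300 + 2000 * centrality[n] for n in nodes}
```

A dedicated library such as networkx offers the same computation off the shelf; the point is only that the visual prominence of a site in the figure reflects how many peers link to it.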

It looks to me as if there are really two networks here: a human rights network and a development network, all tied together conceptually under the rubric of human rights. Whether this means that the human rights movement has been colonized by the development community or vice-versa is hard to say from this.

Now here is another visualization: of the human rights issue agenda, circa 2008, as represented on the same websites.** Here, nodes represent issues; ties between nodes represent co-occurrences of the same thematic issue on the same organization’s website.
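A minimal sketch of how such co-occurrence ties can be built: every unordered pair of issues appearing on the same organization's website contributes one unit of edge weight. The issue lists per organization below are invented stand-ins for the coded mission statements, not the actual 2008 data.

```python
# Sketch: build a weighted issue co-occurrence network from
# per-organization issue sets (hypothetical example data).
from collections import Counter
from itertools import combinations

org_issues = {
    "org_a": {"Repression", "Elections", "Impunity"},
    "org_b": {"Water", "Health Care"},
    "org_c": {"Repression", "Impunity"},
}

# Count each unordered pair of issues found on the same site.
edge_weights = Counter()
for issues in org_issues.values():
    for pair in combinations(sorted(issues), 2):
        edge_weights[pair] += 1
```

Here two organizations mention both “Repression” and “Impunity,” so that tie carries twice the weight of the others; in a drawn network, heavier ties would appear as thicker edges.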

One might think, given the predominance of development organizations in the human rights network, that economic and social rights would be front and center on the overall network’s issue agenda. But no:

1) Economic and social rights (e.g. “Water” or “Health Care”) are somewhat marginalized relative to civil and political rights issues like “Repression” or “Elections” (11.5% and 26% of the agenda, respectively). Economic and Civil-Political rights tend to cluster with one another, suggesting a division of labor among human rights organizations.

2) A number of new rights have been articulated that effectively cut across these archetypal categories and seem to serve as bridges between the older EcoSoc and CivPol categories. 15% of the total consists of cross-cutting issues like “Discrimination,” “Access to Information” or “Impunity.”

3) Fourth generation group rights (“Women,” “Children,” “Indigenous,” etc.) are also prominently represented on the agenda (14.5%).

4) However, what’s most interesting to me is the proliferation of “rights” that fit none of these categories (the white nodes). 31% of the issues on the agenda fall into the “Other” category, which is composed of roughly four types of issues: those relating to humanitarian law (“Civilians,” “Crimes Against Humanity,” “Humanitarian Intervention”) or to war more broadly (landmines, occupation, militarization); those relating to technology (“Internet,” “Bioethics”); those referring not to human rights problems but rather to the processes activists use (“Awareness-Raising,” “Research,” “Human Rights Education”); and a miscellaneous category that includes things like “Drugs” and the “Environment.” The biggest proportion of the “Other” category, however, has to do with war and war crimes, confirming a significant blending of human rights and humanitarian law.

Other thoughts, reactions or critiques welcome. Thanks to Alex Montgomery for helping with the visualizations, and Jim Ron for helping with the code scheme.

*We identified 41 prominent human rights organizations, as operationalized using a co-link analysis tool called IssueCrawler, with the Amnesty directory, the Choike Human Rights Directory and the UDHR60 NGO links page as starting points.
**We captured mission statements and “what we do” lists from each organization and coded them at QDAP.


Beyond Qual and Quant

PTJ has one of the most sophisticated ways of thinking about different positions in the field of International Relations (and, by extension, the social sciences), but his approach may be too abstract for some. I therefore submit for comments the “Political Science Methodology Flowchart” (version 1.3b).

Note that any individual can take multiple treks down the flowchart.

Of Quals and Quants

Qualitative scholars in political science are used to thinking of themselves as under threat from quantitative researchers. Yet qualitative scholars’ responses to quantitative “imperialism” suggest that they misunderstand the nature of that threat. The increasing flow of data, the growing availability of computing power and easy-to-use software, and the relative ease of training new quantitative researchers make the position of qualitative scholars more precarious than they realize. Consequently, qualitative and multi-method researchers must not only stress the value of methodological pluralism but also what makes their work distinctive.

Few topics are so perennially interesting for the individual political scientist and the discipline as the Question of Method. This is quickly reduced to the simplistic debate of Quant v. Qual, framed as a battle of those who can’t count against those who can’t read. Collapsing complicated methodological positions into a single dimension obviously does violence to the philosophy of science underlying these debates. Thus, even divisions that really affect other dimensions of methodological debate, such as those that separate formal theorists and interpretivists from case-study researchers and econometricians, are lumped into this artificial dichotomy. Formal guys know math, so they must be quants, or at least close enough; interpretivists use language, ergo they are “quallys” (in the dismissive nomenclature of Internet comment boards), or at least close enough. And so elective affinities are reified into camps, among which ambitious scholars must choose.

(Incidentally, let’s not delude ourselves into thinking that multi-method work is a via media. Outside of disciplinary panels on multi-method work, in their everyday practice, quantoids proceed according to something like a one-drop rule: if a paper contains even the slightest taint of process-tracing or case studies, then it is irremediably quallish. In this, then, those of us who identify principally as multi-method stand in relation to the qual-quant divide rather as Third Way folks stand in relation to left-liberals and to all those right of center. That is, the qualitative folks reject us as traitors, while the quant camp thinks that we are all squishes. How else to understand EITM, which is the melding of deterministic theory with stochastic modeling but which is not typically labeled “multi-method”?)

The intellectual merits of these positions have been covered better elsewhere (as in King, Keohane, and Verba 1994; Brady and Collier’s Rethinking Social Inquiry; and Patrick Thaddeus Jackson’s The Conduct of Inquiry in International Relations). Kathleen McNamara, a distinguished qualitative IPE scholar, argues against the possibility of an intellectual monoculture in her 2009 article on the subject. And I think that readers of the Duck are largely sympathetic to her points and to similar arguments. But even as the intellectual case for pluralism grows stronger (not least because the standards for qualitative work have gotten better), we should realize that it is incontestable that quantitative training makes scholars more productive (in the simple articles-per-year metric) than qualitative training does.

Quantitative researchers work in a tradition that has self-consciously made the transmission of the techne of data management, of data collection, and of data analysis vastly easier not only than its case-study, interpretivist, and formal counterparts but even than quant training a decade or more ago. By techne, I do not mean the high-concept philosophy of science. All of that is usually about as difficult and as rarefied as the qualitative or formal high-concept readings, and about as useful to the completion of an actual research project–which is to say, not very, except insofar as it is shaped into everyday practice and reflected in the shared norms of the average seminar table or reviewer pool. (And it takes a long time for rarefied theories to percolate. That R^2 continues to be reported as an independently meaningful statistic even 25 years after King (1986) is shocking, but the Kuhnian generational replacement has not yet really begun to weed out such ideological deviationists.)

No, when I talk about techne, I mean something closer to the quotidian translation of the replication movement, which is rather like the business consultant notion of “best practices.” There is a real craft to learning how to manage data, and how to write code, and how to present results, and so forth, and it is completely independent of the project on which a researcher is engaged. Indeed, it is perfectly plausible that I could take most of the thousands of lines of data-cleaning and analysis code that I’ve written in the past month for the General Social Survey and the Jennings-Niemi Youth-Parent Socialization Survey, tweak four or five percent of the code to reflect a different DV, and essentially have a new project, ready to go. (Not that it would be a good project, mind you, but going from GRASS to HOMOSEX would not be a big jump.) In real life, there would be some differences in the model, but the point is simply that standard datasets are standard. (Indeed, in principle and assuming clean data, if you had the codebook, you could even write the analysis code before the data had come in from a poll–which is surely how commercial firms work.)
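The swap-the-DV point can be made concrete. Below is a minimal, hypothetical sketch (not the author’s actual code) of an analysis step parameterized by its dependent variable. GRASS, HOMOSEX, and DEGREE are real GSS variable mnemonics, but the toy rows and the simple tabulation are invented for illustration:

```python
# Sketch: once data management is parameterized, swapping the dependent
# variable is a one-argument change. Toy data stands in for a cleaned
# standard dataset; the "model" here is just a cross-tabulation.

from collections import defaultdict

def run_tabulation(rows, dv, by="DEGREE"):
    """Share of dv == 1 within each level of `by` (listwise-deleting missing)."""
    counts = defaultdict(lambda: [0, 0])  # level -> [hits, total]
    for row in rows:
        if row[dv] is None:               # drop missing values on the DV
            continue
        cell = counts[row[by]]
        cell[0] += row[dv]
        cell[1] += 1
    return {level: hits / total for level, (hits, total) in counts.items()}

# Toy "cleaned" rows standing in for the GSS.
rows = [
    {"DEGREE": "hs",      "GRASS": 1, "HOMOSEX": 0},
    {"DEGREE": "hs",      "GRASS": 0, "HOMOSEX": 0},
    {"DEGREE": "college", "GRASS": 1, "HOMOSEX": 1},
    {"DEGREE": "college", "GRASS": 1, "HOMOSEX": None},
]

# "Going from GRASS to HOMOSEX" is literally a change of argument:
grass_by_degree = run_tabulation(rows, dv="GRASS")
homosex_by_degree = run_tabulation(rows, dv="HOMOSEX")
```

The substantive model would of course differ in real life, but the workflow point stands: once cleaning and analysis are functions of variable names, a “new project” is a change of argument.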

There is nothing quite like that for qualitative researchers. Game theory folks come close, since they can tweak models indefinitely, but of course they then have to find data against which to test their theories (or not, as the case may be). Neither interpretivists nor case-study researchers, however, can automate the production of knowledge to the same extent that quantitative scholars can. And neither of those approaches appears to be as easily taught as quant approaches.

Indeed, the teaching of methods shows the distinction plainly enough. Gary King makes the point well in an unpublished paper:

A summary of these features of quantitative methods is available by looking at how this information is taught. Across fields and universities, training usually includes sequences of courses, logically taken in order, covering mathematics, mathematical statistics, statistical modeling, data analysis and graphics, measurement, and numerous methods tuned for diverse data problems and aimed at many different inferential targets. The specific sequence of courses differ across universities and fields depending on the mathematical background expected of incoming students, the types of substantive applications, and the depth of what will be taught, but the underlying mathematical, statistical, and inferential framework is remarkably systematic and uniformly accepted. In contrast, research in qualitative methods seems closer to a grab bag of ideas than a coherent disciplinary area. As a measure of this claim, in no political science department of which we are aware are qualitative methods courses taught in a sequence, with one building on, and required by, the next. In our own department, more than a third of the senior faculty have at one time or another taught a class on some aspect of qualitative methods, none with a qualitative course as a required prerequisite.

King has grown less charitable toward qualitative work than he was in KKV. But he is on to something here: if every quant scholar has gone through the probability theory → OLS → MLE → {multilevel, hazard, Bayesian, …} sequence, what is the corresponding path for a “qually”? What could such a path even look like? And who would teach it? What books would they use? There is no equivalent of, say, Long and Freese for the qualitative researcher.

The problem, then, is that it is comparatively easy to make a competent quant researcher. But it is very hard to train up a great qualitative one. Brad DeLong put the problem plainly in his obituary of J.K. Galbraith:

Just what a “Galbraithian” economist would do, however, is not clear. For Galbraith, there is no single market failure, no single serpent in the Eden of perfect competition. He starts from the ground and works up: What are the major forces and institutions in a given economy, and how do they interact? A graduate student cannot be taught to follow in Galbraith’s footsteps. The only advice: Be supremely witty. Write very well. Read very widely. And master a terrifying amount of institutional detail.

This is not, strictly, a qual problem. Something similar happened with Feynman, who left no major students either (although note that this failure is regarded as exceptional). And there are a great many top-rank qualitative professors who have grown their own “trees” of students. But the distinction is that the qualitative apprenticeship model cannot scale, whereas you can easily imagine a very successful large-lecture approach to mastering the fundamental architecture of quant approaches or even a distance-learning class.

This is among the reasons I think that the Qual v. Quant battle is being fought on terms that are often poorly chosen, both from the point of view of the qualitative researcher and from that of the discipline. Quant researchers will simply be more productive than quals, and that differential will continue to widen. (This is a matter of differential rates of growth; quals are surely more productive now than they were, and their productivity growth will accelerate as they adopt more computer-driven workflows as well. But there is no comparison between the way in which computing-power increases have affected quallys and the way they have made it possible for even a Dummkopf like me to fit a practically infinite number of logit models in a day.) This makes revisions easier, by the way: a quant guy with domesticated datasets can redo a project in a day (unless his datasets are huge), but the qual guy will have to spend that long just pulling books back off the shelves.

The qual-quant battles are fought over the desirability of the balance between the two fields. And yet the more important point has to do with the viability, or perhaps the “sustainability,” of qualitative work in a world in which we might reasonably expect quants to generate three to five times as many papers in a given year as a qual guy. Over time, we should expect this to lead first to a gradual erosion of the qually population and then to a sudden collapse.

I want to make plain that I think this would be a bad thing for political science. The point of the DeLong piece is that a discipline without Galbraiths is a poorer one, and I think the Galbraiths who have some methods training would be much better than those who simply mastered lots and lots of facts. But a naive interpretation of productivity ratios by university administrators and funding agencies will likely lead to qualitative work’s extinction within political science.


“Also, It Turns Out Mubarak is a Cylon.” #BSG #Egypt @RT “So Say We All!”

I was fascinated to learn while working on my Battlestar Galactica “research project” that Adama’s quote from the scene above had been floating around the Internet for some time during the Egyptian Revolution. The statement “This quote now applicable to Egypt” appeared in a Reddit thread and was reposted on at least one Facebook page, quickly attracting 6,000 likes and over 1,800 comments, while like-minded tweets exploded across cyberspace. This one was featured at the Huffington Post. Here are some other fun examples.

The book editors for whom we’re developing this working paper asked us to look at the “intertext” between the series and political understandings in the actual world, so for our paper it was sufficient to acknowledge this phenomenon.

But as a qualitative analyst I decided to take a closer, more systematic look at a sample of these comments and tweets. I was interested in the extent to which BSG metaphors engendered useful political commentary on civil-military relations – precisely what you would hope if Jutta Weldes is correct in arguing that “state action is made common-sensical through popular culture.”

I discovered something more nuanced: the answer to that question seems to depend greatly on which new media tool the data came from.

The pie charts you see below show the results of a student assistant and me coding the tweets and comments for these attributes, disaggregated by source. We analyzed comments from three sources: Twitter, Facebook, and Reddit. The Reddit and Facebook comments were easy enough to capture with a little technical help – thanks, Alex.

Tweets were trickier because Google doesn’t index them. Luckily my partner Stuart Shulman has invented a tool for capturing live Twitter feeds, and he happens to be sitting on a searchable archive of over a million tweets from #Cairo and #Egypt. We used his tool, DiscoverText, to search those tweets for the keywords “BSG” “Battlestar” “Galactica” and “Adama” and got back a small but interesting set of results to combine with the Reddit and Facebook comments.

DiscoverText also allows you to tag and sift through text data you gather, so last weekend we went through a total of 77 tweets, 383 Reddit comments and 966 unique Facebook comments. (The FB page says there are 1800 or so, but a lot of them are duplicates. Fortunately DiscoverText also contains a de-duping tool so we were able to eliminate those entirely.)
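For readers curious what those two steps amount to mechanically — exact de-duplication of captured comments, then tallying hand-applied codes by source — here is a minimal sketch in Python. The actual work was done in DiscoverText; the comments and code labels below are invented for illustration:

```python
# Sketch: de-duplicate captured comments, then tally hand-applied codes
# within each source (the raw numbers behind per-source pie charts).

from collections import Counter

comments = [
    {"source": "facebook", "text": "So say we all!", "code": "validation"},
    {"source": "facebook", "text": "So say we all!", "code": "validation"},  # duplicate
    {"source": "reddit", "text": "Egypt has a separate army and police.", "code": "original comment"},
    {"source": "twitter", "text": "RT: This quote now applicable to Egypt", "code": "validation"},
]

# De-duplicate on (source, text), keeping the first occurrence of each.
seen = set()
unique = []
for c in comments:
    key = (c["source"], c["text"])
    if key not in seen:
        seen.add(key)
        unique.append(c)

# Tally codes within each source.
tallies = {}
for c in unique:
    tallies.setdefault(c["source"], Counter())[c["code"]] += 1
```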

You can see a couple of things right away. First, you find less diversity among the tweets: they basically fall into just four code categories, whereas the range of commentary in the FB and Reddit threads is wider. But second, the tweets and FB comments have something in common: they are primarily composed of mindless validations of the original quote, whereas the Reddit thread contains many more original, substantive comments and even discussion.

In other words, as this bar chart makes a little clearer, the social-media reaction to this quote was to retweet it or write “so say we all” – similar to the practice of clicking on a form letter to a Congressperson rather than writing an original substantive remark about a political issue. However, on the Reddit thread, commenters were not only more likely to point out that it’s not clear how applicable the quote is to Egypt, but also more likely to use the quote as a jumping-off point for broader discussions of Egypt, of civil-military relations, of the nuances of Adama’s messaging – in other words, far more of these were “original comments” generating discussion among commenters, rather than simple validations of the original poster’s argument. That’s pretty interesting, especially given recent claims that blogs and blog commenting are going the way of the dinosaur in favor of social media as a platform for deliberative discourse.

DiscoverText also makes it easy to drill down into specific categories of text. Of the truly original, deliberative comments (for example), you can see some interesting conversations develop. As noted, in contrast to the mindless re-tweeters, the more critical thinkers argued over the applicability of the quote to the situation in Egypt.

Not sure how this applies to Egypt since they have a separate military and police. Which, coincidentally, the military has sided with the people while the police remain loyal to Mubarak. Cool quote but nothing to do with Egypt

This quote doesn’t apply to Egypt in any way. The military forces in Egypt are mostly staffed by conscription, with mandatory service of 1-3 years for citizens (3 if you’re uneducated, 2 w/ high school degree, 1 with college degree).The protesters are cheering “We want the Army! We want the Army!” because, guess what, they are the Army.

I suggest you re-read what Adama is saying. If you think this is about “hailing the police” you are way off. Adama’s point is that there has to be a balance in the state separation of forces. That is the only thing saving the egyptians as the military appear to be unwilling to crack down on the protesters.

This led to two sets of wider conversations, one about Egypt:

The people distrust, resent and hate the police due to decades of corruption, violence and abuse of power. They have no such feelings about the military and largely regard them to be impartial, helpful and for the people. Unfortunately since Mubarak’s inflammatory speech it seems the military are actually still backing him and have also managed to position themselves very well amongst the crowds.

I think the top military commanders are being very cautious at this point just like all the Western governments because so much is up in the air. If they choose the losing side, they might pay with their lives. If the western governments choose the losing side, they might make an enemy of a very powerful player in their regional interests (Israel, Iran, etc.) and with control over the flow of oil (Suez Canal).

Better the devil we know in the current regime, a transition to full democracy will allow the popular fundamentalist Brotherhood terror group to take power. ‘I prefer to deal with the probable’ (Commander Helena Cain). This is not clear cut….. don’t be fooled like the 12 colonies.

… and one about civil-military relations.

It’s very poetic, but I think the real distinction isn’t so much about between fighting enemies and serving the people. Both in theory are actually doing that. The difference is more in the nature of the enemy: The military fights external enemies, the police, the internal enemies.

The purpose of the police has never been to serve or protect the people. They are and have always been a means by which the state can impose its will on the people. This is clear simply by reading the writings of the elites who control the state — they admit it freely. The modern myth that the police are somehow the noble champions of justice for the little man can be shattered by merely being black, or a woman, or transexual, or gay, or any other minority.

The military is expected to protect the physical borders, the police to protect agreed upon immaterial borders within the physical borders. When these rather orthogonal causes are mixed, then it’s likely you will hit a border whatever you do, then the state has become your enemy, despite both the military and the police are employed by you, the citizen.

Alternatively, some commenters discussed the origin of the quote itself, and some got off on tangents about the nature of Cylon resurrection, the value of BSG relative to Star Trek or Firefly, or how to quantify the exact nerd quotient on display in the comment thread. But the most interesting arguments to me (and perhaps to Iver Neumann and Nicholas Kiersey, who are running the BSG project) were the ones where people bickered over whether the notion of BSG as an “intertext” was valid at all: do science fiction shows as parables really help us understand real-world politics or do they merely distract?

Some quotes that received the code “It’s Just A Show”:

CLEARLY some of you losers desperately need to get a life…. or at the very least serious help from a mental health professional….. HE IS A FICTIONAL CHARACTER IN A TELEVISION SERIES NOTHING MORE..

What a load of absolute horseshit. Go and actually read about what is happening in Egypt instead of wasting your time with stuff like this.

Battlestar Galactica quotes are inappropriate for deadly-serious, real life sitatuations.

It’s must easier to accept platitudes and pop culture references than it is to think critically.

Some quotes that received the code “BSG <> World Politics”:

This is why the series was so great. It was one of the few sci-fi shows that truly reflected and touched on relevant ideas and issues of our day.

Almost every good scifi I’ve known takes real-world problems, and puts them into another light so you can look at them differently, and possibly see something entirely new. It can offer an incredible commentary on many aspects of society.

So Say We All :) … It doesn’t matter what genre or if this statement is from a real world instance- it doesn’t mean it doesn’t hold truth! And to one of the above posters- if you cannot see Cmdr. Adama’s words (however fictious) is a perfect example of what is happening in the real world- then YOU need to get a life!

I’m not sure where readers come down in this debate, but I will say that in the paper we describe a variety of ways in which shows like BSG function to mediate real-world socio-political relations: drawing on, reflecting and structuring civil-military debates, serving as a social lubricant for human security discussions across the civil-military divide, and even problematizing certain sacred cows in human security discourse. [H/T to Jason Sigger for pointing me to this exchange and this one, for example.] As we ended up arguing in the article:

“These real-world conversations – whether about US military affairs, Middle Eastern revolutions, or just warrioring – are at times infused with Battlestar Galactica references, demonstrating the show’s relevance to deliberative discourse about the civil-military relationship…”

But just how deliberative may depend on the context.

Replication data for this study is available at the Dataverse Project. Comments on our working draft are very welcome.

[cross-posted at Lawyers, Guns and Money]


Down With Negativity!

I am no expert on American political campaigns, and I do not know the literature on political advertisements. I have, however, done a fair amount of qualitative research aimed at measuring the meaning of things in a reliable, replicable way. So I’m curious to know who is using such a method to keep track of “negativity” in campaign ads?

Someone must be. Because the candidates both argued tonight not just that their opponent’s ads are perceived by others to be negative (a poll-based description of people’s impressions rather than the ads themselves) or that their opponent’s ads actually are negative (a subjective claim they can just support anecdotally) but that they know exactly how negative their opponent’s ads are.

Obama claimed that John McCain’s ads are “100% negative.” (Does he mean each ad is 100% negative, or that 100% of the ads are at least 1% negative?) Who has coded all of McCain’s ads to determine their negativity according to some reliable rubric?

McCain made even more sweeping claims: “My opponent has run the most negative campaign in history and I can prove it.” This “proof” would require not only demonstrating absolute negativity in ads but coding all comparable ads throughout American history to demonstrate relative negativity.

These are empirical (and empirically falsifiable) statements about the content of the ads themselves. But neither candidate cites a source. Who is keeping track of this, and how rigorous, I’m wondering, are the methods used? How does one measure “negativity” in ads such that coders of different political persuasions, working independently, would code the same ad the same way a reasonable amount of the time? What’s the actual definition of a negative ad, and what does the codebook look like? Which candidate came closer to being right on this question?
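For what it’s worth, the standard answer to the coders-of-different-persuasions question is an intercoder-reliability statistic such as Cohen’s kappa, which corrects raw agreement for the agreement two coders would reach by chance. A minimal sketch, with ratings invented purely for illustration:

```python
# Sketch: Cohen's kappa for two coders rating the same set of ads.
# Kappa of 1 means perfect agreement; 0 means no better than chance.

from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for chance agreement."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes from two coders for eight ads.
coder_a = ["neg", "neg", "pos", "neg", "pos", "pos", "neg", "pos"]
coder_b = ["neg", "neg", "pos", "pos", "pos", "pos", "neg", "neg"]
kappa = cohens_kappa(coder_a, coder_b)
```

A codebook good enough to ground a claim like “100% negative” would need to produce kappas well above chance across coders who disagree politically.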


© 2021 Duck of Minerva
