What We Really Need is a Slice of Humble Pie

27 May 2015, 1116 EDT

This is a guest post by former Duck of Minerva blogger Daniel Nexon. The views that he expresses here should not be construed as representing those of the International Studies Association, International Studies Quarterly, or anyone with an ounce of sanity.

We now have a lot of different meta-narratives about alleged fraud in “When Contact Changes Minds: An Experiment in the Transmission of Support for Gay Equality.” These reflect not only different dimensions of the story, but also the different interests at stake.

One set concerns confirmation bias and the left-leaning orientations of a majority of political scientists. At First Things, for example, Matthew J. Franck contrasts the reception of the LaCour and Green study (positive) with that of Mark Regnerus’ finding of inferior outcomes for children of gay parents (negative). There’s some truth here. Regnerus’ study was terminally flawed; LaCour and Green’s study derived, most likely, from fraudulent data. Still, one comported with widespread ideological priors in the field, while the other did not. That surely shaped their differential reception. But so did the startling strength of LaCour and Green’s findings, as well as the way they cut against conventional wisdom on the determinants of successful persuasion.

We might describe another as “science worked.”

This narrative sometimes strays into the triumphalist: rather than exposing problems with the way political science operates, the scandal shows how the discipline is becoming more scientific and thus more able to catch—and correct—flawed studies. Again, there’s something to this. To the extent that political scientists utilize, say, experiments, that opens up the possibility not only of creating fraudulent experimental data but also of uncovering such fraud.

A third, closely related to the previous one, stresses the need for broadening and deepening data access and transparency in the field. Over at Mischiefs of Faction, Jonathan Ladd provides a useful reflection on this issue. Along the way, he dispenses with some other framings. As he writes, “the case raises no interesting ethical issues” and “it tells us nothing interesting about political science research methodologies.” He concludes:

[T]his leads me to full replication. This is where a new researcher collects fresh evidence to test the causal claim (or noncausal empirical pattern) in a previous project. This is the most useful type of audit of results that can happen in any evidence-based discipline. Broockman and Kalla were attempting a full replication when they first encountered problems, a chain of events that eventually led them to discover the fraud. Full replications can catch problems at any stage of the original study, whether in data collection, coding, or analysis. They can also help expose all types of fraud. And in rare cases where the fraud can’t be proven, if the result is wrong, replications can show the result to be a clear outlier.

In conclusion, political science, like other social sciences and the natural sciences as well, is constantly trying to reduce good-faith errors, but it proceeds on a presumption that no one is blatantly fabricating. As Kieran Healy points out, “Science is often bitterly competitive but it depends on honesty. It is not set up to weed out liars.” Switching to a norm where researchers didn’t trust each other’s good faith would be very costly.

Luckily, the best procedures for reducing good-faith research errors are also the best for catching bad-faith research errors: reanalysis and full replication. At the urging of Gary King and many others, political science has been moving in this direction for years. This sad event should spur us to continue working to make replication a major part of political science.

I have some reservations about whether we face a comparable crisis of reanalysis and replication in narrative-centric research. The issues strike me as overlapping, but not perfectly or completely. Regardless, Jon does a nice job of separating out concerns about fraud, data access, peer review, and replication.

A few posts below this one, Steve Saideman focuses on a different narrative: what this is all supposed to tell us about co-authoring and mentorship. The quick answer is “not very much,” and hence we should avoid retroactive fixes that address no clear systemic problem.

The point really is that fire-alarm forms of oversight are largely reactive and public. Someone notices a problem that already happened, complains, and then folks react. That this system is in place serves as a deterrent insofar as a person’s academic career is trashed if they do something that activates the alarm.

If we used police-patrol oversight–constant patrolling and monitoring–we might be better able to deter, but at the cost of much time and money (grant money for profs to accompany students while they are doing field work?). This kind of oversight can be quieter (or not) and more preventative.

There’s another version of this story that requires more discussion than I intend to provide here: how much social ties and networks still shape the allocation of academic capital in the discipline. This is a fact of life, but one worth remembering.

From my perspective, one of the most interesting meta-narratives concerns the “gap” between scholars, practitioners, and the general public. Dan Drezner writes:

If academics try too hard to demonstrate impact in their research, the incentives can get skewed. The social world is a ridiculously messy and complex place, but generating results that say “it’s complex” or “it’s complicated” or “it really depends” puts most audiences to sleep. The way to make policymakers, the public and even fellow academics sit up and take notice about research is if the findings are counterintuitive and significant. Social scientists dream of getting this kind of result. The problem comes if the dream causes them to fudge the findings.

But, I think, this understates the problem. Contra Gary King, we really do have “arguments.” Some of these arguments are better substantiated—with respect to particular standards—than others. But most of what we find is, at best, provisional. Individual studies seldom provide definitive evidence. This isn’t just a matter of better articulating confidence intervals, of taking seriously the fallibilism that many political scientists invoke when talking about epistemology, or even of the potential problems posed by dealing with interactive kinds. At least in the study of international relations, we face basic issues involving the quality of data, the interpretations that go into identifying and coding that data, the theory-laden character of how we parse it, and so forth.

Knowledge in our field is social in at least two respects. It is a product of the social contexts and processes that produce it. But it is also social in the sense that it results from the accumulation of local efforts, research, and publications. At the same time, the basic incentive structures of academia remain individualized: acquiring status and prestige, and getting and retaining paid positions, both within and outside of academia.

This comes together in pernicious ways precisely because it encourages us to push our studies as providing definitive answers about complex political dynamics. I’ve watched this dynamic start to take over academic blogging: the move from blogging as a medium for engaging in ongoing conversations within and outside of the academy to blogging as a distribution platform for publicizing research. Obviously, these two activities have always coexisted. And there’s little wrong with what we used to ironically term “shameless self-promotion.” But I worry that the pendulum has swung increasingly toward providing answers that, quite honestly, no individual piece of research is really capable of providing.

The solution, I think, does not reduce to refusing to reward people for “finding the incredible.” The problem obtains just as much in proving the banal. Back in 2010, Rob Farley and Charli Carpenter discussed the problems that obtain when political scientists try too hard to play “men in white lab coats” (video) for audiences that want them to adopt that role. In that sense, the evolution of political-science blogging is a microcosm of ongoing risks in the field that strike me as now reaching critical mass.

At the end of the day, I’m much less concerned about a study being exposed as a fraud—an example of the system more or less working—than about the accumulated dangers of deliberately or inadvertently claiming that we know more than we really do. The challenge involves balancing that danger against our advantages as scholars. These advantages derive from our methods, modes of inquiry, ability to devote ourselves to accumulating knowledge, and the (often) intellectual and everyday distance we enjoy from political practice.

I have no solution to this challenge. Except to extend to “outsiders” more of the same humility that we practice amongst ourselves.

These ramblings are cross-posted at Dan Nexon’s individual blog.