Talking Academic Journals: Collecting Data

22 April 2013, 1015 EDT

Note: this is the first in what I hope will be a series of posts opening up issues relating to journal processes for general discussion by the international-studies community.

Although many readers already know the relevant information, let me preface this post with some context. I am the incoming lead editor of International Studies Quarterly (ISQ), which is one of the journals in the International Studies Association family of publications. We are planning, with PTJ leading the effort, some interesting steps with respect to online content, social media, and e-journal integration–but those will be the subject of a later post. I have also been rather critical of the peer-review process and of the fact that we don’t study it very much in International Relations.

The fact is that ISQ by itself–let alone the collection of ISA journals and the broader community of cognate peer-reviewed publications–is sitting on a great deal of data about the process. Some of this data, such as the categories of submissions, is already in the electronic submission systems–but it isn’t terribly standardized. Many journals now collect information about whether a piece includes a female author. Given some indications of subtle, and consequential, gender bias, we have strong incentives to collect this kind of data.

But what, exactly, should we be collecting?

Demographic Data: To begin with, it strikes me that any data we collect about authors should also be collected for reviewers. If we want to better understand the effect–or lack thereof–of categorical attributes on the peer-review process, then we need to know about reviewers.

I am pretty confident that we should be collecting more granular data about the gender of authors. Whether one of the authors is a woman provides important data, but there’s no reason we can’t record, for example, MFM or FFF for tri-authored papers. Following current ISA conference guidelines, the specific author query is likely to look something like “Female/Male/Other/Decline to Answer.” Indeed, because demographic data can be sensitive, these efforts require a “decline to answer” option.
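
To make the encoding concrete, here is a minimal sketch of how a submission system might derive such a composition code from an ordered author list. The field names and single-letter codes are my own illustration, not any actual system’s schema.

```python
# A minimal sketch of deriving an ordered composition code from a
# submission's author list. Field names and single-letter codes are
# illustrative, not any actual submission system's schema.

GENDER_CODES = {
    "Female": "F",
    "Male": "M",
    "Other": "O",
    "Decline to Answer": "D",
}

def gender_composition(authors):
    """Return an ordered code string, e.g. 'MFM' for a tri-authored paper."""
    return "".join(GENDER_CODES[author["gender"]] for author in authors)

submission = [
    {"name": "First Author", "gender": "Male"},
    {"name": "Second Author", "gender": "Female"},
    {"name": "Third Author", "gender": "Male"},
]
print(gender_composition(submission))  # -> "MFM"
```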

Other straightforward data–which we tend to have anyway–includes citizenship and/or country of residence. My initial thought was that we should collect race and ethnicity data, but I’m starting to see how daunting this endeavor will be for journals with significant non-American submissions. Both the kinds of answers and their meanings vary from country to country. Journals could develop conditional survey questions, but that doesn’t solve the fundamental problem of which options to offer to which respondents. For example, should US citizens receive the 2010 US census options?
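
To give a sense of what a conditional question might look like in practice, here is an illustrative sketch. The country codes and category lists are placeholders; deciding which real options to offer is exactly the unsolved problem.

```python
# An illustrative sketch of a country-conditional ethnicity question.
# The category lists and country codes are placeholders: choosing the
# real options is exactly the unsolved problem described above.

ETHNICITY_OPTIONS = {
    # Roughly the 2010 US census race categories, for US respondents
    "US": [
        "White", "Black or African American",
        "American Indian or Alaska Native", "Asian",
        "Native Hawaiian or Other Pacific Islander",
        "Two or More Races",
    ],
    # Every other country would need its own, locally meaningful list
}

# Always offered, whatever the country
FALLBACK = ["Self-describe (free text)", "Decline to Answer"]

def ethnicity_question(country_code):
    """Return the answer options shown to a respondent from this country."""
    return ETHNICITY_OPTIONS.get(country_code, []) + FALLBACK

print(ethnicity_question("US"))  # census categories plus fallbacks
print(ethnicity_question("BR"))  # no list defined yet, fallbacks only
```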

And what about LGBTQ data? As astute readers will have noted, ISA currently allows an “other” option for conference registrants. But this doesn’t necessarily provide the best option for those who self-identify in non-heteronormative terms.

Citation Data: The field has started to make use of the wealth of data provided by citations in published papers, but what about unpublished papers? This would provide interesting information about the field in general, and it might tell us something important about differences among initial submissions, published pieces, and pieces that don’t survive the peer-review process.
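
As a rough illustration of what one such comparison might involve, here is a sketch measuring how much of a final bibliography was added after initial submission. The normalized citation keys are a hypothetical format; real reference matching would be messier.

```python
# A sketch of one such comparison: how much of a final bibliography was
# added after initial submission. The normalized citation keys are a
# hypothetical format.

def citation_turnover(submitted_refs, published_refs):
    """Fraction of the published bibliography absent from the first draft."""
    submitted, published = set(submitted_refs), set(published_refs)
    if not published:
        return 0.0
    return len(published - submitted) / len(published)

first_draft = {"waltz1979", "wendt1992"}
final_version = {"waltz1979", "wendt1992", "keohane1984"}
print(citation_turnover(first_draft, final_version))  # ~0.33
```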

Oddball Data: Does the local time of submission correlate with reviewer decisions, e.g., is there a “haven’t eaten lunch effect” that might be discernible underneath admittedly problematic data? What about other temporal factors, such as time of year? Does formatting correlate with publishing outcomes? There seem to be some interesting options here.
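
Purely as a sketch of how one might eyeball the lunch effect, assuming we had reviewer recommendations stamped with the local hour of submission (a hypothetical data format):

```python
# A back-of-the-envelope sketch of the "haven't eaten lunch" check:
# tabulate the share of favorable recommendations by the local hour at
# which the review was submitted. The (hour, recommendation) pairs are
# a hypothetical data format.

from collections import defaultdict

def favorable_rate_by_hour(reviews):
    """reviews: iterable of (local_hour, recommendation) pairs."""
    counts = defaultdict(lambda: [0, 0])  # hour -> [favorable, total]
    for hour, recommendation in reviews:
        counts[hour][1] += 1
        if recommendation in ("accept", "revise"):
            counts[hour][0] += 1
    return {hour: fav / total for hour, (fav, total) in sorted(counts.items())}

reviews = [(11, "reject"), (11, "revise"), (14, "accept")]
print(favorable_rate_by_hour(reviews))  # {11: 0.5, 14: 1.0}
```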

What do you all think?