This brief post started life as a comment on a Facebook discussion thread about peer-reviewing practices, but I thought it might deserve a wider readership. The question was raised: is it kosher for a journal editor to request suggestions for good reviewers from the author of the manuscript? The general consensus, with which I agree, was that it’s fine to request those names: editors are always looking for qualified reviewers, and the author’s list might include names the editor hadn’t thought of. Of course, the editor need not be bound by that list, and shouldn’t be, but sometimes people in a subfield (or a sub-subfield) know the specific lay of their part of the intellectual landscape better than an editor does.
That said, this is the kind of thing that can easily be abused, as people game the system and suggest the reviewers most likely to give their manuscript a thumbs-up. And the peer review system is a creaky beast in any event, so I thought I’d lay down a few imperative commands to journal editors, mainly to provoke discussion but also to summarize concisely my own experience as a journal editor, author, and reviewer:
In his latest post, PTJ moves us past the worst critiques of “rational choice theory” and focuses on a few more nuanced concerns.1 I’m glad to see the conversation progressing, and this type of exchange is one of the things I love most about academic blogging. However, I find some of PTJ’s arguments problematic.
Another day, another piece chronicling problems with the metrics scholars use to assess quality. Colin Wight sends George Lozano’s “The Demise of the Impact Factor”:
Using a huge dataset of over 29 million papers and 800 million citations, we showed that from 1902 to 1990 the relationship between IF and paper citations had been getting stronger, but as predicted, since 1991 the opposite is true: the variance of papers’ citation rates around their respective journals’ IF [impact factor] has been steadily increasing. Currently, the strength of the relationship between IF and paper citation rate is down to the levels last seen around 1970.
Furthermore, we found that until 1990, of all papers, the proportion of top (i.e., most cited) papers published in the top (i.e., highest IF) journals had been increasing. So, the top journals were becoming the exclusive depositories of the most cited research. However, since 1991 the pattern has been the exact opposite. Among top papers, the proportion NOT published in top journals was decreasing, but now it is increasing. Hence, the best (i.e., most cited) work now comes from increasingly diverse sources, irrespective of the journals’ IFs.
If the pattern continues, the usefulness of the IF will continue to decline, which will have profound implications for science and science publishing. For instance, in their effort to attract high-quality papers, journals might have to shift their attention away from their IFs and instead focus on other issues, such as increasing online availability, decreasing publication costs while improving post-acceptance production assistance, and ensuring a fast, fair and professional review process.
According to a new survey I’ve just completed, not great. As part of my ongoing research into human security norms, I embedded questions in YouGov’s Omnibus survey asking how people feel about the potential for outsourcing lethal targeting decisions to machines. 1,000 Americans were surveyed, matched on gender, age, race, income, region, education, party identification, voter registration, ideology, political interest, and military status. Across the board, 55% of Americans opposed autonomous weapons (nearly 40% were “strongly opposed”), and in a second question a majority (53%) expressed support for the new ban campaign.