This is a guest post by Paul Beaumont, PhD Candidate at the Norwegian University of Life Sciences (NMBU). Previously, he worked as an academic writing advisor at NMBU and as a Junior Research Fellow at the Norwegian Institute of International Affairs (NUPI).
Some time ago, back when Duckpods still happened, Nicholas Onuf talked to Dan Nexon about the impact of World of Our Making (WOOM). Onuf’s masterpiece is rightly credited with founding constructivism in International Relations. Yet as the two reflected upon the course that 1990s constructivism took, Onuf acknowledged that his linguistic constructivism had not quite fostered the sort of research he had envisioned. While glad of the recognition he received for WOOM, Nick jokingly lamented that his book had become “widely cited but never read.” A victim of “drive-by citations,” Nexon remarked: “we could do a whole podcast on those alone.”
The following is a guest post by Dan Reiter, the Samuel Dobbs Candler Professor of Political Science at Emory University.
Dr. Cullen Hendrix’s recent Duck of Minerva post on citation counts sparked a vibrant discussion about the value of citation counts as measures of scholarly productivity and reputation.
Beyond the question of whether citation count data should be used, we should also ask how citation count data are being used. We already know that, for better or worse, citation counts feed into many quantitative rankings, such as those produced by Academic Analytics and the National Research Council, and many journal rankings rely on them as well.
A related question: how are departments using citation count data in promotion decisions, a topic of central interest to all scholars?
The following is a guest post by Jeff Colgan, Richard Holbrooke Assistant Professor at Brown University, and is @JeffDColgan on Twitter.
It’s that time of year again, when professors are designing their syllabi with all the deliberation and care that end-of-summer deadlines allow. Recently I analyzed IR syllabi for PhD students. The data suggest a gender bias that instructors could easily correct.
The case that gender diversity is good for IR and political science has been made elsewhere, repeatedly and persuasively. According to APSA, women make up 42 percent of political science graduate students in the US, but only 24 percent of full-time professors. If part of encouraging female students to pursue academic careers in IR is showing them examples of great research by women, early and often, then we ought to pay attention to our syllabi.