Tag: philosophy of social science

A Gilded Age of Social Science: Big Data Governance, Neopositivist Social Science and Covid-19

This is a guest post by Dr. Adam B. Lerner, Assistant Professor of International Relations at Royal Holloway, University of London and Deputy Director of the Royal Holloway Centre for International Security.

As an American living in London, I wake up every morning and check statistics: the number of positive cases reported the prior day in both the UK and US, the number of deaths, hospitalizations and vaccine doses administered, the percentage of the population fully vaccinated and the number of days until the government promises to re-evaluate the lockdown’s end. These numbers determine when I might see my family again, when I might receive a vaccine or even when I might be able to meet a friend for a much-needed outdoor pint.

Of course, beneath these numbers may lie unspeakable loss to families and communities. Nevertheless, their quantification and continual visualization and dissemination in mass media can also make them feel like talismans, ripped from context, critical reflection and, oftentimes, the lives of real people. Indeed, their dominance in public discourse about the pandemic reflects the encroachment of neopositivist social science on lives and livelihoods in new ways—ways that have crowded out numerous other important considerations.


Qualitative Research Does Not Exist

This is a guest post by Simon Frankel Pratt. He is a lecturer in the School of Sociology, Politics, and International Studies at the University of Bristol.

In the social sciences, research and data are often divided into the categories ‘quantitative’ and ‘qualitative’. This is incoherent and should stop. There’s nothing informative in this distinction in terms of the logic of enquiry, the mode of inference, or the way data are used to support claims about the world. There is nothing methodological about it. But it won’t stop because if it did, our discipline would further marginalise non-positivist research.

I complained about this on Twitter, and I will expand on these complaints here. I'll start with the philosophy of social science problems. But then, I'll talk about power and hegemony.


The Interdisciplinarity Shibboleth: Christakis Edition

Andrew Gelman provides a nice rejoinder to Nicholas Christakis’ New York Times op-ed, “Let’s Shake up the Social Sciences.” Fabio Rojas scores the exchange for Christakis, but his commentators provide convincing rebuttals to Rojas. Once again, I suspect reactions to the column are driven by homophily rather than network effects. But all this aside, Christakis makes an interesting claim about the evidence for stagnation:

In contrast, the social sciences have stagnated. They offer essentially the same set of academic departments and disciplines that they have for nearly 100 years: sociology, economics, anthropology, psychology and political science. This is not only boring but also counterproductive, constraining engagement with the scientific cutting edge and stifling the creation of new and useful knowledge. Such inertia reflects an unnecessary insecurity and conservatism, and helps explain why the social sciences don’t enjoy the same prestige as the natural sciences.

Instead, we should provide more funding for people like Christakis to create departments that reflect the "cutting edge" of interdisciplinarity:

It is time to create new social science departments that reflect the breadth and complexity of the problems we face as well as the novelty of 21st-century science. These would include departments of biosocial science, network science, neuroeconomics, behavioral genetics and computational social science. Eventually, these departments would themselves be dismantled or transmuted as science continues to advance.

Some recent examples offer a glimpse of the potential. At Yale, the Jackson Institute for Global Affairs applies diverse social sciences to the study of international issues and offers a new major. At Harvard, the sub-discipline of physical anthropology, which increasingly relies on modern genetics, was hived off the anthropology department to make the department of human evolutionary biology. Still, such efforts are generally more like herds splitting up than like new species emerging. We have not yet changed the basic DNA of the social sciences. Failure to do so might even result in having the natural sciences co-opt topics rightly and beneficially in the purview of the social sciences.



The Society of Individuals

Although I have made many of the points I am about to make in comments posted on Phil’s and Eric’s posts about rational choice theory over the past week, what I want to do at this point is to pull the whole thing together and make clear just why I still maintain that rational choice theory — and indeed, the broader decision-theoretical world of which rational choice theory constitutes just a particular, heavily-mathematized province — endorses and naturalizes a form of selfishness that is ultimately corrosive of human community and detrimental to the very idea of moral action. This is not a social-scientific criticism, and has nothing to do with the explanatory power of decision-theoretic accounts. I am not suggesting that there are empirical phenomena that for some intrinsic reason can’t be accounted for in decision-theoretic terms; indeed, given a sufficiently clever decision theorist, armed with game theory on the one hand and some individual psychology on the other, I think it likely that everything of interest (except, as Phil and I acknowledge, fundamental changes in the constitution of actors themselves — this is his “paintbrush” point) could be explained decision-theoretically.

My point — my plea — is that it shouldn’t be. The “model of man” (sexism in original, and that’s almost certainly important…) at the heart of decision-theoretic accounts begins, as a matter of assumption, with individuals isolated from one another in a deep ontological sense. Such individuals can’t engage in moral action; the best they can do is to act in ways that happen to correspond with moral codes. Such individuals can’t make commitments to one another; the best they can do is associate and interact with one another as long as there are more benefits from doing so than from striking off in another direction. And such individuals can’t actually be members of communities, since their place in any given community is only ever contingent on factors over which they exercise no influence: namely, the strategic environment and their own preferences. Deploying explanatory models and theories that stem from such a notion of the human person, even though this is an ideal-type rather than an actual description or an explicit normative recommendation, reinforces the notion that this is how people are and should be, and that the most they can do is form, in Norbert Elias’ apt phrase, a “society of individuals.” In my view, reducing social outcomes to individual decisions is thus problematic for ethical, rather than explanatory, reasons.


A Certain Kind of Selfishness

This is more of a riff on Phil’s post from last week than a direct reply; the post that Dan and I wrote addresses more directly the issue of actor autonomy on which we think Phil misunderstood us (we and Phil were clearly on different semantic pages), so I am not going to go back over that ground here. Instead — and since we all basically agree that rational choice theory, as a species of decision-theoretic analysis, is located someplace in the tension between self-action and inter-action — I want to pursue a more specific point: the criticism of decision-theoretic accounts on both social-scientific and ethical grounds. In terms of the former register, there are kinds of questions that decision-theoretic accounts are simply not adequate to help us address. In terms of the latter register, the naturalization of individual selfishness that is inherent to decision-theoretic accounts, regardless of the preferences held by individual actors and however self-regarding or other-regarding they might be, provides an important avenue on which all such theories can be called into question.


Mapping IR Theory

Thanks to the patience of the former EJIR editorial team, PTJ and I will have an article in the forthcoming special issue on the “End of IR Theory?” Only the first 35-40% resembles the working paper (PDF) we posted at the Duck. Even the name has changed.

We still argue in favor of thinking about international-relations theory as dealing with “scientific ontologies”: “catalog[s]–or map[s]–of the basic substances and processes that constitute world politics.” As we note in both the final version and the working paper, this includes:

  • The actors that populate world politics, such as states, international organizations, individuals, and multinational corporations;
  • The contexts and environments within which those actors find themselves;
  • Their relative significance to understanding and explaining international outcomes;
  • How they fit together, such as parts of systems, autonomous entities, occupying locations in one or more social fields, nodes in a network, and so forth;
  • What processes constitute the primary locus of scholarly analysis, e.g., decisions, actions, behaviors, relations, and practices; and
  • The inter-relationship among elements of those processes, such as preferences, interests, identities, social ties, and so on.



The State of Political Science

It may, however, be appropriate to point out that the persisting bipolar conflict in the field between humanists and behavioralists conceals a lively polemic within both camps and perhaps particularly among the so-called behavioralists. Among the modernists neologisms burst like roman candles in the sky, and wars of epistemological legitimacy are fought. The devotees of rigor and theories of the middle range reject more speculative general theory as non-knowledge; and the devotees of general theory attack those with more limited scope as technicians, as answerers in search of questions.



Null Results

Chris Blattman links to a paper (PDF) that finds no relationship between oil production and violence. He comments:

Regardless what you believe, however, there’s a striking shortage of null results papers published in top journals, even when later papers overturn the results. I’m more and more sickened by the lack of interest in null results. All part of my growing disillusionment with economics and political science publishing (and most of our so-called findings). Sigh…

To which I say, “Yes. Yes. A thousand times yes!”

If we really cared about truth-seeking in the social sciences, let alone our own discipline of political science, we would consider null results on significant issues to be of critical importance. We would certainly consider them more important than the far-too-common paper with a “positive” result that results from, well, efforts to get to a positive result via “tweaking.”
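To see why “tweaked” positive results come so cheap, here is a quick toy simulation (mine, purely illustrative — the rough 2/√n significance cutoff and all the data are made up, not drawn from any real study): if you try enough specifications on pure noise, some of them will clear a conventional significance bar anyway.

```python
import random

random.seed(0)

def looks_significant(xs, ys):
    """Crude screen: is the sample correlation beyond a naive ~2/sqrt(n)
    cutoff, roughly the conventional 5% bar for largish n?"""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    r = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n * sx * sy)
    return abs(r) > 2 / n ** 0.5

# One outcome and forty candidate predictors -- all of them pure noise.
n = 50
outcome = [random.gauss(0, 1) for _ in range(n)]
predictors = [[random.gauss(0, 1) for _ in range(n)] for _ in range(40)]

# "Tweaking": keep swapping in predictors until something clears the bar.
hits = [i for i, p in enumerate(predictors) if looks_significant(p, outcome)]
print(f"{len(hits)} of 40 noise predictors cleared the significance bar")
```

On average roughly five percent of pure-noise predictors will clear the bar, which is exactly why a published literature consisting only of “positive” results is so uninformative.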



Reply on “physics envy”

Because not everyone reads comment threads, in part because of the way that people engage with The Duck via RSS readers, and because I think the questions involved are really important ones, I’m going to post my reaction to PM’s “Yes, I do envy physicists” as a separate post of its own:

Man, I was right with you until your advance response to commenters. Making “data and its analysis central to the undergraduate experience” — a.k.a. emphasizing undergraduate research, such that one of the primary learning outcomes of a BA in International Relations or Government or Political Science or whatever is the critical intellectual disposition necessary to be both an intelligent producer of knowledge about the social and political world and an intelligent consumer of other knowledge-claims about that world — is spot-on. (And part of why one of the first administrative changes I made as Associate Dean in my school was to establish the position of Undergraduate Research Coordinator, whose job is both to coordinate our methodological course offerings and to make sure that upper-division classes feature opportunities to actually use those techniques in research projects as appropriate.) Now, you and I (probably) disagree about the relative prominence of statistical training in the enterprise of undergraduate research, since as you know I am a lot more small-c-catholic about (social-)scientific methodology than, well, most people. But hey, we’re in the same basic place…

…and then you had to go and diss history and theory. This is counterproductive for at least three reasons:

1) one can’t do good research without both theory and methodology, and the point of the exercise is to help people learn how to do good research, not how to use methodological tools in isolation.

2) de-emphasizing history and theory at the undergraduate level basically guarantees that “re-emphasizing” it at the graduate level ain’t never gonna happen. Teched-up statisticians going to graduate programs aren’t likely to willingly seek out unfamiliar ways of thinking about knowledge-production, and let’s be honest, theory — whatever your favorite flavor of theory — isn’t like methodology in general and isn’t like statistical-comparative methodology of the quantitative kind in particular. So you’ll either get a) statisticians launching smash-and-grab raids on history and theory for a justificatory fig-leaf for their operational definitions of variables and for supposedly “objective” data to use in testing their hypotheses (hey, wait a second, that sounds familiar…oh yeah, it’s what “mainstream U.S. PoliSci” does ALL THE FRAKKING TIME ALREADY); or b) existential crises when students discover that everything they learned in undergrad — I am referring to the “hidden curriculum” here, the conclusions that students will draw from the emphasis on statistics and the de-emphasis on history and theory — is wrong or at least seriously incomplete. Then you factor in the professional incentives for publication in “top-tier” US journals, and the lack of ability to meaningfully evaluate non-statistical work if one hasn’t spent some serious time training in how to appreciate that work, and you get…well, you get basically what we have at the moment in US PoliSci, but worse.

3) since we’re social scientists and not statisticians (or discourse analysts, or ethnographers, or surveyors, or…), methodology is a means to an end, and that end is or should always be the explanation of stuff in the social world. A social scientist teaching stats should be teaching about how one uses stats to make sense of the social world; ditto a social scientist teaching whatever methodology or technique one is teaching. Yes, the disciplinary specialists in those tools are not going to be particularly pleased with everything that we do, but that’s okay, since we’re on a different mission. And that mission necessitates history and theory just as much as it necessitates methodology (and, I would argue, a broad and diverse set of methodological literacies). If one tries to play the game where one looks for external validation of one’s methodological chops by people whose discipline specializes in a particular set of tools, then one is probably going to lose, or one is going to be dismissed as derivative. We’re not about to locate the Higgs boson with anything we do in the social sciences, and we’re not likely to contribute to any other discipline (I mean, it happens, but I think the frequency is pretty low). What we are going to do, or at least keep on trying to do, is to enhance our understanding of the social world. More stats training — more methodology training of any sort — at the undergraduate level is not necessarily a means to that end, unless it occurs in conjunction with more history and theory.

None of this is going to help the public understand what we do any better. We don’t make nuclear bombs or cell phones or (un)employment, and the U.S. is kind of a dispositionally anti-intellectual place (has been since the founding of the country…see Tocqueville, Hofstadter, etc.), so theory isn’t respected as a contribution. Everybody wants results that they can easily see — can you build a better mousetrap? — and the vague sense that physicists have something to do with engineers and economists have something to do with entrepreneurs (who are, I think, the actual figures that get public prestige, because they do practical stuff) shores up their respective social value. But us, what we have a vague connection to are POLITICIANS, and everybody hates them. So that’s an uphill battle we’re probably fated to lose. So my punchline, which I’ve given many times before: our primary job is teaching students, our scholarship makes us better teachers, and the place to point for evidence of our social value is to those who graduate from our colleges and universities and the people they’ve become as a result of dwelling for a time in the happily intellectual and critical environment we contribute to producing on campus.


Experiments, Social Science, and Politics

[This post was written by PTJ]

One of the slightly disconcerting experiences from my week in Vienna teaching an intensive philosophy of science course for the European Consortium for Political Research involved coming out of the bubble of dialogues with Wittgenstein, Popper, Searle, Weber, etc. into the unfortunate everyday actuality of contemporary social-scientific practices of inquiry. In the philosophical literature, an appreciably and admirably broad diversity reigns, despite the best efforts of partisans to tie up all of the pieces of the philosophy of science into a single and univocal whole or to set perennial debates unambiguously to rest: while everyone agrees that science in some sense “works,” there is no consensus about how and why, or even whether it works well enough or could stand to be categorically improved. Contrast the reigning unexamined and usually unacknowledged consensus of large swaths of the contemporary social sciences that scientific inquiry is neopositivist inquiry, in which the endless drive to falsify hypothetical conjectures containing nomothetic generalizations is operationalized in the effort to disclose ever-finer degrees of cross-case covariation among ever-more-narrowly-defined variables, through the use of ever-more sophisticated statistical techniques. I will admit to feeling more than a little like Han Solo when the Millennium Falcon entered the Alderaan system: “we’ve come out of hyperspace into a meteor storm.”

Two examples leap to mind, characteristic of what I will somewhat ambitiously call the commonsensical notion of inquiry in the contemporary social sciences. One is the recent exchange in the comments section of PM’s post on Big Data (I feel like we ought to treat that as a proper noun, and after a week in a German-speaking country capitalizing proper nouns just feels right to me) about the notion of “statistical inference,” in which PM and I highlight the importance of theory and methodology to causal explanation, and Erik Voeten (unless I grossly misunderstand him) suggests that inference is a technical problem that can be resolved by statistical techniques alone. The second is the methodological afterword to the AAC&U report “Five High-Impact Practices” (the kind of thing that those of us who wear academic administrator hats in addition to our other hats tend to read when thinking about issues of curriculum design), which echoes some of the observations made in the main report on the methodological limitations of research on higher-education practices such as first-year seminars and undergraduate research opportunities — what is called for throughout is a greater effort to deal with the “selection bias” caused by the fact that students who select these programs as undergraduates might be those students already inclined to perform well on the outcome measures that are used to evaluate the programs (students interested in research choose undergraduate research opportunities, for example), and therefore it is difficult if not impossible to ascertain the independent impact of the programs themselves. (There are also some recommendations about defining program components more precisely so that impacts can be further and more precisely delineated, especially in situations where a college or university’s curriculum contains multiple “high-impact practices,” but those just strengthen the basic orientation of the criticisms.)

The common thread here is the neopositivist idea that “to explain” is synonymous with “to identify robust covariations between,” so that “X explains Y” means, in operational terms, “X covaries significantly with Y.” X’s separability from Y, and from any other independent variables, is presumed as part of this package, so efforts have to be taken to establish the independence of X. The gold standard for so doing is the experimental situation, in which we can precisely control for things such that two populations only vary from one another in their value of X; then a simple measurement of Y will show us whether X “explains” Y in this neopositivist sense. Nothing more is required: no complex assessments of measurement error, no likelihood estimates, nothing but observation and some pretty basic math. When we have multiple experiments to consider, conclusions get stronger, because we can see — literally, see — how robust our conclusions are, and here again a little very basic math suffices to give us a measure of confidence in our conclusions.
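The “pretty basic math” of the experimental situation can be made concrete with a toy sketch (hypothetical numbers throughout, not any real experiment — the half-standard-deviation effect of X is simply stipulated): two populations identical except for X, where the “analysis” is nothing more than comparing mean outcomes, and “seeing” robustness is just looking across repeated trials.

```python
import random

random.seed(1)

def run_trial(n=1000, effect=0.5):
    """One experimental trial: control and treatment populations are
    identical except that treatment receives X, which (by stipulation)
    shifts Y upward by half a standard deviation."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treatment = [random.gauss(effect, 1) for _ in range(n)]
    return sum(treatment) / n - sum(control) / n

# Repeated trials: the conclusion about X "explaining" Y, in the
# neopositivist sense, is literally visible in the spread of differences.
diffs = [run_trial() for _ in range(20)]
mean_diff = sum(diffs) / len(diffs)
print(f"mean difference across 20 trials: {mean_diff:.2f}")
```

Note that no confidence intervals or likelihood machinery appear anywhere here; the artificial control of the laboratory set-up is doing all of the inferential work.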
But note that these conclusions are conclusions about repeated experiments. Running a bunch of trials under experimental conditions allows me to say something about the probability of observing similar relationships the next time I run the experiment, and it does so as long as we adopt something like Karl Popper’s resolution of Hume’s problem of induction: no amount of repeated observation can ever suffice to give us complete confidence in the general law (or: nomothetic relationship, since for Popper as for the original logical positivists in the Vienna Circle a general law is nothing but a robust set of empirical observations of covariation) we think we’ve observed in action, but repeated failures to falsify our conjecture are a sufficient basis for provisionally accepting the law. The problem here is that we’ve only gotten as far as the laboratory door, so we know what is likely to happen in the next trial, but what confidence do we have about what will happen outside of the lab? The neopositivist answer is to presume that the lab is a systematic window into the wider world, that statistical relationships revealed through experiment tell us something about one and the same world — a world the mind-independent character of which underpins all of our systematic observations — that is both inside and outside of the laboratory. But this is itself a hypothetical conjecture, for a consistent neopositivism, so it too has to be tested; the fact that lab results seem to work pretty well constitutes, for a neopositivist, a sufficient record of failed falsifications that it’s okay to provisionally accept lab results as saying something about the wider world too.
Now, there’s another answer to the question of why lab results work, which has less to do with conjecture and more to do with the specific character of the experimental situation itself. In a laboratory one can artificially control the situation so that specific factors are isolated and their independent effects ascertained; this, after all, is what lab experiments are all about. (I am setting aside lab work involving detection, because that’s a whole different kettle of fish, philosophically speaking: detection is not, strictly speaking, an experiment, in the sense I am using the term here. But I digress.) As scientific realists at least back to Rom Harré have pointed out, this means that the only way to get those results out of the lab is to make two moves: first, to recognize that what lab experiments do is to disclose causal powers, defined as tendencies to produce effects under certain circumstances, and second, to “transfactually” presume that those causal powers will operate in the absence of the artificially-designed laboratory circumstances that produce more or less strict covariations between inputs and outputs. In other words, a claim that this magnetic object attracts this metallic object is not a claim about the covariation of “these objects being in close proximity to one another” and “these objects coming together”; the causal power of a magnet to attract metallic objects might or might not be realized under various circumstances (e.g. in the presence of a strong electric field, or the presence of another, stronger magnet). It is instead not a merely behavioral claim, but a claim about dispositional properties — causal powers, or what we often call in the social sciences “causal mechanisms” — that probably won’t manifest in the open system of the actual world in the form of statistically significant covariations of factors.
Indeed, realists argue, thinking about what laboratory experiments do in this way actually gives us greater confidence in the outcome of the next lab trial, too, since a causal mechanism is a better place to lodge an account of causality than a mere covariation, no matter how robust, could ever be.
Hence there are at least two ways of getting results out of the lab and into the wider world: the neopositivist testing of the proposition that lab experiments tell us something about the wider world, and the realist transfactual presumption that causal powers artificially isolated in the lab will continue to manifest in the wider world even though that manifestation will be greatly affected by the sheer complexity of life outside the lab. Both rely on a reasonably sharp laboratory/world distinction, and both suggest that valid knowledge depends, to some extent, on that separation. This impetus underpins the actual lab work in the social sciences, whether psychological or cognitive or, arguably, computer-simulated; it also informs the steady search of social scientists for the “natural experiment,” a situation close enough to a laboratory experiment that we can almost imagine that we ran it in a lab. (Whether there are such “natural experiments,” really, is a different matter.)
Okay, so what about, you know, most of the empirical work done in the social sciences, which doesn’t have a laboratory component but still claims to be making valid claims about the causal role of independent factors? Enter “inferential statistics,” or the idea that one can collect open-system, actual-world data and then massage it to appropriately approximate a laboratory set-up, and draw conclusions from that.

Much of the apparatus of modern “statistical methods” comes in only when we don’t have a lab handy, and is designed to allow us to keep roughly the same methodology as that of the laboratory experiment despite the fact that we don’t, in fact, run experiments in controlled environments that allow us to artificially separate out different causal factors and estimate their impacts. Instead, we use a whole lot of fairly sophisticated mathematics to, put bluntly, imagine that our data was the result of an experimental trial, and then figure out how confident we can be in the results it generated. All of the technical apparatus of confidence intervals, different sorts of estimates, etc. is precisely what we would not need if we had laboratories, and it is all designed to address this perceived lack in our scientific approach. Of course, the tools and techniques have become so naturalized, especially in Political Science, that we rarely if ever actually reflect on why we are engaged in this whole calculation endeavor; the answer goes back to the laboratory, and its absence from our everyday research practices.
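What that “lab-ifying” apparatus amounts to can be sketched crudely (toy data only; the formula is just the standard large-sample interval for a difference in means, not anything specific to an actual study): take an observational gap between two groups nobody assigned, and run exactly the arithmetic we would run on a real experimental trial.

```python
import random

random.seed(3)

# Observational data: two groups we did NOT assign. A built-in
# half-point gap stands in for whatever the open system actually did.
group_a = [random.gauss(0.5, 1) for _ in range(400)]
group_b = [random.gauss(0.0, 1) for _ in range(400)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# The "lab-ifying" move: the same calculation we would use for a
# controlled trial, applied to data no one ever controlled.
diff = mean(group_a) - mean(group_b)
se = (var(group_a) / len(group_a) + var(group_b) / len(group_b)) ** 0.5
ci = (diff - 1.96 * se, diff + 1.96 * se)
print(f"difference {diff:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

The arithmetic runs happily either way, whether or not an experiment ever took place — which is part of why the question of what the numbers refer to so rarely gets asked.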
But if we put the pieces together, we encounter a bit of a profound problem: we don’t have any way of knowing whether these approximated labs that we build via statistical techniques actually tell us anything about the world. This is because, unlike an actual lab, the statistical lab-like construction (or “quasi-lab”) that we have built for ourselves has no clear outside — and this is not simply a matter of trying to validate results using other data. Any actual data that one collects still has to be processed and evaluated in the same way as the original data, which — since that original process was, so to speak, “lab-ifying” the data — amounts, philosophically speaking, to running another experimental trial in the same laboratory. There’s no outside world to relate to, no non-lab place in which the magnet might have a chance to attract the piece of metal under open-system, actual-world conditions. Instead, in order to see whether the effects we found in our quasi-lab obtain elsewhere, we have to convert that elsewhere into another quasi-lab. Which, to my mind, raises the very real possibility that the entire edifice of inferential statistical results is a grand illusion, a mass of symbols and calculations signifying nothing. And we’d never know. It’s not like we have the equivalent of airplanes flying and computers working to point to — those might serve as evidence that somehow the quasi-lab was working properly and helping us validate what needs to be validated, and vice versa. What we have is, to be blunt, a lot of quasi-lab results masquerading as valid knowledge.
One solution here is to do actual lab experiments, the results of which could be applied to the non-lab world in a pretty straightforward way whether one were a neopositivist or a realist: in neither case would one be looking for covariations, but instead one would be looking to see how and to what degree lab results manifested outside of the lab. Another solution would be to confine our expectations to the next laboratory trial, which would mean that causal claims would have to be confined to very similar situations. (An example, since I am writing this in Charles De Gaulle airport, a place where my luggage has a statistically significant probability of remaining once I fly away: based on my experience and the experience of others, I have determined that CDG has some causal mechanisms and processes that very often produce a situation where luggage does not make it onto a connecting flight, and this is airline-invariant as far as I can tell. It is reasonable for me to expect that my luggage will not make it onto my flight home, because this instance of my flying through CDG is another trial of the same experiment, and because so far as I know and have heard nothing has changed at CDG that would make it any more likely that my bag will make the flight I am about to board. What underpins my expectation here is the continuity of the causal factors, processes, and mechanisms that make up CDG, and generally incline me to fly through Schiphol or Frankfurt instead whenever possible … sadly, not today. This kind of reasoning also works in delimited social systems like, say, Major League Baseball or some other sport with sufficiently large numbers of games per season.)
Not sure how well this would work in the social sciences, unless we were happy only being able to say things about delimited situations; this might suffice for opinion pollsters, who are already in the habit of treating polls as simulated elections, and perhaps one could do this for legislative processes so long as the basic constitutional rules both written and unwritten remained the same, but I am not sure what other research topics would fit comfortably under this approach.
[A third solution would be to say that all causal claims were in important ways ideal-typical, but explicating that would take us very far afield so I am going to bracket that discussion for the moment — except to say that such a methodological approach would, if anything, make us even more skeptical about the actual-world validity of any observed covariation, and thus exacerbate the problem I am identifying here.]
But we don’t have much work that proceeds in any of these ways. Instead, we get endless variations on something like the following: collect data; run statistical procedures on data; find covariation; make completely unjustified assumption that the covariation is more than something produced artificially in the quasi-lab; claim to know something about the world. So in the AAC&U report I referenced earlier, the report’s authors and those who wrote the Afterword are not content simply to collect examples of innovative curriculum and pedagogy in contemporary higher education; they want to know, e.g., if first-year seminars and undergraduate research opportunities “work,” which means whether they significantly covary with desired outcomes. So to try to determine this, they gather data on actual programs … see the problem?

The whole procedure is misleading, almost as if it made sense to run a “field experiment” that would conduct trials on the actual subjects of the research to see what kinds of causal effects manifested themselves, and then somehow imagine that this told us something about the world outside of the experimental set-up. X significantly covarying with Y in a lab might tell me something, but X covarying with Y in the open system of the actual world doesn’t tell me anything — except, perhaps, that there might be something here to explain. Observed covariation is not an explanation, regardless of how complex the math is. So the philosophically correct answer to “we don’t know how successful these programs are” is not “gather more data and run more quasi-experiments to see what kind of causal effects we can artificially induce”; instead, the answer should be something like “conceptually isolate the causal factors and then look out into the actual world to see how they combine and concatenate to produce outcomes.” What we need here is theory and methodology, not more statistical wizardry.
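The point that observed covariation is not an explanation can be made concrete with a toy simulation (entirely hypothetical data, nothing drawn from the AAC&U report or any actual study): a lurking confounder Z drives both X and Y, so X and Y covary strongly even though neither causes the other.

```python
import math
import random

# Minimal sketch, all names and numbers are assumptions for illustration:
# Z is an unobserved confounder; X and Y each depend only on Z.
random.seed(0)
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]  # X is caused only by Z
y = [zi + random.gauss(0, 0.5) for zi in z]  # Y is caused only by Z

def pearson(a, b):
    """Pearson correlation coefficient, computed from first principles."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    return cov / math.sqrt(sum((ai - ma) ** 2 for ai in a) *
                           sum((bi - mb) ** 2 for bi in b))

r = pearson(x, y)
print(f"corr(X, Y) = {r:.2f}")  # strong covariation despite no X -> Y effect
```

With these (assumed) variances the correlation comes out around 0.8 and would easily pass any significance test, yet intervening on X would do nothing to Y; in the open system of the actual world, where such confounders are ubiquitous and uncontrolled, a significant covariation by itself establishes nothing causal.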

Of course, for reasons having more to do with the sociology of higher education than with anything philosophically or methodologically defensible, academic administrators have to have statistically significant findings in order to get the permission and the funding to do things that any of us in this business who think about it for longer than a minute will agree are obviously good ideas, like first-year seminars and undergraduate research opportunities. (Think about it. Think … there, from your experience as an educator, and your experience in higher education, you agree. Duh. No statistics necessary.) So reports like the AAC&U report are great political tools for doing what needs to be done.

And who knows, they might even convince people who don’t think much about the methodology of the thing — and in my experience many permission-givers and veto-players in higher education don’t think much about the methodology of such studies. So I will keep using it, and other such studies, whenever I can, in the right context. Hmm. I wonder if that’s what goes on when members of our tribe generate a statistical finding from actual-world data and take it to the State Department or the Defense Department? Maybe all of this philosophy-of-science methodological criticism is beside the point, because most of what we do isn’t actually science of any sort, or even all that concerned with trying to be a science: it’s just politics. With math. And a significant degree of self-delusion about the epistemic foundations of the enterprise.


Knowing and the Known

Although the majority of the offerings in the European Consortium for Political Research’s inaugural Winter School in Methods and Techniques (to be held in Cyprus in February 2012) are pretty firmly neopositivist, at the risk of sounding like a shameless self-promoter I’d like to call your attention to course A6, “Knowing and the Known: Philosophy and Methodology of Social Science,” which I am teaching. The short description of this course is:

“The social sciences have long been concerned with the epistemic status of their empirical claims. Unlike in the natural sciences, where an evident record of practical success tends to make the exploration of such philosophical issues a narrowly specialized endeavour, in the social sciences, differences between the philosophies of science underpinning the empirical work of varied researchers produce important and evident differences in the kind of social-scientific work that they do. Philosophy of science issues are, in this way, closer to the surface of social-scientific research, and part of being a competent social scientist involves coming to terms with and developing a position on those issues. This course will provide a survey of important authors and themes in the philosophy of the social sciences, concentrating in particular on the relationship of the mind to the social world and on the relationship between knowledge and experience; students will have ample opportunities to draw out the implications of different stances on these issues for their concrete empirical research.”

Further details, including the long course description, below the fold.


The First ECPR Winter School in Methods and Techniques – REGISTER NOW!
Eastern Mediterranean University, Famagusta, North Cyprus
11th – 18th February 2012

We are very pleased to announce that registration for the first Winter School in Methods and Techniques (WSMT) is now officially open!

This year’s school is being held at the Eastern Mediterranean University in the beautiful surroundings of Famagusta. Register for your course(s) before 1st November and you will receive a special ‘Early Bird Discount’. All information, including the registration form, can be found via https://new.ecprnet.eu/MethodSchools/WinterSchools.aspx (move your cursor over “Winter School” –> “2012 – Cyprus” to consult the various pages).

The Winter School will be an annual event that is complementary to the ECPR’s Summer School, and there will be a loyalty discount for participants who wish to take part in the 3-step programme at both of these schools (further details on the “course fees” page).
The comprehensive programme consists of introductory courses and advanced courses, in a one-week format, suitable for advanced students and junior researchers in political science and its adjacent disciplines. There will also be the opportunity to participate in one of the software training courses.
Intermediate-level courses will continue to be held at the 2012 Summer School in Methods and Techniques in Ljubljana, in either one two-week course or two consecutive one-week courses.

If you have any questions or require any further information please contact Denise Chapman, Methods School Manager on +44 (0)1206 874115 or by email: dchap@essex.ac.uk

Best regards,
Profs. Benoît Rihoux & Bernhard Kittel, Academic convenors


This course is a broad survey of epistemological, ontological, and methodological issues relevant to the production of knowledge in the social sciences. The course has three overlapping and interrelated objectives:

  • to provide you with a grounding in these issues as they are conceptualized and debated by philosophers, social theorists, and intellectuals more generally;
  • to act as a sort of introduction to the ways in which these issues have been incorporated (sometimes—often—inaccurately) into different branches of the social sciences;
  • to serve as a forum for reflection on the relationship between these issues and the concrete conduct of research, both your own and that of others.

That having been said, this is neither a technical “research design” nor a “proposal writing” class, but is pitched at a somewhat greater level of abstraction. As we proceed through the course, however, you should try not to lose sight of the fact that these philosophical debates have profound consequences for practical research. Treat this course as an opportunity to set aside some time to think critically, creatively, and expansively about the status of knowledge, both that which you have produced and will produce, and that produced by others.

The “science question” rests more heavily on the social sciences than it does on the natural sciences, for the simple reason that the evident successes of the natural sciences in enhancing the human ability to control and manipulate the physical world stand as a powerful rejoinder to any scepticism about the scientific status of fields of inquiry like physics and biology. The social sciences have long laboured in the shadow of those successes, and one popular response has been to try to model the social sciences on one or another of the natural sciences; this naturalism forms one of the recurrent gestures in the philosophy of the social sciences, and we will trace it through its incarnation in the Logical Positivism of the Vienna Circle and then into the “post-positivist” embrace of falsification as the mark of a scientific statement. Problems generated by the firm emphasis on lawlike generalizations through both of these naturalistic approaches to social science lead to the reformulated naturalism of critical realism, as well as to the rejection of naturalism by pragmatists and followers of classical sociologists like Max Weber. Finally, we will consider the tradition of critical theory stemming from the Frankfurt School, and the contemporary manifestation of that commitment to reflexive knowledge in feminist and post-colonial approaches to social science.

While not an exhaustive survey of the philosophy of the social sciences, this course will serve as an opportunity to explore some of the perennial issues of great relevance to the conduct of social-scientific inquiry, and will thus function as a solid foundation for subsequent reading and discussion—and for the practice of social science. Throughout the course we will draw on exemplary work from Anthropology, Economics, Sociology, and Political Science; students will be encouraged to draw on their own disciplines as well as these others in producing their reflections and participating in our lively discussions.


Breakout Star Patrick T. Jackson Takes APSA by Storm

The Canard
“All the fake news that’s fit to print”


The American Political Science Association is abuzz with talk about a breakout book by an up-and-coming star in international relations theory, Patrick T. Jackson. While Jackson had previously established strong indie credentials through gritty work on the strategic construction of the notion of Western civilization by the United States after WWII, The Conduct of Inquiry in International Relations looks to give Jackson the broader notoriety in the field that many have long thought he deserved. The book blurb proposes that it “pops a cap in the ass of the bitch-ass notion of a single unified scientific method, and proposes a framework that clarifies the variety of ways that IR scholars establish whether their empirical claims are correctamundo.” Sales of his book were brisk at the Routledge booth, where the bold cover was also a hit. It is the one that says “bad mother*@er” on it.

Jackson’s book has elicited a firestorm of criticism from game theorists, stats jocks and other meth-heads in the discipline. Jackson’s response? “If my answers frighten you, then you should cease asking scary questions.”*

In one of its most quotable passages, Jackson writes: “The path of the social scientist is beset on all sides by the inequities of the pseudo-positivists and the tyranny of the APSR. More scientific is he who, in the name of charity and good will, shepherds the grad students through the dark valley of political science, for he is truly his brother’s keeper and the finder of lost insights in international relations. And I will strike down upon thee with great vengeance and furious anger those who attempt to poison and destroy my pluralist brothers. And you will know I am the Director of General Education at American University when I lay my vengeance upon you.”

Jackson’s new book comes at an increasingly self-reflective time in international relations. Most recently, prominent scholars David Lake and Peter Katzenstein have offered their ideas about what ails the discipline. When asked to comment on their contributions, Jackson responded in his inimitable fashion:

“Get these motherfu*cking positivists out of my motherf*cking discipline!”

“Normally, both their asses would be dead as f*&king fried chicken, but they happened to pull this sh*t while I’m in a transitional period so I don’t wanna kill them. I wanna help them.”

Jackson’s academic style is brusque and confrontational. After a spirited give and take at an APSA panel with colleague Dan Nexon on the use of Lakatos in assessing paradigmatic progress in international relations, Jackson bristled sarcastically, “Check out the big brain on Dan! Oh, you were finished!? Well, allow me to retort.” When Nexon pushed Jackson on his belief in the inapplicability of Popperian falsification methods to the social subject matter of political science, Jackson rejoined: “English, motherf*@ker! Do you speak it?” He then asked rhetorically, somewhat incongruously, “Do you know what they call hermeneutics in France? L’herméneutique.” His hotel room was later found trashed.

While Jackson has never formally acknowledged it, it is open knowledge in the field that his middle initial stands for “Thaddeus.” Fearing for their lives, however, the paper could not get anyone to speak to that on the record. Jackson carries a briefcase with him at all times. Its contents are unknown, but there is a rumor that it contains Bruce Bueno de Mesquita’s soul.

*I borrowed this from techne. See below. It was just too good to pass up.


Quote of the Day


In short, there’s no reason at all to consider microeconomics the “real” economics and macroeconomics some kind of flaky impostor. Yes, micro is a lot more rigorous — but if it’s rigorously wrong, who cares?


© 2021 Duck of Minerva
