In his latest post, PTJ moves us past the worst critiques of “rational choice theory” and focuses on a few more nuanced concerns.1 I’m glad to see the conversation progressing, and this type of exchange is one of the things I love most about academic blogging. However, I find some of PTJ’s arguments problematic.
I argued last week that most criticisms of “rational choice” amount to attempts to persuade graduate students not to learn a language because its speakers don’t often discuss the right topics. That’s worrisome, because insofar as concerns about the narrow focus of those who speak said language are valid, they support an argument for learning that language and then changing the conversation far more than they do an argument against learning it. But PTJ’s concerns are of a different nature — he agrees that it is inappropriate either to criticize rational choice for developing stylized models of an ideal-typical nature2 or to assume that rational choice theory inherently contains any particular substantive assumptions about people’s preferences. His critique, then, is not about the types of things people discuss when they speak the language of rational choice, but the inherent limitations of the language itself. And to some extent, I think he’s right. But not entirely.
A better metaphor for “rational choice” than a foreign language would be a toolbox. My post last week essentially argued that just because most people who go out and buy the toolbox never use anything but the hammer doesn’t make it fair to say that “rational choice” offers nothing to those who have need of a screwdriver. PTJ’s response, in a sense, is thus: 1) the toolbox doesn’t contain a paintbrush, and so may be useful to those who want to hang pictures or install bookshelves but won’t help anyone transform a room entirely; 2) the very act of giving someone a toolbox encourages them to think the world is full of nails and screws; and, most disturbingly, 3) this encourages people to try to fix what ain’t broken. (Okay, this metaphor’s not perfect either. Bear with me.)
Or, in his own words:
because such accounts depend by assumption on constitutively autonomous actors selfishly pursuing their own desires, they are incapable of explaining fundamental changes in those actors themselves.
This is the paintbrush criticism, and it’s entirely valid. I would quibble with the use of the word “selfishly” here, but we’ll get to that below. The more important charge here is that “rational choice” is incapable of explaining fundamental changes in actors themselves. As I’ve argued elsewhere, it’s actually entirely possible to model preference change. But PTJ would note, rightly, that there’s a difference between building a model in which preferences are allowed to vary over time and explaining why they vary. Appeal to exogenous shocks allows one to account for change in a certain sense, but it’s ultimately ad hoc, and there aren’t many factors of interest in social science that are truly exogenous.3 So while “incapable” might be a hair too strong, I’d readily agree that if you want to paint your room a different color, you ought to look elsewhere.
My real issue is with the second and third points.
PTJ says that “rational choice” does not even allow for the possibility of altruism. Given the abundance of theoretical models claiming to do just that, we need to unpack this a bit. What PTJ really means here is that one must choose between assuming that people maximize utility functions and believing in the possibility of moral behavior — one cannot simultaneously do both. That’s a strong charge, and one that can’t be dismissed lightly (hence the length of this post, for which I apologize in advance). However, it rests upon two problematic premises: 1) that Kant’s view of moral behavior is the only valid one (as PTJ basically acknowledges); and 2) a definition of utility maximization so narrow that it implies that decision-theoretic and game-theoretic models don’t actually require us to assume that actors maximize utility.
Even if one does not feel that Kant’s remarks on race call his authority on ethical matters into question,4 and even if we ignore the fact that consequentialist moral philosophy is a real thing, when PTJ says
I am not sure what grounds an actor would have for doing so unless one action brought more utility than another to the actor. What does preferring one option over another mean if the actor isn’t comparing the different states of the world and then concluding that in one of them she or he will benefit more?
he effectively proclaims that the word “utility” must be understood to refer to a concept so narrow that it is of no practical relevance. You see, it turns out that there is a methodology which allows us to analyze the behavior of agents who do what is right simply because it is right and which also happens to be so similar to the one in question as to be indistinguishable from it. Thankfully enough, algebra and calculus work the same way whether the inequalities we manipulate or the functions we maximize contain “utilities” or not. And since all these models really assume is that actors choose the action with the larger number associated with it, all we have to do is call those numbers something other than “utilities” and this critique simply goes away. For example, if we were to assume a world full of pacifists, we might assign the cost terms in standard bargaining models values of positive infinity. Or any arbitrarily large number, really. If we did that, our models would predict that war would never occur. I’m not sure what would be interesting about that, but neither do I understand why it would be incorrect to say that peace prevails in such models because of its inherent moral virtue. Put differently, that we happen to call the numbers we attach to outcomes “utilities”, and that most people alive today associate that word strongly with the work of Bentham and Mill, no more justifies PTJ’s comments than calling scyphozoa “jellyfish” makes them fish. Or fills them with jelly.
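To make the relabeling point concrete, here is a minimal sketch of the standard crisis-bargaining logic alluded to above (a Fearon-style setup; the function name and parameterization are my own illustration, not anyone’s canonical implementation). The numbers attached to outcomes do the same algebraic work whatever we call them, and driving the cost terms toward arbitrarily large values makes every division of the pie preferable to war:

```python
def bargaining_range(p, c1, c2):
    """Divisions of a unit 'pie' that both sides prefer to war.

    p:  state 1's probability of winning a war (between 0 and 1)
    c1, c2: each side's cost of fighting, in the same units as the pie.
    War gives state 1 an expected p - c1 and state 2 an expected 1 - p - c2,
    so any split x with p - c1 <= x <= p + c2 beats war for both sides.
    The labels on these numbers ('utility' or otherwise) never enter the math.
    """
    lo = max(0.0, p - c1)   # least state 1 will accept
    hi = min(1.0, p + c2)   # most state 2 will concede
    # With positive costs the range is never empty; None covers odd inputs.
    return (lo, hi) if lo <= hi else None

# Ordinary costs: a nonempty set of settlements both prefer to war.
print(bargaining_range(0.5, 0.25, 0.25))  # (0.25, 0.75)

# 'Pacifist' costs, arbitrarily large: every division beats war,
# so the model predicts peace no matter the balance of power.
print(bargaining_range(0.5, 1e9, 1e9))    # (0.0, 1.0)
```

Under ordinary costs the model yields a bounded bargaining range; under arbitrarily large costs the range is the entire pie, which is just the “war never occurs” result described above, however we choose to label the quantities being compared.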
This, in turn, leads to my objection to the claim that “rational choice” affirms “a certain kind of selfishness.” If we interpret this claim loosely enough, it’s possibly true. But, again, if one observes that most people who purchase toolboxes never bother to use anything but the hammer, wouldn’t it be more appropriate to remind everyone that there are screwdrivers in there as well rather than advising that no one should buy a toolbox?
The real question here is whether thinking in terms of utility maximization intrinsically promotes selfishness, irrespective of the content of those utility functions. That’s what PTJ implies in his post, after all. The appropriate way to answer this question would probably be with an experiment, but allow me to register my skepticism in the interim.
My colleagues have often told me how surprised they are that I’m not more selfish; that I’m so generous with my time and that my behavior in committees and departmental meetings is the least “strategic” — read, selfish — of anyone in the department. In truth, I am behaving quite strategically, though perhaps a Kantian would think this says bad things about me. I place less weight on my own personal preferences than those of my students and colleagues in part because I think that’s the right thing to do5 (and I’m capable of thinking such thoughts, surprisingly enough), but also, I confess, in part because I know what people think about formal theorists. Because I know that I need to go out of my way to prove to my colleagues that I’m a decent human being, because they all have really strong priors about me based on the work that I do (which isn’t the least bit offensive). That may not be pure altruism, but if that’s the “certain kind of selfishness” PTJ is worried about, I think a lot of departments would be glad to have more “selfish” faculty members.
Setting my own personal experience aside, I’d note that there’s a great deal of work out there where the actor who is assumed to be maximizing some utility function is not an individual, and so cannot be said to be “selfish” in any traditional sense. I speak not just of states interested in promoting some notion of the national interest, but more so of welfare economics and public choice—works which analyze the behavior of an ideal-typical collective, with a particular focus on what is best for the collective. Granted, much of that work embodies a distinctly utilitarian ethics (and here I do mean that in the sense of Bentham and Mill), but concepts such as positive and negative externalities, so central to welfare economics, are used precisely to illustrate the problems with behaving selfishly. I could go on, but hopefully you get the point.
In sum, I agree that “rational choice” is not the best tool for analyzing preference change. I’m not sure that’s much more damning than saying that hammers are best used for driving nails and screwdrivers for turning screws, but it’s worth acknowledging. As for the claims that models populated by utility-maximizing agents intrinsically rule out the very possibility of ethical behavior, and that the analysis thereof promotes “a certain type of selfishness”, I’m unimpressed. Depending on how one defines “ethical” and “selfish”, these claims are either true in a sense so narrow as to be trivial (representing a scathing critique of a method no one really employs) or they are of rather dubious veracity. I’m not here to argue that everyone should embrace “rational choice” or utility maximization or game theory or whatever. Not by any means. But I would very much like to see people stop (intentionally or otherwise) devaluing the work of others by offering invalid critiques.
1. I realize that the repeated use of scare quotes is a bit of an eyesore, but I remain convinced that the term “rational choice” is very nearly devoid of content. As this post demonstrates, even relatively sophisticated critiques of “rational choice” tend either to be right because the critic defines the relevant terms in such a way that s/he must be right, in which case the criticism only applies to a body of work so narrow that it’s unclear why anyone should care, or to be flat-out wrong (for reasons discussed in this previous post).
2. I agree that “rational choice” is a form of analyticism rather than neopositivism. As he notes, there are those who’ve tried to force game theory’s square peg into neopositivism’s round hole, including many in the EITM movement. But he’s not alone in thinking this is problematic.
3. I think this criticism can be oversold — I’m not aware of any scientific theory that doesn’t take as given at least a few things whose very existence is viewed by other scholars as needing explanation. Again I must insist that if our standard is to avoid all simplifying assumptions, we might as well pack up and go home, because we’re playing a game no one has yet figured out how to win.
4. As Henderson notes, Kant himself said that “the Negroes of Africa have by nature no feeling that rises above the trifling”, which is why Kant “advises us to use a split bamboo cane instead of a whip, so that the ‘negro’ will suffer a great deal of pains (because of the ‘negro’s’ thick skin, he would not be racked with sufficient agonies through a whip) but without dying.”
5. I’ve heard that tenure thing is pretty sweet, and I’m not under any illusions about how heavily anyone will weight my work with graduate students. But I sincerely believe that faculty members have an obligation to help people achieve their intellectual potential, so I never refuse to meet with students who want my help. I’ve had senior colleagues take me out to coffee and tell me, “Listen, this is a state school, not a small liberal arts college. Maybe you could afford to do that at William and Mary, but you can’t here. I’m just looking out for your best interests.”