What Exactly is Rational Choice?

12 June 2013, 2234 EDT

I sometimes surprise people when I say that I have no idea what rational choice is.1  How can a game theorist say such a thing?  Especially one who spends so much time on the internet arguing about rational choice?

Well, of course I have some idea what it is.  The point is that there is no coherent body of work which possesses the properties so frequently attributed to “rational choice.”

Approximately 99% of all statements I’ve seen made about rational choice are demonstrably false.  About half assert that rational choice leaves no room for things I’ve seen incorporated into equilibrium-based game-theoretic models.2  The rest assert that rational choice assumes things that many extant models do indeed assume, but which needn’t be assumed.  If I were to observe that [X] figures prominently in continental European scholarship but receives little to no attention in the Anglosphere, I would be a fool to declare that the English language inherently precludes discussion of [X].  Yet that is almost always what people are doing when they say that “rational choice” assumes this, that, or the other thing.

In order to construct a formal model in which actors maximize (expected) utility, the only thing that absolutely must be assumed is that people Have Preferences that Won’t Cycle.  I call this the HPWC criterion.

That’s it.

Really.

One mistake people make is to assume that such models require us to assume that people are “rational” in the ordinary language sense of the word, which roughly corresponds to assuming that human behavior resembles that of Homo Economicus.3  This is understandable — if I could go back in time and tell every game theorist who ever put word to page that use of the R-word would result in receiving repeated roundhouse-kicks to the ribcage, I would — but it is nonetheless mistaken.  Yes, lots of game-theoretic models do assume that human behavior resembles that of Homo Economicus, but again, it’s just as nonsensical to say that “rational choice” requires such an assumption as it is to say that a concept rarely discussed by scholars in the Anglosphere is one that the English language is not equipped to handle.

Without anyone noticing, apparently, many scholars have analyzed game-theoretic models in which people have trouble controlling their own behavior (ex1, ex2, ex3), hold other-regarding preferences (ex1, ex2, ex3), or fail to collect information they know to be both available and pertinent (ex1, ex2, ex3).4  There are even formal models of identity choice (ex1, ex2, ex3).

That brings me to the other common mistake.  Many people believe that such models view human behavior as the outcome of careful, deliberate, conscious choice.  But the “choice” part of “rational choice” is every bit as misunderstood as the “rational” bit.  What these models necessarily assume is that actions expected to bring about outcomes of greater value are chosen over actions expected to bring about outcomes of lesser value.  Strictly speaking, most models assume that actors always choose the strategy that maximizes their expected utility, but some models merely assume that such strategies are more likely to be chosen.  And most scholars who seek to evaluate the observable implications of equilibrium-based game-theoretic models do so by determining whether outcomes are more likely to occur under conditions where the strategies that would produce them are more likely to maximize expected utility.  Game theorists tend to be relatively uninterested in whether people are more likely to choose A over B when A yields greater expected utility because they sat down and thought carefully about it as opposed to employing reliable heuristics or whatever.
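In case the distinction between “always chosen” and “merely more likely to be chosen” seems fuzzy, here’s a minimal sketch in Python.  The deterministic rule always picks the strategy with the highest expected utility; the probabilistic rule, in the spirit of logit/quantal-response-style choice models, merely makes such strategies more likely.  The function names and the precision parameter are mine, purely for illustration.

```python
import math
import random

def best_response(expected_utilities):
    """Deterministic rule: always choose the strategy with the highest expected utility."""
    return max(expected_utilities, key=expected_utilities.get)

def logit_choice(expected_utilities, precision=2.0):
    """Probabilistic rule: strategies with higher expected utility are merely *more
    likely* to be chosen.  As precision grows, this converges to best_response."""
    weights = {s: math.exp(precision * u) for s, u in expected_utilities.items()}
    r = random.uniform(0, sum(weights.values()))
    for strategy, w in weights.items():
        r -= w
        if r <= 0:
            return strategy
    return strategy  # guard against floating-point rounding

eu = {"A": 1.0, "B": 0.6}
print(best_response(eu))                       # always "A"
print([logit_choice(eu) for _ in range(5)])    # mostly "A", occasionally "B"
```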

One manifestation of this misunderstanding is that “rational choice” or “choice-theoretic” work is often said to favor the agency side of the structure-versus-agency debate.  See, for example, this recent post by Dan Nexon, or the paper it’s based on.  I don’t mean to single my Duck colleagues out, though — the notion that rational choice theorists aren’t particularly interested in structure is quite common.  This is both sad and ironic, given that most discussion of the observable implications of game-theoretic models focuses primarily on how equilibrium behavior changes in response to changes in structural conditions.  In fact, if one were to insist on committing the error of arguing that a language prevents its speakers from discussing that which they just so happen to rarely discuss, it would probably be more accurate to say that “rational choice theorists” put all their emphasis on structure and trivialize the role of agency.  I’ve not only heard people express this very criticism, but I know at least one game theorist who says that studying game theory has made him skeptical of the existence of free will.  If someone could explain to me how “rational choice” can simultaneously be guilty of overemphasizing agency and yet also trivializing it, that would be great.

Consider the following example.  Quinn is in a long-distance relationship.  S/he has plans to go see his/her significant other this weekend.  However, weather reports are looking grim.  Moreover, Quinn and his/her partner have been arguing a lot lately.  Let’s write down a simple decision-theoretic model.  Quinn goes on the trip if and only if EU(go) > u(stay), with EU(go) = pt + (1-p)(a-c) and u(stay) = a, where p is the probability that Quinn arrives at his/her destination, t is the payoff from the couple being together, a is the payoff Quinn receives from spending the weekend alone, and c is the cost of getting stuck in an airport for the weekend or getting in an accident on the road or whatever other tragedy might be wrought by inclement weather.  Our very, very simple model tells us that Quinn will cancel his/her trip if p is sufficiently small (specifically, if p is less than c/(t – a + c), for those playing along at home).  It also tells us that Quinn would cancel if t were sufficiently low (specifically, if t were less than c((1/p) – 1) + a).  In other words, the model allows either structure or agency to bring Quinn to cancel.  The model itself does not give us any reason to consider one factor to be more important than the other in any universal sense, though a decision to cancel at certain values of p and c might well leave Quinn’s partner feeling pretty concerned about the state of their relationship.
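For those playing along at home who’d rather run the arithmetic than take my word for it, here’s a minimal sketch of Quinn’s problem in Python.  The function names and the particular numbers are mine and purely illustrative; the thresholds are just the ones derived above.

```python
def eu_go(p, t, a, c):
    """Expected utility of attempting the trip: arrive with probability p and enjoy t;
    otherwise spend the weekend alone (payoff a) and eat the weather cost c."""
    return p * t + (1 - p) * (a - c)

def quinn_goes(p, t, a, c):
    """Quinn goes if and only if EU(go) exceeds u(stay) = a."""
    return eu_go(p, t, a, c) > a

def p_threshold(t, a, c):
    """Quinn cancels whenever p falls below this (assuming t - a + c > 0)."""
    return c / (t - a + c)

def t_threshold(p, a, c):
    """Quinn cancels whenever t falls below this."""
    return c * (1 / p - 1) + a

# Structure can do the work: grim weather (low p) leads Quinn to cancel...
print(quinn_goes(p=0.3, t=10, a=2, c=4))    # False: trip cancelled
# ...but so can agency: a low value on being together (low t) does the same...
print(quinn_goes(p=0.9, t=2.2, a=2, c=4))   # False: trip cancelled
# ...while decent weather plus a valued relationship sends Quinn to the airport.
print(quinn_goes(p=0.9, t=10, a=2, c=4))    # True
```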

Too straightforward?  Let’s consider another example.

Dan said in his post that “Choice-theoretic approaches tend to treat actors as autonomous from their environments at the moment of interaction, not so experience-near and social-relational alternatives”, where such autonomy implies “that actors are analytically distinguishable from the practices and relations that constitute them.”

Consider the following 2×2 game.

[Figure: payoff matrix for the relational 2×2 game]

Let h_i refer to i’s history of cooperation, with values closer to 1 indicating that i has generally cooperated and values closer to 0 indicating that i has rarely done so.  Let C be some constant greater than 1, reflecting the returns to mutual cooperation, and let D be a constant less than 1, reflecting the tragedy of mutual defection.

Note that the structure of the payoffs implies that the players derive value from cooperating with those who have cooperated with them in the past while deriving value from defecting against those who have defected against them in the past.  In fact, it is straightforward to show that mutual cooperation is sustainable in equilibrium if and only if h_1 ≥ 0.5 and h_2 ≥ 0.5.
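Since the payoffs themselves live in the figure above, here’s one concrete specification, in Python, that is consistent with the verbal description: a player who cooperates gets a payoff proportional to the other side’s history of cooperation (h), a player who defects gets a payoff proportional to the other side’s history of defection (1 – h), and everything is scaled by C when the opponent cooperates and by D when the opponent defects.  Treat that exact functional form as my illustration rather than the only way to fill in the matrix; I chose it because it delivers precisely the condition that mutual cooperation is sustainable if and only if h_1 ≥ 0.5 and h_2 ≥ 0.5.

```python
def payoff(my_action, their_action, their_history, C=2.0, D=0.5):
    """Illustrative payoff: cooperating pays in proportion to the other side's history
    of cooperation (h), defecting in proportion to their history of defection (1 - h);
    play against a cooperator is scaled by C > 1, play against a defector by D < 1."""
    match_term = their_history if my_action == "C" else 1 - their_history
    scale = C if their_action == "C" else D
    return match_term * scale

def mutual_cooperation_is_equilibrium(h1, h2, C=2.0, D=0.5):
    """(C, C) is a Nash equilibrium iff neither player gains by unilaterally defecting."""
    player1_stays = payoff("C", "C", h2, C, D) >= payoff("D", "C", h2, C, D)
    player2_stays = payoff("C", "C", h1, C, D) >= payoff("D", "C", h1, C, D)
    return player1_stays and player2_stays

# Matches the claim in the text: sustainable iff both histories are at least 0.5.
print(mutual_cooperation_is_equilibrium(h1=0.8, h2=0.6))  # True
print(mutual_cooperation_is_equilibrium(h1=0.8, h2=0.3))  # False
```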

In other words, what we have here is a game-theoretic model in which utility-maximizing behavior not only depends on past patterns of interaction (as it does in many iterated games), but where the value of cooperating or defecting is intrinsically relational. Now, it’s not at all clear to me that this model tells us anything particularly insightful. If I were really trying to advance our understanding of how social relations affect patterns of international cooperation, I’d want a more complicated model. But that wasn’t the point. I was only trying to demonstrate that it’s entirely possible to construct game-theoretic models that do not make the assumptions Dan and PTJ associate with “choice-theoretic” work.5

To sum up, the assumption that people maximize (expected) utility is much weaker than many people realize.  Again, so long as people have preferences, and those preferences don’t cycle, we’re in business.  To be sure, there are situations where even these assumptions don’t apply.  Sometimes people don’t know what they want, and we all hate going out to dinner with these people.  But that’s not what most people have in mind when they talk about the limitations of “rational choice”.

Mostly, they’re telling people to avoid learning a language because most of the people who speak it don’t talk about the right types of things.6

 

UPDATE: I should have explained what it means for preferences to cycle.  The idea is this: if I prefer A to B and B to C, but prefer C to A, then my preferences cycle.  Forced to choose between any two of these three options, I can do so, but there’s no meaningful sense in which I hold an overall preference, because if all three options are put on the table, I can’t pick a favorite.  If I preferred A to B and B to C and A to C, on the other hand, then I’d have coherent preferences and there’d be no problem.
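For those who like to see these things mechanically, here’s a minimal sketch of a cycle check in Python.  The representation of preferences as a set of “x is strictly preferred to y” pairs, and the function itself, are mine, purely for illustration.

```python
from itertools import permutations

def preferences_cycle(strictly_prefers):
    """strictly_prefers is a set of (x, y) pairs meaning 'x is strictly preferred to y'.
    Returns True if some three options form a cycle like the one described above."""
    options = {option for pair in strictly_prefers for option in pair}
    for a, b, c in permutations(options, 3):
        if (a, b) in strictly_prefers and (b, c) in strictly_prefers and (c, a) in strictly_prefers:
            return True
    return False

# A over B, B over C, but C over A: the preferences cycle.
print(preferences_cycle({("A", "B"), ("B", "C"), ("C", "A")}))  # True
# A over B, B over C, and A over C: coherent, no cycle.
print(preferences_cycle({("A", "B"), ("B", "C"), ("A", "C")}))  # False
```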

There are other points I should have addressed in the post that have come up in the comments.  I encourage you to read them.

1. Following convention, I use the term “rational choice” here to refer to the body of work sometimes also known as “rational choice theory”, not to the act of making choices which are rational.

2. Judging by the way people use the term, it seems clear that not all “rational choice” work is game-theoretic, but any work containing a formal model in which one or more actors maximize (expected) utility is “rational choice” more or less by definition.

3.  See this previous post for a discussion of that assumption.

4. Notice that most of the examples I linked to were published in economics, where one might think the slavish commitment to Homo Economicus would be strongest.

5. Note that the quote I pulled out of Dan’s post only offers a claim about what “choice-theoretic” work “tends to” do. However, I have frequently encountered stronger versions of this claim and so thought it worth providing an example of what a counterexample might look like.

6. When I say “mostly”, I mean “mostly.” I understand that many claims made about “rational choice” are not intended in this light. But I do think it’s safe to say that most are.