The Duck of Minerva

The Duck Quacks at Twilight

Better Political Forecasts through Crowdsourcing

August 26, 2009

Dan Drezner links to a recent article by Philip Tetlock on the difficult business of political forecasting. Tetlock evaluates this troubled pastime by reviewing three recent books, each of which claims to provide a better way to see the future of politics. His own research (Expert Political Judgment: How Good Is It? How Can We Know?, a fantastic book that you really should read) offers solid reasons to be skeptical of any pronouncement by 'experts' that they have some kind of proprietary knowledge about the future.

While I think his critique of the three books and of political forecasting in general is quite good, I find one of his suggestions for improving the practice lacking; namely, crowdsourcing. My issue does not lie with the practice of crowdsourcing itself, but rather with the way that Tetlock describes it.

After his review of the three books (and the approaches to forecasting each represents), Tetlock offers a powerful suggestion for improving the prediction business: crowdsourcing political forecasts.

Aggregation helps. As financial journalist James Surowiecki stressed in his insightful book The Wisdom of Crowds, if you average the predictions of many pundits, that average will typically outperform the individual predictions of the pundits from whom the averages were derived. This might sound magical, but averaging works when two fairly easily satisfied conditions are met: (1) the experts are mostly wrong, but they are wrong in different ways that tend to cancel out when you average; (2) the experts are right about some things, but they are right in partly overlapping ways that are amplified by averaging. Averaging improves the signal-to-noise ratio in a very noisy world. If you doubt this, try this demonstration. Ask several dozen of your coworkers to estimate the value of a large jar of coins. When my classes do this exercise, the average guess is closer to the truth than 80 or 90 percent of the individual guesses. From this perspective, if you want to improve your odds, you are better-off betting not on George Friedman but rather on a basket of averaged-out predictions from a broad ideological portfolio of George Friedman–style pundits. Diversification helps.
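The averaging logic Tetlock describes is easy to check with a toy simulation. The sketch below is my own illustration, not anything from the original article: the "true" jar value, the noise level, and the number of guessers are all assumptions chosen for demonstration. It models each guesser's estimate as the truth plus independent noise, then measures what fraction of individual guesses the crowd average beats.

```python
import random

def crowd_vs_individuals(true_value=500, n_guessers=50, noise_sd=150, seed=42):
    """Toy coin-jar demonstration: each guesser errs independently,
    so errors partly cancel when the guesses are averaged.

    Returns the fraction of individual guessers that the crowd
    average outperforms (lower absolute error than).
    """
    rng = random.Random(seed)
    # Independent guesses: truth plus idiosyncratic Gaussian noise
    guesses = [true_value + rng.gauss(0, noise_sd) for _ in range(n_guessers)]
    crowd_guess = sum(guesses) / len(guesses)
    crowd_error = abs(crowd_guess - true_value)
    # Count individuals whose error exceeds the crowd's error
    beaten = sum(abs(g - true_value) > crowd_error for g in guesses)
    return beaten / n_guessers

share = crowd_vs_individuals()
print(f"Crowd average beats {share:.0%} of individual guessers")
```

With independent errors, the average typically beats the large majority of individuals, in line with the 80 to 90 percent figure Tetlock reports from his classes. Note that the cancellation depends on independence: if the guessers shared a common bias, the average would inherit it, which is exactly the worry about pundit echo chambers discussed below.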

As Dan points out in his post, this suggestion potentially violates two of the necessary conditions for successful crowdsourcing: the independence of the experts and the diversity of their opinions. Dan says it best:

One of the accusations levied against the foreign policy community is that because they only talk to and read each other, they all generate the same blinkered analysis. I’m not sure that’s true, but it would be worth conducting this experiment to see whether a Village of Pundits does a better job than a single pundit.

I would actually go farther than Dan here. The problem with this approach isn't simply that political scientists and pundits may conduct their analysis in an echo chamber (although that is definitely an issue), but rather that for the crowdsourcing of these issues to work properly you would want as diverse a crowd as possible; meaning, you would want to include individuals from outside of political science and the political pundit community.

In addition to an effective aggregation mechanism, James Surowiecki points to three necessary conditions for successful crowdsourcing:

  1. Diversity of opinion
  2. Independence of those opinions
  3. Decentralization (i.e. ability to lean on local knowledge)

Political scientists and pundits do not hold a monopoly on useful insights into the world of politics. Other actors have an interest in understanding and predicting what will happen politically, including financial analysts, corporations, journalists, and politicians and citizens around the globe. Each of these groups likely brings its own perspective and lens for analyzing political outcomes to the table, and from a crowdsourcing perspective that is precisely what one would want (diversity, independence, and decentralization). The answer isn't simply to gather more opinions from political pundits, but rather to gather opinions from additional actors who represent an even greater diversity of views.

I agree with Dan that it would be worthwhile to set up some kind of experiment to determine the optimal composition of a political forecasting crowd. I smell a side project a brewin’….

[Cross-posted at bill | petti]


Petti is Associate Director of Insights and Analytics at Alexion. Previously, he served as Lead Data Scientist in the Decision Sciences group at Maritz Motivation and as a Global Data Strategist and Subject Matter Expert for Gallup.