In some sense, it is with a heavy heart that I write my last permanent-contributor blog post at the Duck. I’ve loved being with the Ducks these past years, and I’ve appreciated being able to write weird posts, often off the beaten track of mainstream political science. If you have followed my work over the years, you will know that I sit at an often-uncomfortable divide between scholarship and advocacy. I’ve been one of the leading voices on lethal autonomous weapon systems, both at home in academia and abroad at the United Nations Convention on Certain Conventional Weapons and the International Committee of the Red Cross, as well as in advising various states’ governments and militaries. I’ve also been thinking deeply over the last few years about how the rise, acceptance and deployment of artificial intelligence (AI) in both peacetime and wartime will change our lives. For these reasons, I’ve decided to leave academia “proper” and work in the private sector for one of the leading AI companies in the world. This decision means that I will no longer be able to blog as freely as I have in the past. As I leave, I’ve been asked to give a sort of “swan song” for the Duck and those who read my posts. Here is what I can say going forward for the discipline, as well as for our responsibilities as social scientists and human beings.
First, political science needs to come to grips with the fact that AI is going to radically change not only how we do research but how we even think about problems. If we are not careful, AI will quite literally take our jobs. Substantial efforts are already underway in state government labs, defense labs, and other fora to causally model complex social interactions, such as the onset of war and socio-economic-political dynamics, and to learn how to manipulate them to boot. We political scientists value parsimony; we love clear models with few causal variables and lots of data. The reality, however, is that a machine learning algorithm needs none of it. In some instances it is model-free: it simply learns from vast, and I mean vast, amounts of data. If one has a neural network with hundreds of layers and millions of data points, the system has very little need for humans. Our best datasets are a drop in the bucket. We look like Amish farmers driving horse-drawn buggies as these new AI gurus pull up beside us in their self-driving Teslas. Moreover, data at this scale remains in the hands of the private sector’s big six: Amazon, Facebook, Google, Microsoft, Apple and Baidu.
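The contrast is easy to make concrete. A parsimonious model commits in advance to a handful of named causal variables; a neural network just fits whatever structure the data happens to contain. A minimal sketch, using scikit-learn on invented synthetic data (every variable and number here is illustrative, not a real conflict dataset):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic "dyad-year" data: 5,000 observations, 20 features.
# The true relationship is nonlinear and is never written down as a model.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = np.sin(X[:, 0]) * X[:, 1] + 0.5 * X[:, 2] ** 2 \
    + rng.normal(scale=0.1, size=5000)

# No causal specification, no parsimony: the network learns from data alone.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X[:4000], y[:4000])
print(net.score(X[4000:], y[4000:]))  # R^2 on held-out data
```

The point is not that this beats a theory-driven model, but that it recovers the nonlinear interaction without ever being told it exists, which is exactly the capacity the paragraph above describes.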
This is not to say that we have nothing to offer and ought to hang up our hats and go home. Far from it. But long dead are the days of worrying about degrees-of-freedom problems (rejoice, qualitative folks!). We have immense and rich knowledge of complex problems and can cut through the noise in meaningful and nuanced ways. But we must not eschew or deride the coming AI tide. We must embrace it. We must think seriously about what it will mean for deterrence, war, international political economy and, frankly, power.
Thankfully, for the many of us versed in methods and methodology, this is not a heavy lift. Much of the work in AI is, at bottom, math, and we are good at math. We can understand what is happening in a partially observable Markov decision process. We can understand control and optimization. And we can understand the limitations of this type of perspective. This is our greatest contribution going forward. Let us bring to bear our knowledge of statistics, mathematics and social phenomena on problems that currently lack nuance, subtlety, area knowledge and political savvy. I highly recommend that we all go to our respective libraries and begin teaching ourselves algorithms and computer science.
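For anyone who wants a concrete toehold, the belief update at the heart of a partially observable Markov decision process is just Bayes’ rule applied over hidden states, machinery any quantitative social scientist already knows. A toy sketch with two hidden states and made-up transition and observation probabilities (all the numbers and the scenario are invented for illustration):

```python
import numpy as np

# Toy POMDP: hidden state is "adversary mobilizing" (1) or "not" (0).
# T[i, j] = P(next state j | current state i)
# O[j, o] = P(observation o | state j)
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
O = np.array([[0.7, 0.3],   # quiet state mostly yields "calm" signals (o=0)
              [0.4, 0.6]])  # mobilizing state tilts toward "alarm" signals (o=1)

def belief_update(belief, observation):
    """One step of Bayesian filtering: predict with T, correct with O."""
    predicted = belief @ T                      # prior over the next state
    posterior = predicted * O[:, observation]   # weight by observation likelihood
    return posterior / posterior.sum()          # renormalize to a distribution

b = np.array([0.5, 0.5])        # start maximally uncertain
for obs in [1, 1, 0, 1]:        # a sequence of noisy signals (1 = "alarm")
    b = belief_update(b, obs)
print(b.round(3))
```

A full POMDP then chooses actions against this belief rather than the unobservable state, which is the formal version of making policy under uncertainty.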
Second, we need to stop treating engagement with governments or international organizations (like the UN, the ICRC, or NGOs) as beneath us. It counts. We have impact. Moreover, if we give balanced, thoughtful and well-researched advice to these organizations, so much the better for the world. We must dispense with the idea that a faculty member who takes the time to travel to the other side of the world to give testimony before 180 states parties is doing something unimportant to our work. That idea is completely backwards. We congratulate the scholar who studies the meeting, yet condemn the scholar who participates in it. This needs to stop. The world’s problems are too big for us to sit in ivory towers worrying about upsetting various apple carts. Give credit where credit is due, and the incentive structure will change.
Finally, we all say that interdisciplinary scholarship is great – “except when we need to hire or tenure it.” I would like to take this opportunity to publicly admonish that mantra. The best work is done at the boundaries of disciplines, whether we collaborate or teach ourselves something new. The very word “discipline” implies punishment, and when we are “inter” or “multi” disciplinary, we get punished by multiple groups. I am hard pressed to understand why we cannot come up with metrics, processes and procedures to evaluate the kind of work we all claim to like. Perhaps, again, it will come back to AI. When we have AI capable of machine reading and interpretation, it will find novel patterns across journal articles, books and disciplines, and it will not be hung up on status, recognition or title. It will care little whether the press is a top university press or a lower-status one. It will just find solutions. So instead of waiting for such an AI to make connections we think are spurious, perhaps we ought to begin engaging with our colleagues in the “hard” sciences and the humanities. We ought to reach out to computer scientists, engineers and neuroscientists.
This entire post is directed, in some ways, at “traditional” political scientists. I do not do this without purpose. But I’d like to conclude with one small observation. Political theorists and philosophers are not immune to my call either. In fact, they are the best situated to respond to it. Critical thinking, logical inference and the uncovering of assumptions are exactly what the tech sector needs, and exactly how we can be most useful and productive in the years to come. If we have obligations as scholars not merely to seek truth, but to make the world a better place and to be shepherds to our students, then it is incumbent upon us to reach back through the ages and find wisdom from all those who came before us; to apply that knowledge to our present situation; to find what works and what does not; and to become the new sages of our age. We political scientists and political theorists stand at a very important moment. We can shift, change, and take our skills to the next level as technology pushes forward. Or we can become feudal, fight over meagre resources, and sabotage our colleagues’ and students’ careers out of insecurity, rivalry, bickering and sub-disciplinary turf lines, poring over poor data on the same five variables again and again. It is my sincerest hope that we listen to the better angels of our nature. Let us take up our responsibilities as scholars, mentors, friends and humans to make the world a better place, and let us usher in an era of human prosperity and beneficial AI.
Heather, we are so sad to lose you! I can’t think of anyone Google needs more. And academia (and the Duck!) will still be here if you decide to come back! I’d like to think our field is changing in all the ways you suggest – but perhaps as you say too gradually to keep the smartest, most cutting-edge thinkers like you. It’s our loss. I think I speak for us all when I say, you’ll be deeply missed.
Thank you Charli. I will still come to ISA! :-)
I wonder whether our discipline – political science – will adapt fast enough, before neighbouring disciplines, such as computational linguistics, are able to answer the large-scale questions that we would like to answer but cannot. It would mean investing in technological infrastructure and in skill sets that are not yet widely valued.
And I think one problem is the inability of political science to think big: the age of AI requires something like a CERN, and an equivalent of the Large Hadron Collider, for political science – bringing together, as you correctly suggest, colleagues within our discipline but also beyond it to answer questions at a planetary scale. Will we be able to do this? More thoughts on this in a post from February: