In some sense, it is with a heavy heart that I write my last permanent contributor blog post at the Duck. I’ve loved being with the Ducks these past years, and I’ve appreciated being able to write weird blogs, often off the track from mainstream political science. If any of you have followed my work over the years, you will know that I sit at an often-uncomfortable division between scholarship and advocacy. I’ve been one of the leading voices on lethal autonomous weapon systems, both at home in academia and abroad at the United Nations Convention on Certain Conventional Weapons and the International Committee of the Red Cross, as well as advising various states’ governments and militaries. I’ve also been thinking very deeply over the last few years about how the rise, acceptance and deployment of artificial intelligence (AI) in both peacetime and wartime will change our lives. For these reasons, I’ve decided to leave academia “proper” and work in the private sector for one of the leading AI companies in the world. This decision means that I will no longer be able to blog as freely as I have in the past. As I leave, I’ve been asked to give a sort of “swan song” for the Duck and those who read my posts. Here is what I can say going forward for the discipline, as well as for our responsibilities as social scientists and human beings.
Sometimes when we look for a rallying call to join us as humans around a common cause, or to show us our equal vulnerability, we reach for trite sayings like “Common sense says that all men put their pants on one leg at a time.” This is supposed to reassure us that we are all equal in the most “animalistic” of ways (because, you know, animals wear pants).
Here is the problem and the reality, though: I cannot buy jeans that are not skinny jeans… shocking. What does that mean for the one-leg mantra? Well… as a woman, and a woman living in a world that tells most women they have to be attractive… I can’t actually help but buy skinny jeans. SO! How do I—as feminist, as subject, as object—put my pants on? Truth be told… I put them on TWO LEGS at a time.
Where does this pseudo-rant come from? From watching the decline of subtle thinking about gender, sex, and equality. After witnessing the tweet storm from President Trump about the ban on transgender military service, I think it is high time that we encourage reflection on all of the ways in which we as a society privilege a particular way of thinking about what is “normal.” For as Foucault teaches us, what is “normal” is merely the norm of behavior that coerces us into acting according to someone else’s standards. We self-censor because we want to be acceptable to the rest of society. We coerce ourselves into being something that we are not, merely for the approval or acceptance of the rest.
It is not merely women who face this fate, but men as well. Sex and gender become ropes with which we bind ourselves. Thus when we insist that all men ought to X, and all women ought to Y, we force a particular worldview on those whose lives sit at intersections. Intersectionality, heterogeneity, and diversity are actually what produce progress. Beyond the brute fact that this sort of diversity allows for the physical evolution of a species, we should also acknowledge that it produces beauty. Just as Plato reminds us that democracy is the “most beautiful” of all constitutions, like a “many colored cloak,” because it has the most diverse population of people, so too is a society made beautiful by its diversity of roles, tastes, pursuits, and genders. Gender is not binary, though we see it most clearly when we put its poles in opposition. Gender is a practice, a performance, and a social construct. To prohibit or to “ban” a gender from a job is not only a violation of one’s basic rights to freedom of expression and speech, but an undercutting of the basic values upon which this country was founded.
So the next time someone wants to say “men are from Mars, women are from Venus,” or that “we all put our pants on one leg at a time,” I hope that you reflect on the fact that these seemingly innocuous tropes shackle us. For it is not true that sex determines how one thinks or acts. It is not true that all humans put their pants on one leg at a time. Nope: I, as a woman who identifies with femininity, try to buy jeans that fit me in a feminine way. But due to some interesting choices by society, that is, by men and women in the majority, some pants force us to sit down and put our pants on two legs at a time.
Every day, it seems, we hear more about the advancements of artificial intelligence (AI), the amazing progress in robotics, and the need for greater technological improvements in defense to “offset” potential adversaries. When all three of these arguments get put together, there appears to be some sort of magic alchemy that results in wildly fallacious, and I would say pernicious, claims about the future of war. Much of this has to do, ultimately, with a misunderstanding of the limitations of technology as well as an underestimation of human capacities. The prompt for this round of techno-optimism debunking is yet another specious claim about how robotic soldiers will be “more ethical” and thus “not commit rape […] on the battlefield.”
There are actually three lines of thought here that need unpacking. The first involves the capabilities of AI in relation to “judgment.” As the philosopher in question contends, “I don’t think it would take that much for robot soldiers to be more ethical. They can make judgements more quickly, they’re not fearful like human beings and fear often leads people making less than optional decisions, morally speaking [sic].” This sentiment about speed and human emotion (or the lack thereof) has underpinned much of the debate about autonomous weapons for the last decade (if not more). Dr. Hemmingsen’s views are not original. However, such views are not grounded in reality.
Today’s revelation that Mike Flynn resigned from his post as National Security Advisor is another strong sign that the struggle between Truth and Politics is not a foregone conclusion. Indeed, we ought to celebrate the fact that when Flynn lied about speaking with the Russian ambassador, and the lie was made public, he was forced to resign. This victory notwithstanding, we still must be extremely vigilant against the Trump administration’s attack on Truth. For the administration apparently knew some time ago that he had lied, and it was only with increased public scrutiny that Flynn submitted his resignation. Had that not come to light, the administration appears to have had no compunction about employing liars.
In what follows, I will briefly argue that Hannah Arendt’s insights into Truth and Politics, as well as her understanding of power, authority, violence and persuasion are all key to helping us resist Trump and his acolytes. We are in a fragile time where the balance between freedom, reason, and truth may be overrun by domination, nonsense, and lies. We are on the precipice of what Arendt calls “organized lying,” where the community, or at least the governance structures and a portion of the community, seek to systematically erode any claims to factual or rational truths, and with that to unmoor the very foundations of our state.
In 2013, Bannon is reported to have told Ron Radosh of the Daily Beast that he was a Leninist. He is quoted as saying, “Lenin wanted to destroy the state, and that’s my goal too. I want to bring everything crashing down, and destroy all of today’s establishment.” Yet this is such an odd thing to tell someone, particularly a journalist, when one’s very wealth, political power and cachet depend on the very institution that he wants to destroy. Lenin, after all, wanted to bring down capitalism and the bourgeoisie to usher in the proletariat as leaders of a communist government and society. Lenin strongly believed in Marx’s Communist Manifesto, and with it the belief that the workers of the world, and not the owners of capital, must have the power. Only when all workers—men and women alike—are seen as equal and free will true freedom and democracy reign. Here is the problem, as I see it, with Bannon: he isn’t a Leninist, a Marxist, or a socialist. He is an incoherent miscellany of ideas, none of which he understands fully and all of which are dangerous when combined in a haphazard manner.
Having recently attended a workshop and conference on beneficial artificial intelligence (AI), I can report that one of the overriding concerns is how to design beneficial AI. To do this, an AI’s objectives need to be aligned with human values; this is known, following Stuart Russell, as the “Value Alignment Problem.” It is a “problem” in the sense that, given the way one has to specify a value function to a machine, the AI may try to maximize that value to the detriment of other socially useful or even noninstrumental values.
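To make the value alignment problem concrete, here is a deliberately minimal sketch of my own (the action names and numbers are entirely hypothetical, not drawn from Russell’s work): an agent that maximizes only its specified reward will happily choose an action with an enormous cost we forgot to write into its objective.

```python
# Toy illustration of the value alignment problem: an agent that
# maximizes only its specified objective ignores any value we forgot
# to encode. All names and numbers here are hypothetical.

# Each candidate action: (specified_reward, unmodeled_social_cost)
actions = {
    "careful_plan":  (8, 0),   # decent reward, no side effects
    "reckless_plan": (10, 9),  # slightly higher reward, huge unstated cost
}

def agent_choice(actions):
    """Pick the action maximizing ONLY the specified reward."""
    return max(actions, key=lambda a: actions[a][0])

def aligned_choice(actions, cost_weight=1.0):
    """Pick the action maximizing reward minus the (now-specified) cost."""
    return max(actions, key=lambda a: actions[a][0] - cost_weight * actions[a][1])

print(agent_choice(actions))    # the misaligned agent prefers "reckless_plan"
print(aligned_choice(actions))  # once the cost is in the objective: "careful_plan"
```

The hard part, of course, is exactly what the toy assumes away: knowing which costs to write into the objective, and at what weight, before the machine acts.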
This post is a co-authored piece:
Heather M. Roff, Jamie Winterton and Nadya Bliss of Arizona State’s Global Security Initiative
We’ve recently been informed that the Clinton campaign relied heavily on an automated decision aid to inform senior campaign leaders about likely scenarios in the election. This algorithm—known as “Ada”—was a key component, if not “the” component, in how senior staffers formulated campaign strategy. Unfortunately, we know little about the algorithm itself. We do not know all of the data that was used in the various simulations it ran, or what its programming looked like. Nevertheless, we can be fairly sure that demographic information, prior voting behavior, prior election results, and the like were among the variables, as these are stock for any social scientist studying voting behavior. What is more interesting, however, is that we can be fairly sure there were other, less straightforward variables that ultimately contributed to the campaign’s inability to see the potential losses in states like Wisconsin and Michigan, and the near loss of Minnesota.
But to see why “Ada” didn’t live up to her namesake (Ada, Countess of Lovelace, who is the progenitor of computing) is to delve into what an algorithm is, what it does, and how humans interact with its findings. It is an important point to make for many of us trying to understand not merely what happened this election, but also how increasing reliance on algorithms like Ada can fundamentally shift our politics and blind us to the limitations of big data. Let us begin, then, at the beginning.
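One way such a blind spot can arise is worth sketching in code. The following toy Monte Carlo simulator is entirely my own invention: the state margins and the error model are made up, and we know nothing of Ada’s actual internals. It shows only how one modeling assumption, treating state polling errors as independent rather than correlated, can make a simultaneous multi-state loss look like a one-in-a-thousand fluke.

```python
import random

# A deliberately toy Monte Carlo election simulator. The margins and
# error model are hypothetical and have nothing to do with the real
# "Ada." The point: if state polling errors share a common component,
# losing several "safe" states at once is far more likely than a model
# of independent errors would suggest.

random.seed(0)

# Invented polling leads (in points) for three states
margins = {"WI": 5.0, "MI": 4.0, "MN": 6.0}

def triple_loss_probability(margins, shared_error, state_error, trials=100_000):
    """Estimate the chance of losing ALL the listed states at once."""
    losses = 0
    for _ in range(trials):
        national_miss = random.gauss(0, shared_error)  # error common to every state
        if all(lead + national_miss + random.gauss(0, state_error) < 0
               for lead in margins.values()):
            losses += 1
    return losses / trials

# Assume state errors are fully independent: a triple loss looks like a fluke.
p_independent = triple_loss_probability(margins, shared_error=0.0, state_error=4.0)
# Assume one shared polling miss plus smaller state noise: far more likely.
p_correlated = triple_loss_probability(margins, shared_error=4.0, state_error=2.0)

print(p_independent)
print(p_correlated)
```

Under these invented numbers the correlated-error model assigns the triple loss a probability well over an order of magnitude higher than the independence model, even though each state’s individual forecast barely changes. A human reading only the headline win probability would never see the difference.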
Last week I was able to host and facilitate a multi-stakeholder meeting of governments, industry and academia to discuss the notions of “meaningful human control” and “appropriate human judgment” as they pertain to the development, deployment and use of autonomous weapons systems (AWS). These two concepts presently dominate discussion over whether to regulate or ban AWS, but neither concept is fully endorsed internationally, despite work from governments, academia and NGOs. On one side many prefer the notion of “control,” and on the other “judgment.”
Yet what has become apparent from many of these discussions, my workshop included, is that there is a need for an appropriate analogy to help policy makers understand the complexities of autonomous systems and how humans may still exert control over them. While some argue that there is no analogy to AWS, and that thinking in this manner is unhelpful, I disagree. There is one unique example that can help us to understand the nuance of AWS, as well as how meaningful human control places limits on their use: marine mammal systems.
Rousseau once remarked that “It is, therefore, very certain that compassion is a natural sentiment, which, by moderating the activity of self-esteem in each individual, contributes to the mutual preservation of the whole species” (Discourse on Inequality). Indeed, it is compassion, and not “reason,” that keeps this frail species progressing. Yet this ability to be compassionate, which is by its very nature an other-regarding ability, is (ironically) the other side of the same coin as comparison. Comparison, or perhaps “reflection on certain relations” (e.g. small/big; hard/soft; fast/slow; scared/bold), also carries the degenerative features of pride and envy. These twin vices, for Rousseau, are the root of many of the evils in this world. They are tempered by compassion, but they engender the greatest forms of inequality and injustice.
Rousseau’s insights ought to ring true in our ears today, particularly as we attempt to create artificial intelligences to overtake or mediate many of our social relations. Recent attention given to “algorithmic bias,” where the algorithm for a given task draws on either biased assumptions or biased training data and yields discriminatory results, is, I would argue, working the problem of reducing bias from the wrong direction. Many, the White House included, are presently paying much attention to how to eliminate algorithmic bias, or in some instances to how to solve the “value alignment problem” and thereby indirectly eliminate it. Why does this matter? Allow me a brief technological interlude on machine learning and AI to illustrate why eliminating this bias (à la Rousseau) is impossible.
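The mechanism is easy to see in a toy model. The sketch below is my own illustration with invented data, not anyone’s production system: a naive “predictor” trained on skewed historical hiring decisions does not learn anything about the applicants; it learns, and then faithfully reproduces, the comparative judgments already baked into its training data.

```python
from collections import Counter

# Toy illustration (hypothetical data) of how a model inherits the bias
# of its training data: a naive hire/reject predictor trained on skewed
# historical decisions simply reproduces that skew.

# Invented historical records: (group, past_decision)
training_data = (
    [("group_a", "hire")] * 80 + [("group_a", "reject")] * 20 +
    [("group_b", "hire")] * 20 + [("group_b", "reject")] * 80
)

def train(records):
    """Learn, for each group, the most common historical decision."""
    counts = {}
    for group, decision in records:
        counts.setdefault(group, Counter())[decision] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(training_data)
print(model["group_a"])  # the model learned the historical skew,
print(model["group_b"])  # not any fact about the applicants
```

Real machine-learning systems are vastly more sophisticated than this frequency count, but the structural point carries over: the comparisons in the data are the raw material of the learning, so the bias is not an add-on to be stripped away; it is built into what the system learns from.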
A common argument made in favor of the use of robotics to deliver (lethal) force is that the violence is mediated in such a way that it naturally de-escalates a situation. In some versions, this is because the “robot doesn’t feel emotions,” and so is not subject to fear or anger. In other strands, the argument is that, due to distance in time and space, human operators are able to take in more information and make better judgments, including judgments to use less-than-lethal or nonlethal force. These debates have, up until now, mostly occurred with regard to armed conflict. However, with the Dallas police chief’s decision to use a bomb disposal robot to deliver lethal force against the Dallas gunman, we have arrived at a new dimension of this discussion: domestic policing.
Now, I am not privy to all of the details of the Dallas police force, nor am I going to argue that the decision to use lethal force against Micah Johnson was not justified. The ethics of self- and other-defense would hold that Mr. Johnson’s actions and his continued posing of a lethal and imminent threat meant that officers were justified in using lethal force to protect themselves and the wider community. Moreover, state and federal law allows officers to use “reasonable” amounts of force, and not merely the minimal amount of force, to carry out their duties. Thus I am not going to argue the ethics or the legality of the use of a robot to deliver a lethal blast to an imminent threat.
What is of concern, however, is how the arguments used in favor of increased use of robotics in situations of policing (or war) fail to take into consideration psychological and empirical facts. If we take these into account, what we might glean is that the trend actually goes in the other direction: the availability and use of robotics may escalate the level of force used by officers.
The common understanding in military circles is that the more data one has, the more information one possesses. More information leads to better intelligence, and better intelligence produces greater situational awareness. Sun Tzu rightly understood this cycle two millennia ago: “Intelligence is the essence in warfare—it is what the armies depend upon in their every move.” Of course, for him, intelligence could only come from people, not from various types of sensor data, such as radar signatures or a ship’s pings.
Pursuing the data-information-intelligence chain is the intuition behind the newly espoused “Kill Web” concept. Unfortunately, however, there is scant discussion about what the Kill Web actually is or entails. We have glimpses of the technologies that will comprise it, such as integrating sensors and weapons systems, but we do not know how it will function or the scope of its vulnerabilities.
As many who read this blog will note, I am often concerned with the impact of weapons development on international security, human rights and international law. I’ve spent much time considering whether autonomous weapons violate international law, will run us headlong into arms races, or will give some states incentives to oppress their peoples. Recently, however, I’ve started to think a bit less about future (autonomous) weapons and a bit more about new configurations of existing (semi-autonomous) weapons, and what those new configurations may portend. One article that came out this week in Defense One really piqued my interest in this regard: “Why the US Needs More Weapons that can be Quickly and Easily Modified.”
In 1941, Heinrich Himmler, one of history’s most notorious war criminals and mass murderers, was faced with an unexpected problem: he could not keep using SS soldiers to murder the Jewish population because the SS soldiers were breaking psychologically. As August Becker, a member of the Nazi gas-van program, recalls:
“Himmler wanted to deploy people who had become available as a result of the suspension of the euthanasia programme, and who, like me, were specialists in extermination by gassing, for the large-scale gassing operations in the East which were just beginning. The reason for this was that the men in charge of the Einsatzgruppen [SS] in the East were increasingly complaining that the firing squads could not cope with the psychological and moral stress of the mass shootings indefinitely. I know that a number of members of these squads were themselves committed to mental asylums and for this reason a new and better method of killing had to be found.”
Much of the present debate over autonomous weapons systems (AWS) focuses on their use in war. On one side, scholars argue that AWS will make war more inhumane (Asaro, 2012), that the decision to kill must be a human being’s choice (Sharkey, 2010), or that they will make war more likely because conflict will be less costly to wage with them (Sparrow, 2009). On the other side, scholars argue that AWS will make war more humane, as the weapons will be better at upholding the principles of distinction and proportionality (Müller and Simpson, 2014), as well as providing greater force protection (Arkin, 2009). I would, however, like to look at a different dimension: authoritarian regimes’ use of AWS for internal oppression and political survival.
When the Soviets launched Sputnik in 1957, the US was taken off guard. Seriously off guard. While Eisenhower didn’t think the pointy satellite was a major strategic threat, the public perception was that it was. The Soviets could launch rockets into space, and if they could do that, they could easily launch nuclear missiles at the US. So, aside from a damaged US ego about losing the “space race,” the strategic landscape shifted quickly and the “missile gap” fear was born.
The US’s “strategic surprise” and the subsequent public backlash caused the US to embark on a variety of science and technology ventures to ensure that it would never face such surprise again. One new agency, the Advanced Research Projects Agency (ARPA), was tasked with generating strategic surprise – and guarding against it. While ARPA became DARPA (the Defense Advanced Research Projects Agency) in the 1970s, its mission did not change.
DARPA has been, and still is, the main source of major technological advancement for US defense, and we would do well to remember its primary mission: to prevent strategic surprise. Why, one might ask, is this important to students of international affairs? Because technology has always been one of the major variables (sometimes ignored) that affect relations between international players. Who has what, what their capabilities are, whether they can translate those capacities into power, whether they can reduce uncertainty and the “fog and friction” of war, whether they can predict future events, whether they can understand their adversaries, and on and on the questions go. But at base, we utilize science and technology to pursue our national interests and answer these questions.
I recently brought attention to the DoD’s new “Third Offset Strategy” in my last post. This strategy, I explained, is based on the assumption that scientific achievement and the creation of new weapons and systems will allow the US to maintain superiority and never (again) fall victim to strategic surprise. Like the first and second offsets, the third seeks to leverage advancements in physics, computer science, robotics, artificial intelligence, and electrical and mechanical engineering to “kick the crap” out of any potential adversary.
Yet, aside from noting these requirements, what, exactly, would the US need to do to “offset” the threats from Russia, China, various actors in the Middle East, terrorists (at home and abroad), and any unforeseen “unknown unknowns”? I think I have a general idea, and if I am at all, or even partially, correct, we need to have a public discussion about this now.
In the fall of 2014, former Defense Secretary Chuck Hagel announced his plan to maintain US superiority against rising powers (i.e. Russia and China). His claim was that the US cannot lose its technological edge – and thus superiority – against a modernizing Russia and a rapidly militarizing China. To ensure this edge, he called for the “Third Offset Strategy.”
We are witnessing the horror of war. We see it every day, with fresh pictures of refugees risking their lives on the sea, rather than risking death by shrapnel, bombs, assassination or enslavement. For the past four years, over 11 million Syrians have left their homes; 4 million of them have left Syria altogether. Each day thousands attempt to get to a safer place, a better life for themselves and their children. Each day, the politics of resettlement and the fear of terrorism play their part.
The last major resettlement campaign in the US came after the Vietnam War. Over a 20-year period, 2 million people from Laos, Cambodia and Vietnam were resettled in the US. The overall number of refugees resettled over this period is roughly 3 million. Since the beginning of the civil war in Syria in 2011, Turkey alone has taken 2 million Syrian refugees within its borders. In short, Turkey has absorbed in four years the same number of war refugees that the US absorbed over five times that span.
Turning to the Syrian case, which has produced more refugees than any war in the past 70 years, we find a very dismal record of anything other than near-neighbor resettlement. The Syrian conflict began in early 2011, and while the violence quickly escalated, I am taking the numbers of Syrian refugees admitted to the US starting in 2012. In 2012, the US admitted 35 Syrian refugees. In 2013, it admitted 48; in 2014, it admitted 1,307. For 2015, the US estimates admitting somewhere between 1,000 and 2,000 refugees. Even Canada, which tends to be more open with regard to resettlement and aid, has only admitted about 1,300 refugees, pledging to admit 10,000 more by 2017. In short, since the beginning of this war, one of the most powerful countries in the world, with ample space and the economic capacity to admit more people, has admitted an estimated total of 2,400 people, and its neighbor, a defender of human rights, has admitted about half that. Thinking the other way around, the US has agreed to take in roughly 0.06% of the current population of Syrian refugees, and this number does not take into consideration the 7 million internally displaced people of Syria, or the simple fact that one country (Turkey) has absorbed 45%.
With all of the recent essays on the Duck this summer about the job market, citation indexes, and lack of confidence, there seems to be a brewing undercurrent of anxiety about another academic year. Some of us may be facing down a PhD defense and the job market for the first time, some of us are compiling our pre-tenure review files, and some of us are just generally feeling uneasy about a new area of research or a class we’ve never taught. Some may be anxious about a new job at which they’ve recently arrived. I can feel the collective tension reading through the posts and their comments.
I’d like to add one more perspective to the discussion in the hope of easing this tension. Much of what has been said before comes from well within the “traditional” view of academia: a view in which one has a tenure-track job or is attempting to get one. The reality is that getting or keeping these jobs is very difficult, and I cannot rehearse the myriad factors that go into each. However, what I do think is important to note is that in these previous discussions there is a working assumption that once one is offered or has a tenure-track job, one will do anything to take or keep it. The Holy Grail must be achieved at all costs.
In late May, the People’s Republic of China (PRC) released a white paper on China’s Military Strategy. This public release is the first of its kind, and it has received relatively little attention in the broader media. Much of the strategy is no big surprise: broad and sweeping claims to reunification of Taiwan with mainland China, China’s rights to territorial integrity, self-defense of “China’s reefs and islands,” and a nod to “provocative actions” by some of its “offshore neighbors” (read: Japan). But one part of the strategy calls for a little more scrutiny: civil-military integration (CMI).
I have yet to weigh in on the recent hack of the Office of Personnel Management (OPM). This is mostly for two reasons. First is the obvious one for an academic: it is summer! The second is that, as most cyber events go, this one continues to unfold. When we learned of the OPM hack earlier this month, the initial figure was 4 million records; that is, 4 million present and former government employees’ personal records were compromised. This week, we’ve learned that it is more like 18 million. While some argue that this hack is not something to be worried about, others are less sanguine. The truth of the matter is, we really don’t know, and coming out on one side or the other is a bit premature. The hack could be state-sponsored, with the data squirreled away in a foreign intelligence agency. Or it could be state-sponsored, but with the data sold off to high bidders on the darknet. Right now, it is too early to tell.
What I would like to discuss, however, is what the OPM hack—and many recent others, like the Anthem hack—shows in relation to thinking about cybersecurity and cyber “deterrence.” Deterrence, as any IR scholar knows, is about getting one’s adversary not to undertake some action or behavior. It’s about keeping the status quo. When it comes to cyber-deterrence, though, we are left with serious questions about this simple concept. Foremost among them: deterrence from what? All hacking? Data theft? Infrastructure damage? Critical infrastructure damage? What is the status quo? The new cybersecurity strategy released by the DoD in April is of little help. It merely states that the DoD wants to deter states and non-state actors from conducting “cyberattacks against U.S. interests” (10). Yet this is pretty vague. What counts as a U.S. interest?