Don’t Confuse Me With the Facts: Costs of Lethal Autonomous Weapons

11 June 2014, 1122 EDT

This is a guest post by Heather Roff-Perkins, Visiting Associate Professor at the Josef Korbel School, University of Denver.

On June 12, Christof Heyns, the UN special rapporteur on extrajudicial, summary or arbitrary executions, will brief the United Nations Human Rights Council on the human rights implications of lethal autonomous weapons.  Last month, member states were likewise briefed by panels of experts at an informal meeting under the auspices of the Convention on Conventional Weapons (CCW), which Charli Carpenter has blogged about here.

Much of the discussion of lethal autonomous weapons, or “killer robots,” revolves around their implications for international humanitarian law, particularly whether they will be capable of discriminating between combatants and civilians, or whether they will be used to violate human rights.  Little attention, however, is paid to the realities of what such systems cost and whether they will be operationally useful or advantageous.

Perhaps this is because cost is an elusive thing to track for any weapons system, mainly because a single company, research lab or contractor rarely carries out the research and design of such a system alone.  A system comprises many different components, from software to sensors to munitions.  For example, as early as 1983 the US Department of Defense attempted to revolutionize machine intelligence through its Strategic Computing Program (SCP), which sought to harness artificial intelligence so that the DoD could develop autonomous land vehicles, pilot’s associates and a naval battle manager (Gray, 1997).  That program, which ended in 1993 without meeting its goals, spent $1 billion.  Yet jump ahead to 2008, when Carnegie Mellon University and the Defense Advanced Research Projects Agency (DARPA) unveiled the “Crusher,” an autonomous land vehicle for the Army.  Research in the twenty-five-year gap between the SCP and the Crusher did not happen in a vacuum, but alongside other researchers and other innovations.

Academics, moreover, are less and less isolated from industry.  Just look at Georgia Tech’s Institute for Robotics and Artificial Intelligence: it partners not only with the DoD, but also with BMW, the Boeing Company, Intel, General Motors and SRI International, and many of these partners may not publish what they are spending on their projects.  It is thus almost impossible to trace the vast amounts of money being spent on any one project at any one time, because the technology rarely develops in isolation.

Yet even if we could gather all of the relevant cost estimates, there is still another problem: upkeep and innovation.  Take, for instance, the Aegis weapons system developed by Lockheed Martin.  The project, initiated in 1968, is still receiving money from the US Navy.  Forty-odd years later, the system is still under development so that it can keep pace with new weapons.  Thus any “killer robot” fielded by any military will not only have a long history, with billions of dollars spent on its development; it will also continue to require maintenance and undergo continual development so that it remains cutting edge.

How, though, are we to understand the vast amounts of money invested in, or thrown at, this technology amidst defense budget cuts?  When the DoD estimates that it will have to cut the Army to 420,000 soldiers and the Marine Corps to 175,000 Marines, retire 80 Air Force aircraft, and decommission an aircraft carrier and its air wing, how are we to understand the need or desire to create more robotic weapons systems?

The simple answer is: faith that technology will yield battlefield superiority and make one’s warfighters safer.  In a rather candid moment, Russia’s Deputy Prime Minister recently stated, “We have to conduct battles without any contact, so that our boys do not die, and for that it is necessary to use war robots.”  This blind faith that technology will let one fight (and win) wars without any costs to oneself is really the science fiction at play here.  For now another type of cost enters the calculus.

Up until this point, many governments and academics have espoused the benefits of autonomous weapons: that they will lower the costs of war in both monetary and human terms (Arkin, 2009; Anderson and Waxman, 2012).  They claim that robots do not need to be fed, clothed, sheltered or paid a pension, and that robots will be so much better at killing the right people, saving the innocent civilian population from hot-headed and bloodthirsty soldiers.  Robots, one expert at the CCW meetings claimed, do not rape, so that is another good reason to use them in conflict instead of their (presumably male) human counterparts.

However, espousing how many lives will be saved is really a ruse.  It isn’t about the civilian population; it is about the human boots that states don’t want to put on the ground.  States would rather spend billions of dollars and decades of research creating a machine to do the dirty work of killing.  And the belief that such technology is problem-free is pernicious.  As early as 1985, two academics warned: “once vast sums of money have been spent, the temptation will be great to justify the expenditure by installing questionable AI-based technology in a variety of critical contexts—from data reduction to battle management” (Dreyfus and Dreyfus, 1985).  Examples of fielded weapons that were not operationally useful or beneficial abound.

Which brings us to the second point: the operational benefits of lethal autonomous systems.  Such systems, to be even minimally competent in contemporary conflict, would require interoperability, the ability to “talk to” all other systems.  “Interoperability allows forces, units or systems to operate together. It requires them to share common doctrine and procedures, each other’s infrastructure and bases, and to be able to communicate with each other” (NATO, 2012).  In other words, any killer robot in a zone of operations needs to be in constant communication with all other systems, so as to avoid “friendly fire” mishaps, to target the “right” people, and to receive orders from a commander.  This means that the system will be networked and sending signals.

Two conclusions follow.  First, I am hard-pressed to see many militaries training their robots alongside one another so that they can be effectively interoperable.  A state military may struggle to run enough realistic simulations between its Navy, Air Force and Army in a joint robot operation to see how these machines will truly act, let alone when different militaries are involved.  Second, I harbor serious doubts about the ability of the machines to adequately receive human commands and carry them out.  With states developing GPS-jamming capabilities, and the ever-present threat of cyber attacks, the usefulness of these machines when they are functionally blind seems dubious at best.

The take-home message seems rather clear at this point.  The main motivation for creating these weapons is to gain tactical advantage without risking one’s own troops in the process.  However, the extent to which this is possible is highly doubtful.  The monetary costs of these systems will be (and already are) astronomical, and they will not lessen over time, given the need to constantly develop superior weapons.  Just as science and innovation do not take place in isolation, neither do weapons procurement and arms races.  That these systems will be operationally suspect from the very beginning is also a serious hindrance.  If the machines cannot function in complex and degraded environments (i.e., war), then they are useless.  And if we need humans to go in and fix them, or to recover them when they are downed, then it is equally hollow to claim that they save “our boys’” lives.