Autonomous or "Semi" Autonomous Weapons? A Distinction without Difference

12 January 2015, 1645 EST

Over the New Year, I was fortunate enough to be invited to speak at an event on the future of Artificial Intelligence (AI) hosted by the Future of Life Institute. The purpose of the event was to think through the various aspects of the future of AI, from its economic impacts, to its technological abilities, to its legal implications. I was asked to present on autonomous weapons systems and what those systems portend for the future. The thinking was that an autonomous weapon is, after all, one run on some AI software platform, and if autonomous weapons systems continue on their current trajectory, we will see more complex software architectures and stronger AIs. Thus advances in AI will directly affect the capabilities of autonomous weapons, and vice versa. While I was there to inform this impressive gathering about autonomous warfare, these bright minds left me with more questions about the future of AI and weapons.

First, autonomous weapons are those that are capable of targeting and firing without intervention by a human operator. Presently there are no autonomous weapons systems fielded. However, there are a fair number of semi-autonomous weapons systems currently deployed, and this workshop on AI got me thinking more about the line between “full” and “semi.” The reality, at least as I see it, is that we have been using the terms “fully autonomous” and “semi-autonomous” to describe whether all of the operational functions on a weapons system are operating “autonomously” or only some of them are. Allow me to explain.

We have roughly four functions on a weapons system: trigger, targeting, navigation, and mobility. We might think of these functions like a menu that we can order from. Semi-autonomous weapons have at least one, if not three, of these functions operating autonomously. For instance, we might say that the Samsung SGR-1 has an “autonomous” targeting function (through heat and motion detectors), but is incapable of navigation, mobility, or triggering, as it is a sentry-bot mounted on a defensive perimeter. Likewise, we would say that precision-guided munitions are also semi-autonomous, for they have autonomous mobility, triggering, and in some cases navigation, while the targeting is done through a preselected set of coordinates or by “painting” a target with laser guidance.
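
To make the “menu” framing concrete, here is a minimal Python sketch of a weapons system classified by which of its four functions run autonomously. The class, field names, and labels are illustrative inventions of mine, not any real control architecture or official taxonomy.

```python
from dataclasses import dataclass

# The four operational functions discussed above. Which of them run
# "autonomously" determines where a system falls between "semi" and
# "fully" autonomous. All names here are illustrative only.
@dataclass
class WeaponSystem:
    name: str
    autonomous_trigger: bool
    autonomous_targeting: bool
    autonomous_navigation: bool
    autonomous_mobility: bool

    def autonomy_label(self) -> str:
        functions = [
            self.autonomous_trigger,
            self.autonomous_targeting,
            self.autonomous_navigation,
            self.autonomous_mobility,
        ]
        if all(functions):
            return "fully autonomous"
        if any(functions):
            return "semi-autonomous"
        return "manually operated"

# The SGR-1 as described above: only the targeting function is autonomous.
sgr1 = WeaponSystem("Samsung SGR-1", autonomous_trigger=False, autonomous_targeting=True,
                    autonomous_navigation=False, autonomous_mobility=False)
print(sgr1.autonomy_label())  # -> "semi-autonomous"
```

On this framing, “fully autonomous” is simply the case where every item on the menu is checked, which is part of why the line between “semi” and “fully” can feel so thin.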

Where we seem to get into deeper waters, though, is in the case of “fire and forget” weapons, like the Israeli Harpy, the Raytheon Maverick anti-tank missile, or the Israeli Elbit Opher. While these systems are capable of autonomous navigation, mobility, triggering, and to some extent targeting, they are still considered “semi-autonomous” because the target (i.e. a hostile radar emitter or the infra-red image of a particular tank) was at some point pre-selected by a human. The software that guides these systems is relatively “stupid” from an AI perspective, as it merely takes sensor input and does a representation and search over the targets it identifies. Indeed, even Lockheed Martin’s LRASM (Long Range Anti-Ship Missile) appears to be in this ballpark, though it is more sophisticated because it can select its own target amongst a group of potentially valid targets (ships). The question has been raised as to whether this particular weapon slides from semi-autonomous to fully autonomous, for it is unclear how (or by whom) the decision is made.
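
For readers wondering what “representation and search” amounts to here, the following sketch shows the general shape of the logic: compare each sensor contact against a signature a human pre-selected before launch, and engage only the best match above a threshold. The signature format, similarity metric, threshold, and example values are all invented for illustration and do not describe any real guidance software.

```python
def signature_similarity(observed: dict, expected: dict) -> float:
    """Fraction of pre-selected signature features that the sensor contact matches."""
    matched = sum(1 for key, value in expected.items() if observed.get(key) == value)
    return matched / len(expected)

def select_target(sensor_contacts: list, preselected_signature: dict, threshold: float = 0.9):
    """Return the best-matching contact above the threshold, or None (no engagement)."""
    best, best_score = None, 0.0
    for contact in sensor_contacts:
        score = signature_similarity(contact["signature"], preselected_signature)
        if score >= threshold and score > best_score:
            best, best_score = contact, score
    return best

# A human pre-selects the target class before launch (e.g. a hostile radar emitter);
# in flight the weapon only searches for something that matches that representation.
hypothetical_signature = {"emitter_band": "X", "pulse_pattern": "hostile"}
contacts = [{"id": 1, "signature": {"emitter_band": "X", "pulse_pattern": "hostile"}}]
print(select_target(contacts, hypothetical_signature))
```

Everything that looks like autonomy here is matching against a representation supplied in advance, which is why such systems are still labeled “semi-autonomous.”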

The rub in the debate over autonomous weapons systems, and, from what I gather, some of the fear in the AI community, is the targeting software: how sophisticated that software needs to be to target accurately, and, what is more, to target objects that are not immediately apparent as military in nature. Hostile radar emitters raise few moral qualms, and when the image recognition software used to select a target relies on infra-red images of tank tracks or ships’ hulls, the presumption is that these are “OK” targets from the beginning. I have two worries here. The first is that, from the “stupid” autonomous weapons side of things, military objects are not always permissible targets. Only by an object’s nature, purpose, location, use, and effective contribution can one begin to consider it a permissible target. If the target passes this hurdle, one must still determine whether attacking it provides a direct military advantage. Nothing in the current systems seems to take this requirement into account, and as I have argued elsewhere, future autonomous weapons systems would need to do so.
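
To make the two-step structure of that requirement explicit, here is a hedged sketch. The dictionary keys are hypothetical, and reducing legal judgment to booleans is of course a gross simplification; the point is only that matching an infra-red image of a tank or a hull tests far less than what the argument above demands.

```python
def is_permissible_target(obj: dict) -> bool:
    """Two-step test sketched from the argument above; all keys are hypothetical."""
    # Step 1: is the object a military objective at all, judged by its
    # nature, purpose, location, use, and effective contribution?
    qualifies = all(
        obj.get(criterion, False)
        for criterion in ("nature", "purpose", "location", "use", "effective_contribution")
    )
    if not qualifies:
        return False
    # Step 2: even a military object may only be attacked if doing so
    # provides a military advantage.
    return obj.get("military_advantage", False)

# A tank that matches an infra-red signature perfectly might still fail the test.
print(is_permissible_target({"nature": True, "use": False, "military_advantage": True}))  # False
```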

Second, from the perspective of the near-term “not-so-stupid” weapons, at what point would targeting human combatants come into the picture? We have AI presently capable of facial recognition with near-perfect accuracy (just upload an image to Facebook to find out). But more than this, the current leading AI companies are showing that artificial intelligence is capable of learning at an impressively rapid rate. If this is so, then it is not far off to think that militaries will want some variant of this capacity on their weapons.

What then might the next generation of “semi” autonomous weapons look like, and how might those weapons change the focus of the debate? If I were a betting person, I’d say they will be capable of learning while deployed, will use a combination of facial recognition and image recognition software as well as infra-red and various radar sensors, and will have autonomous navigation and mobility. They will not be confined to the air domain, but will populate maritime environments and potentially ground environments as well. The question then becomes one not solely of the targeting software, as it would be dynamic and intelligent, but of the triggering algorithm. When could the autonomous weapon fire? If targeting and firing were time-dependent, without the ability to “check in” with a human, or, let’s say, there were just so many of these systems deployed that “checking in” was operationally infeasible due to bandwidth, security, and sheer manpower overload, how accurate would the systems have to be to be permitted to fire? 80%? 50%? 99%? How would one verify that the actions taken by the system were in fact in accordance with its “programming,” assuming of course that the learning system doesn’t learn that its programming is hamstringing it from carrying out its mission objectives more effectively?
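
As a way of fixing ideas, here is a minimal sketch of that triggering question, assuming an entirely hypothetical confidence threshold and operator-reachability check; nothing here reflects actual fire-control doctrine. The open question above is precisely what value that threshold should take, and who decides.

```python
# Hypothetical threshold standing in for the "80%? 50%? 99%?" question above.
FIRE_CONFIDENCE_THRESHOLD = 0.99

def trigger_decision(target_confidence: float, operator_reachable: bool) -> str:
    """Decide whether to fire, hold, or refer to a human, given a match confidence."""
    if operator_reachable:
        # If bandwidth, security, and manpower allow, check in with a human.
        return "refer to human operator"
    if target_confidence >= FIRE_CONFIDENCE_THRESHOLD:
        # No human in the loop and confidence clears the bar: autonomous engagement.
        return "fire"
    return "hold fire"

print(trigger_decision(0.97, operator_reachable=False))  # -> "hold fire"
print(trigger_decision(0.97, operator_reachable=True))   # -> "refer to human operator"
```

Even this toy version shows how much moral weight a single constant would carry, and it says nothing about the verification problem raised at the end of the paragraph.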

These pressing questions notwithstanding, would we still consider a system such as this “semi-autonomous”? In other words, the systems we have now are permitted to engage targets – that is, target and trigger – autonomously based on some preselected criteria. Would these systems that utilize a “training data set” to learn from likewise be considered “semi-autonomous” because a human preselected the training data? Common sense would say “no,” but so far militaries may say “yes.” The US Department of Defense, for example, states that a “semi-autonomous” weapon system is one that “once activated, is intended only to engage individual targets or specific target groups that have been selected by a human operator” (DoD, 2012). Yet at what point would we say that “targets” are not selected by a human operator? Who is the operator? The software programmer with the training data set can be an “operator,” the lowly Airman can likewise be an “operator” if she is the one ordered to push a button, and so too can the Commander who orders her to push it (though the current DoD Directive makes a distinction between “commander” and “operator,” which problematizes the notion of command responsibility even further). The only policy we have on autonomy does not define, much to my dismay, “operator.” This leaves us in the uncomfortable position that the distinction between autonomous and semi-autonomous weapons is one without difference – and, taken to the extreme, that militaries need only claim their weapons systems are “semi-autonomous,” much to the chagrin of common sense.