Today, Human Rights Watch (HRW) released a new report, Losing Humanity: The Case Against Killer Robots, becoming the most influential NGO to date to join an emerging call for a preemptive norm against the deployment of autonomous weapons.
The International Committee for Robot Arms Control, an association of scientists and philosophers, proposed such a norm back in 2010 at a meeting in Berlin. The outgoing President of the International Committee of the Red Cross (ICRC), Jakob Kellenberger, issued a statement in September 2011 suggesting the organization remained seized of developments in the area of autonomous weapons. Article36.org was the first NGO to suggest an outright ban, in April of this year. Indeed, some members of the scholarly community have been calling for a ban since at least 2004.
But the adoption of the issue by Human Rights Watch signals a watershed moment in an emerging global concern with the outsourcing of targeting decisions to machines, for three reasons.
First, because of the organization’s credibility and centrality within global civil society, Human Rights Watch will bring unprecedented visibility and legitimacy to what was previously a somewhat fringe issue. As I noted in an article that remains theoretically relevant though now substantively out of date (which used the absence of a campaign against autonomous weapons as a case study), the presence or absence of HRW and the ICRC from weapons campaigns is the single most important variable explaining why some campaigns become prominent globally while others remain marginalized.
Second, although HRW was not the first to call for a killer robot ban, it is the first NGO to publish, in partnership with Harvard’s International Human Rights Clinic, a comprehensive report on the topic. This matters because as significant as whether an organization “adopts” an issue is how it adopts it. A full report is more meaningful than a press release, a meeting, or an agenda item on a website. It signals a commitment of thought and resources. It conveys heft. It suggests that Human Rights Watch is positioning itself to lead a broad-based humanitarian disarmament coalition around this issue, one likely to include many of its former partners from the landmines and cluster munitions campaigns.
Human Rights Watch’s report has also significantly honed the “killer robot” frame. Previously, concern with fully autonomous weapons was conflated with concerns over drones; and arguments against killer robots ran the gamut from “robots can’t discriminate” to “robots make wars more likely” to whether there should be a “human right to be killed only by other humans.” Losing Humanity distinguishes clearly between weapons with a human “in-the-loop,” “on-the-loop,” and “out-of-the-loop,” focusing primarily on fully autonomous weapons. And it focuses the lens on the one dimension of this issue likeliest to speak to the humanitarian law community and to militaries: protection of civilians.
“Giving machines the power to decide who lives and dies on the battlefield would take technology too far,” said Steve Goose, Arms Division director at Human Rights Watch. “Human control of robotic warfare is essential to minimizing civilian deaths and injuries.”
For all these reasons, we can now expect the killer robots issue to move from the sidelines to the center of the humanitarian disarmament agenda for the foreseeable future. I will have more to say about the arguments in the report after I’ve read it more closely. And since autonomous weapons have been one of several cases I’ve followed since 2007 for my new book on human security campaigns, I will have more to say about the issue’s origins in the days to come – stay tuned.