The Duck of Minerva


Cyber warfare and legal responsibility: drifting further apart?

September 23, 2010

Two cyber warfare trends are catching the eye, and both raise the same major question. First, cyber attacks have been democratised in recent years by social media and easy-to-use distributed denial of service (DDoS) attack tools. Popular armies have returned, made up not of a mass of bodies charging, a Clausewitzian centre of gravity on a field, but of curious and enthusiastic citizens on the internet. As William Merrin argued in a 2009 keynote, security has been crowdsourced. US officials have set up webcams along the Mexico border so that citizens can sit at leisure and watch for shadowy figures moving through the desert (and they do watch). Other national leaders have encouraged citizens to launch DDoS attacks against strategic targets. Sometimes ordinary people simply feel the urge to participate without any guidance, as with the ‘Help Israel win’ group of students who targeted Hamas during the 2008-09 Gaza conflict. If thousands or even millions of people act collectively in this way, where does legal responsibility lie for any harm caused? Is there legal responsibility for encouraging people to participate? Are people using digital media today out of patriotic gusto in ways that will later incriminate them?

Second, news media have reported a new super-cyber-weapon this week, the first ‘digital nuke’, apparently capable of destroying real-world objects. Previous malware merely shut down systems or stole data. Once this new piece of malware reaches a system (e.g. via an infected USB stick), it secretly takes control and can make that system destroy whatever it is managing – a bank, a nuclear plant, whatever you can imagine. The designer can tell it what to target, but thereafter the software does its own thing. In terms of responsibility, whoever funds, designs and delivers such a weapon would seem the obvious locus. But few nations have the expertise to detect such software; successful attacks would simply look like industrial mishaps. Expect reports of mystery explosions near you (especially if you live in Iran).

Where does this leave international law? We’ve caught up with World War II and the regulation of mass armies and nukes. Who has the technical expertise, political will and diplomatic savvy to draw up laws for a world of crowdsourced armies and weaponized software?

(Cross posted from the New Political Communication Unit blog)

Ben O'Loughlin is Professor of International Relations at Royal Holloway, University of London. He is Director of the New Political Communication Unit, which was launched in 2007. Before joining Royal Holloway in September 2006 he was a researcher on the ESRC New Security Challenges Programme.