New Paradigms Forum
December 29, 2010
by Christopher Ford
One hears a great deal these days about the dawn of an age of cyber warfare, but until recently the most concrete examples of it seemed relatively crude. To be sure, distributed denial-of-service (DDOS) attacks on Georgian websites by anonymous – and likely Russian-organized – hackers do seem to have been executed in a fashion coordinated with the Kremlin’s military offensive against its Caucasian neighbor in 2008, freezing websites and clogging government and banking computer networks in ways useful to the attackers. (Fascinatingly, as attackers re-routed some traffic from Georgian websites through Russia-based web servers, some Georgian government sites relocated themselves to U.S.-based web servers – perhaps because these systems were more resistant to attack, or perhaps simply in a gamble that their Russian assailants would not follow them there for fear of provoking a Russo-American cyber conflict.)
Yet in comparison to assaults such as actually blowing up a building, cyber-attack has been slow to show very effective “teeth” – at least on the public record. Computer-based espionage and cyber-crime have certainly already shown tremendous sophistication, and the development of methods for gaining entry to computer networks for such purposes has naturally led to speculation about specialist attack code that could lie in wait for an activation signal from a belligerent, thereupon corrupting data, crashing systems, or even hijacking and manipulating computers in ways hidden from their owners. So far, however, those who know most about such tools aren’t saying too much, and most of the rest of the world is stuck with speculation.
Indeed, one has often heard it said that computer attack tools present great problems as weapons – at least for legalistic and democratically accountable warfighters such as the U.S. military – on account of the difficulty of predicting and controlling their effects. According to media reporting, for example, American military planners opted not to use certain cyber-attacks against Iraqi and Serbian air defense and command-and-control networks in the 1990s and in 2003 for fear of what would happen if such malicious code propagated through the Internet. (As one U.S. general put it, we did not use cyber-tools to crash the French-made Iraqi air defense network because “[w]e were afraid we were going to take down all the automated banking machines in Paris.”)
To the extent that such fears of “runaway” impact make computer-launched attacks self-deterring, it might turn out to be the case that some types of propagating-code-based cyber-assault are more useful to “rogue” state belligerents and non-state actors (especially semi-autarkic ones such as North Korea that have a below-average degree of Internet dependency themselves) than to law-abiding states who actually care about compliance with the law of armed conflict (LOAC). A key question for the development of cyber-war as a “legitimate” form of warfighting, therefore, is the degree to which such tools can be honed in order to fit more cleanly within traditional LOAC conceptions of discrimination – that is, the degree to which one can confine their impact to enemy combatants and minimize harm to noncombatants. The less a particular tool can be held to such standards of discriminate impact, the more controversial it is likely to be, and the greater the likelihood that a legally-responsible combatant will decline actually to use it.
This is, of course, an issue that “kinetic” warfighters have struggled with for many years. The city-bombing campaigns of the Second World War are today quite controversial, though the technology of the time (not even the famous Norden bombsight) did not, in truth, allow many options. Even if one confined oneself to attacking “military” targets – which even the Allies, alas, did not always do (e.g., in their fire-bombings of Dresden and numerous Japanese cities) – high-level bombing did not allow much precision, and with thousands of aircraft releasing scores of thousands of unguided bombs from altitude over densely-populated enemy territory, civilian casualties were bound to be considerable. Doing the best they could under the circumstances might have kept Allied generals and air marshals on the right side of the law, such as it was at the time, but it is less clear that a modern industrialized democracy with today’s technology would be given such bloody leeway in the 21st Century.
Today, it is the practice of such countries to work very hard to meet very exacting standards of discriminate impact. Much is made, politically, of the “collateral damage” to civilians that is sometimes caused by NATO airstrikes in Afghanistan, or U.S. drone attacks in Pakistan or Yemen, or Israeli “targeted killings” of terrorist chieftains in Gaza or elsewhere. In historical context, however, the precision and lack of civilian impact of the kinetic tools now preferred by the modern West are extraordinary – and they are improving all the time. The Pentagon, for instance, is presently spending considerable sums of money on building ever smaller air-delivered bombs, precisely because it is now possible to land them on even a moving target with such accuracy and reliability that only a small explosion is needed in order to achieve the desired military effect. (This lets delivery platforms carry more bombs, allows weapons to be carried on smaller platforms such as drones, and minimizes civilian damage.)
Indeed, for some kinetic applications one apparently no longer needs high explosives at all. In attacking Iraqi air defense sites during the long years of enforcing the pre-war “No Fly Zone” there in the 1990s, for instance, U.S. aviators sometimes dropped guided bombs filled with cement rather than high explosives: such weapons were perfectly capable of smashing a radar tower or control building to bits with no explosive “bang” at all – and they weren’t going to hurt anyone not actually right in their path. Concepts even exist today for putting a solid metal warhead (or a bundle of kinetic energy penetrators such as tungsten rods) aboard a ballistic missile – a type of one-shot weapon that would rely upon extraordinary accuracy to destroy a high-value target by physically hitting it from thousands of miles away.
One can argue about whether the LOAC requires such demanding precision and controllability, but there is no question that Western warfighters are getting better and better at providing it. “Smart bombs” may have made their public debut during the Gulf War of 1991, but they were still a small proportion of the total ordnance delivered by U.S. aircraft during that conflict. Nowadays, the air campaigns in Iraq and Afghanistan are principally conducted using precision-guided weaponry. You’d never know it from reading politically-charged (and sometimes -inspired) press coverage, but in historical terms the civilians on our battlefields have never had it so good. (Just ask a Chechen or a Tamil Sri Lankan how much fun it is to live in a war zone when the dominant military either lacks such munitions or simply doesn’t bother to use them.) As a general rule, our principal mode of war is now ever more akin to sniping than to lobbing a hand grenade – and the significance of these developments, from a humanitarian and a LOAC perspective, is seldom sufficiently appreciated.
Such discrimination issues also lie close to the heart of many complaints made about weapons of mass destruction (WMD), the effects of which – as the name indicates – can be notoriously hard to limit to combatants. “Countervalue” targeting of civilian populations with nuclear weapons – though actually a necessity if one buys traditional “mutual assured destruction” and “minimal deterrence” arguments, which may come back into fashion as disarmament activists push for ever-smaller arsenals – is famously controversial, and often said to be almost illegally barbaric. (Some commentators would omit my “almost,” too.) Biological weaponry based upon spreading highly infectious agents presents perhaps the asymptotic example of uncontrollability, even to the point of making such tools useless in traditional military terms.
To be sure, one could hypothesize more “controllable” and potentially “discriminate” applications of WMD technology – perhaps bioengineered diseases, or exotic chemicals or toxins designed to have only temporary or non-lethal effects. Nuclear weapons, moreover, do not have inherently “uncontrollable” effects anyway, insofar as each weapon only creates a single explosion. The “uncontrollability” about which one worries in the context of nuclear use is not that a detonation would itself create some kind of physical chain reaction, but that other possessors would react to it by choosing to use their own nuclear devices. Nuclear war might be “uncontrollable,” therefore, but one cannot really say that a nuclear weapon is. (This was a question raised at the dawn of the nuclear age, to be sure, but it has long since been answered.) But whether or not any particular tool is intrinsically indiscriminate, some of them clearly remain much harder than others to use in ways that are consistent with modern understandings of the law of war. To the extent that this is true, it may tend to make some possible uses of highly destructive weapons self-deterring, at least for a conscientious, LOAC-mindful possessor. (The unscrupulous are likely to care less about being indiscriminate, of course.)
So where does cyber-attack fit in to all this? Until recently, one could have hypothesized that cyber-attack would move along a continuum from “uncontrollability” of collateral impact to something more analogous to the precision of modern “kinetic” weaponry, but it would have been hard to point to actual examples of such a shift. DDOS assaults are generally quite discriminate, in the sense that they attack only specific websites. Such assaults, however, don’t usually involve propagating destructive computer code into the target’s systems. (DDOS attacks often use huge “bot-nets” of third-party computers, which may themselves be hijacked for this purpose through the insertion of malicious code, but this is arguably a somewhat different phenomenon. Such “slave” computers are in a sense made into weapons; they are not the target, and are presumably not otherwise affected.) And while DDOS assaults clearly can, as their name implies, temporarily impede Internet service, they are less known for having any particular effect upon the underlying hardware and software systems, or upon actual physical capital in the “real” (non-cyber) world.
The more interesting LOAC challenge has arisen in connection with the use of invasive malicious code in order directly to attack an adversary’s systems or the infrastructure they control. So long as one runs a risk of taking down some third-party’s banking system – or who knows what? – when employing such a tool as a weapon of war, perhaps the best cyber-analogy for malicious code is indeed the sort of infectious biological agent that the jargon of computer “viruses” might suggest.
This is why I think the recent example of the “Stuxnet” computer “worm” is so interesting. It may represent a considerable leap in cyber-war’s evolution along the aforementioned continuum of discrimination. Let’s take a look.
According to press accounts, Stuxnet – which was first publicly identified in June 2010 – is in some regards just another widely-propagating computer virus. It appears to attach itself to a particular proprietary software package made by the German company Siemens as a “supervisory control and data acquisition” (SCADA) management system for industrial plants. The Stuxnet code propagates itself as widely as possible in searching for this particular software (called “WinCC”), and when it gets access to a computer system running the program, it tries to install itself and open a clandestine “back door” to the Internet. As of early October, Stuxnet was said to have infected more than 45,000 computers around the world.
So far, so unexciting. There are, unfortunately, many examples of malicious code that propagates in such a manner – though it is presumably unusual for hackers to target a software package as obscure as WinCC. Stuxnet doesn’t usually seem actually to do much of anything once it installs itself, but this is also not too remarkable. In this era of cyber-crime and cyber-espionage, it is hardly unheard of for malicious software to lie dormant on the hard drives of unsuspecting computer users, awaiting its creator’s command to exfiltrate data or conduct some other activity.
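The “lie dormant unless the target software is present” pattern described above can be sketched in the abstract. This is a hypothetical, deliberately benign illustration of the logic – not actual worm code – and the package name `wincc_scada` is an invented stand-in, not a real Siemens identifier:

```python
# Hypothetical sketch of dormant, target-seeking code: do nothing at all
# unless a specific software package is present on the host.
import importlib.util

TARGET_PACKAGE = "wincc_scada"  # invented stand-in for the sought-after software

def target_environment_present() -> bool:
    """Return True only if the specific software the code seeks is installed."""
    return importlib.util.find_spec(TARGET_PACKAGE) is not None

def run() -> str:
    # On the overwhelming majority of hosts, the code stays inert.
    if not target_environment_present():
        return "dormant"
    return "activated"

print(run())  # on almost any machine: "dormant"
```

The point of the sketch is the asymmetry: the code may land on tens of thousands of machines, but on all except those running the sought-after software it simply does nothing.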
Connoisseurs of cyber-warfare seem to be both awed and horrified by the sophistication and elaborateness of the coding job that produced the Stuxnet worm. As Jonathan Last detailed not long ago in The Weekly Standard, Stuxnet is apparently a really remarkable achievement.
The really intriguing thing about Stuxnet, however, is that it seems to have been designed to target a particular industrial facility. The Siemens WinCC software is just its access route: the Stuxnet code is apparently designed to propagate itself around the world searching for one specific industrial facility that happens to use WinCC for its SCADA management system. Stuxnet may show up on thousands of computers in hundreds of industrial plants, in other words, but this is incidental: it was apparently looking for a single, particular target.
According to reports, Stuxnet is programmed to recognize a highly specific configuration of WinCC-managed valves, pipelines, and industrial equipment – the “fingerprint,” if you will, of one particular facility somewhere in the world, to the blueprints of which the code’s creators seem to have had some prior access. As to what this target was, media speculation centered quickly on Iran’s uranium enrichment facility at Natanz, or perhaps the new Russian-built nuclear power reactor at Bushehr.
This now appears to have been the case. Stuxnet, it would appear, carries two “warheads.” First, the computer worm seems to target an industrial control sub-system used at Iran’s new Bushehr nuclear power plant, apparently degrading the steam turbine there by running it wrong while telling the control room that all is well. (Iranian officials claim no harm was actually done, however.) More significantly, it seems to have caused considerable – but stealthy – problems at Iran’s uranium enrichment facility at Natanz. Stuxnet reportedly searches the systems upon which it finds itself, looking for specific frequency converter drives made by two firms, one in Finland and one in Iran, that run at high speeds corresponding to those at which uranium enrichment centrifuges operate. According to Jonathan Last, the worm systematically altered the frequencies at which the Natanz centrifuges turned, stressing the machinery and causing increased breakdown rates and faulty output in ways that were for a long time as mysteriously untraceable as they were frustrating. (Nice trick!) This “slow burn” of degraded operations seems to have gone on for some time, and might have continued for much longer had Stuxnet not been identified in the West, sending Iranian programmers on a search for the virus in their own SCADA networks. According to one computer expert recently quoted in the media, the affair may have set Iran’s program back by as much as two years. (Debugging the system may also take some time. The Economist quotes one researcher suggesting that Iran would do better simply to throw out all the infected computers and install new ones.)
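The fingerprint-gated behavior the press accounts describe can be illustrated with a toy sketch. Everything here is invented for illustration – the vendor names, the frequency band, and the perturbation factor are assumptions standing in for the reported details, not Stuxnet’s actual parameters:

```python
# Hypothetical illustration of fingerprint-gated activation: the payload
# fires only when observed equipment matches one specific configuration.
# Vendor names and numbers are invented stand-ins for the reported details.
TARGET_FINGERPRINT = {
    "vendors": ("FinnishDriveCo", "IranianDriveCo"),  # two hypothetical firms
    "min_hz": 800,    # assumed high-speed band typical of enrichment centrifuges
    "max_hz": 1200,
}

def matches_target(drives: list[dict]) -> bool:
    """True only if every observed frequency converter fits the target profile."""
    return bool(drives) and all(
        d["vendor"] in TARGET_FINGERPRINT["vendors"]
        and TARGET_FINGERPRINT["min_hz"] <= d["freq_hz"] <= TARGET_FINGERPRINT["max_hz"]
        for d in drives
    )

def payload(drives: list[dict]) -> list[dict]:
    """Perturb drive frequencies only on a fingerprint match; otherwise a no-op."""
    if not matches_target(drives):
        return drives  # discriminate: any non-target system is left untouched
    # On the one matching plant, stress the machinery by altering frequencies.
    return [{**d, "freq_hz": d["freq_hz"] * 1.3} for d in drives]

office_plant = [{"vendor": "OtherCo", "freq_hz": 60}]
print(payload(office_plant))  # unchanged: not the target
```

In LOAC terms, the gate is the interesting part: the destructive logic is inert everywhere except on the one system whose “fingerprint” matches, which is precisely the discrimination property discussed above.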
If all this is true – and here I must admit to relying only on media reporting of this notoriously murky subject – then Stuxnet may indeed represent the first public example of malicious code evolving along the path that state-of-the-art “kinetic” weaponry has taken since the end of the Second World War. Stuxnet may not be the first example of inserting software code that is genuinely “discriminate” in its impact. (There is, for instance, the reported case of U.S. success in providing doctored equipment to a Soviet front company in order to cause the catastrophic explosive failure of a Siberian natural gas pipeline in 1982.) It would seem to be the first public example, however, of the use of widely-propagating code as a “delivery vehicle” that is nonetheless genuinely discriminating in its destructive impact.
It is not clear whether there is a good “kinetic” analogue for what one might call the “Stuxnet model.” (One might imagine, I suppose, a cruise missile with infinite range that flies endlessly and randomly around the world until it happens to see the specific target it has been programmed to destroy?) We may, however, now be seeing real examples of the emergence of invasive code as a weapon of war that is LOAC-compliant and not nearly so “self-deterring” as one might have feared such worms and viruses to be on the basis of past conflicts. Cyber-warfare may be turning a portentous corner.
Christopher A. Ford was formerly Senior Fellow and Director of the Center for Technology and Global Security at Hudson Institute.