New Paradigms Forum
June 3, 2010
by Christopher Ford
The newly-confirmed head of the newly-established U.S. Cyber Command, General Keith Alexander – whom I remember from my Senate Intelligence Committee days as the innovative and forward-leaning leader of the U.S. Army's Intelligence and Security Command after September 11, 2001 – spoke this morning at the Center for Strategic and International Studies (CSIS) about his new role. Reading a bit between the lines of his remarks, it seems clear that popular conceptions of the challenges that confront the U.S. Government in the arena of cyber conflict have so far missed some important points.
In accounts of cyber issues in the mainstream media, several themes receive emphasis. To take as an example the recent series on cyber warfare published by the New York Times, it seems pretty clearly to be understood that: (1) we face tremendous vulnerabilities to cyber attack, and our defenses are increasingly being tested by incessant probes and ever more sophisticated assaults; (2) traditional notions of deterrence face daunting challenges in cyberspace on account of the ease of concealing the true source of an attack; (3) cyber conflict presents unprecedented challenges in the tension it engenders between effective network surveillance and the protection of civil liberties; and (4) a new, global arms race between cyber offense and defense seems quietly to be getting underway.
These points all seem to me to be quite well taken. But the image of cyberwarfare still seems troublingly incomplete. From reading popular accounts, one might think that the most daunting issues in cyberspace are basically technical and programmatic ones: this is a campaign, it would appear, in which the key to success is the mobilization of enough computer-whiz "cyber commandos" and the devotion of sufficient resources to network protection and the development of software tools. To be sure, one cannot imagine success – or indeed anything except defeat – without such efforts. But in his comments at CSIS, General Alexander showed an appreciation for the policy challenges of cyberwar, and for the fact that these may ultimately prove as challenging as the more "technical" aspects of defensive and offensive network operations.
One of the issues about which Alexander spoke was the difficulty of acquiring an adequate "common operational picture" (COP) of the relevant portions of cyberspace in real time – that is, a view of the "battlespace" updated at what he described as "netspeed." So far, he said, our cyber-leaders do not have such situational awareness. I certainly believe him. Yet without slighting the phenomenal difficulty, as a technical matter, of actually achieving such situational awareness, it seems clear from Alexander's remarks that a different challenge also looms: even if we had ubiquitous, high-fidelity situational awareness, what do we do with it?
Some years ago, a U.S. Air Force colonel named John Boyd articulated a theory of military command and control that drew heavily upon insights from cybernetics, describing a decision-making cycle with four elements: Observation, Orientation, Decision, and Action. This phenomenon of the "OODA loop" – an acronym that became as common as it is ugly – was inherent in essentially all decision-making, but Boyd and others focused upon it in the context of military operations. In essence, it works as follows: in order to function, I must observe my operational environment, orient myself within it (e.g., both physically and with regard to my objectives), decide what the best next step should be in order to advance my goals, and then act to implement this decision. This produces a recursive cycling, for every action must be followed by another pass through the "loop," as I assess the impact of past steps and what is now necessary in light of my evolving environment. For Boyd and his followers, it was an important objective of military operations to be able to cycle through the OODA loop faster than could one's adversary. Getting "inside" his loop – that is, being able coherently to respond to the environment (and the enemy's own actions) faster than he can respond to it (and to yours) – can be vital to victory, and is in a sense the main objective of modern maneuver warfare. (Boyd, who died in 1997 at the age of 70, had been an instructor pilot for air-to-air dogfight tactics!)
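Boyd's recursive cycle can be sketched in a few lines of code. The following is an illustrative toy only – the `World` class, the goal-seeking logic, and the notion of "cycles to goal" as a proxy for loop speed are all my own invented stand-ins, not anything Boyd or the article specifies:

```python
class World:
    """Toy stand-in for the operational environment."""
    def __init__(self):
        self.state = 0

    def sense(self):
        return self.state

    def apply(self, action):
        self.state += action

def ooda_step(world, goal):
    observation = world.sense()                         # Observe
    gap = goal - observation                            # Orient: where am I vs. my objective?
    action = 1 if gap > 0 else (-1 if gap < 0 else 0)   # Decide
    world.apply(action)                                 # Act
    return gap == 0  # goal already met this pass?

def run(world, goal, max_cycles=100):
    """Cycle through the loop until the goal is reached.

    Fewer cycles needed = a 'faster' OODA loop; an actor who
    completes this cycle faster than an adversary is, in Boyd's
    terms, operating 'inside' the adversary's loop.
    """
    for cycle in range(1, max_cycles + 1):
        if ooda_step(world, goal):
            return cycle
    return max_cycles

w = World()
print(run(w, goal=5))  # → 6 (five steps to reach the goal, one pass to confirm it)
```

The point of the sketch is structural: every action feeds back into the next observation, so decision-making is a loop, not a line, and relative tempo is what matters.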
Anyway, let's assume that with a suitable investment of time and money, and an infusion of top-notch human capital, we acquire the cyberspace situational awareness of which General Alexander spoke. Let us also assume that Cyber Command – which is by no coincidence co-located at Fort Meade with the electronic surveillance wizards of the National Security Agency, which Alexander also heads – has managed to acquire a powerful cyber "toolkit": not just monitoring technologies and network security tools but also means to identify attackers and (if necessary) move decisively against them. At best, however, such achievements would seem only to get us part-way through our own OODA challenges. With a detailed COP and a clear advance understanding of U.S. cyber objectives – and with good defensive and offensive tools at our disposal – we may hope to be well equipped for observing, orienting, and acting. But what about deciding?
In fact, actually making cyber decisions is not, in essence, a technical problem, and is thus not susceptible to entirely technical solutions. It is a policy problem, and needs to be addressed through a policy prism. It is, however, turning out to be one of very great difficulty.
The challenge has at least two aspects, which one might call scope and speed. (There are doubtless more, but let's stick to two for present purposes.) The problem of scope has to do with what decisions are whose responsibility. In traditional military conflict, these issues are relatively well understood. Individual soldiers and platoon commanders, for instance, might decide how best to defend the particular small parcel of territory they occupy, or how to assault some immediate objective. Battlefield commanders might direct various artillery, armor, infantry, rotary-wing aviation, and other assets present on "their" battlefield. Theater commanders might hammer out coordinated air tasking orders, identify theater-level objectives, and provide broad directives for lower-echelon leaders to execute, while national leaders (e.g., the president) would make strategic choices about such things as whether to attack a particular country at all, whether to bomb leadership targets in a capital city, and so forth. So far so good.
But how does one translate long-understood concepts of operational authority into cyberspace? The evolving world of Pentagon cyber-planning is a secretive one, for good reason, but one hears it increasingly said that one of the biggest struggles today is in this area.
What, for instance, is the dividing line in cyber warfare between "tactical" decisions – that is, things that should be left to battlefield commanders – and "strategic" operations? It is easy to leave to (comparatively) low levels of command questions such as whether to jam an enemy air defense radar unit that threatens one's aerial assets. But what if the best tool for attacking such a system happens to be the injection of incapacitating or manipulative software code into the computers that control the adversary's air defense? (It is widely understood that such attacks are possible, and not merely by "firing" them through the Internet from a computer thousands of miles away. Malicious code can apparently be beamed into a system more locally, for instance, by the radar of an attacking aircraft, state-of-the-art models of which are said to be capable of accessing a system through its own wireless networking in order to inject algorithms that allow an attacker to shut down or even hijack the system's computer brain.)
If the target is an air defense system, such a "cyber"-related electronic attack sounds pretty tactical – and therefore something best left to commanders on the scene – but cyberspace isn't known for its rigorous respect for geographic boundaries. Unlike a mere bomb attack or regular radio-frequency jamming operations, cyber attacks upon networked systems might not always have only "local" effects. As I mentioned in NPF's first posting on cyberwar, it has been reported that U.S. commanders opted not to use certain cyber-attacks against Iraq in 1991, Serbia in 1999, and Iraq in 2003 for fear of spin-off effects in interconnected international banking, communications, and financial systems. According to a U.S. general quoted in Aviation Week & Space Technology, for instance, American planners did not use one particular cyber technique to disable the French-made Iraqi air defense network because "[w]e were afraid we were going to take down all the automated banking machines in Paris."
We are apparently only just beginning to come to grips with the challenges of "collateral damage" in cyberspace. As a result, it can be very difficult to identify the appropriate scope of operational authority up and down the echelons of command. This may be one of the reasons why, despite the theoretical availability of very sophisticated and dangerous offensive software code, reported instances of cyber-attack in conflicts between states – e.g., in Russia's moves against Georgia in 2008 – so often still involve relatively simple distributed denial-of-service attacks of the sort that overwhelm specific websites by the sheer volume of queries or other interactions. (Such methods arguably minimize collateral damage because they do not necessarily involve the use of destructive code that can risk an uncontrolled "contagion" to other parts of the Internet.)
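The defining signature of the simple volumetric attacks described above is sheer request volume from coordinated sources. A minimal defensive-side sketch – with a hypothetical request-log format and an arbitrary threshold, neither drawn from the article – might flag sources whose traffic in a time window is anomalously heavy:

```python
from collections import Counter

def flag_heavy_sources(requests, threshold):
    """Flag sources whose request count in one time window exceeds a threshold.

    requests: iterable of (source_ip, timestamp) pairs within the window.
    Returns the set of source IPs exceeding the threshold.
    """
    counts = Counter(ip for ip, _ in requests)
    return {ip for ip, n in counts.items() if n > threshold}

# One source floods 500 requests into the window; another sends 3.
window = [("10.0.0.1", t) for t in range(500)] + [("10.0.0.2", t) for t in range(3)]
print(flag_heavy_sources(window, threshold=100))  # → {'10.0.0.1'}
```

Real denial-of-service detection is far harder – attackers distribute traffic across thousands of sources precisely to defeat per-source counting – but the sketch shows why such attacks are "simple": they work by volume, not by destructive code, which is also why they pose less risk of uncontrolled spillover.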
These issues sound technical, but they are also policy challenges. If we are not certain of narrowly-confined "tactical" effects, does every decision to use a cyber-technique have to be made as a "strategic" military choice – for example, by the President himself? What leeway is it permissible to give to military commanders at each level of authority? Since the problems of networked "collateral damage" are very much at issue, moreover, how do we translate traditional military operational law conceptions of necessity, proportionality, and discrimination between combatant and civilian targets into the idiosyncratic world of networked cyber operations? Such issues are said to be the subject of ongoing and intense debate within the Pentagon, and no doubt General Alexander's new Cyber Command.
The issue of who must make what decision also brings us quickly to the second challenge I'd like to mention here: speed. As noted, Colonel Boyd's "OODA loop" puts a premium upon one's ability to cycle rapidly and coherently through the process of observation, orientation, decision, and action: woe be unto the combatant whose adversary has a faster OODA reaction time. In the cyber arena, one doesn't need ungainly acronyms and cybernetic theories to understand this: attacks can be mounted using coordinated networks of computers acting as fast as their processors can churn out commands – and that is very fast indeed. (According to General Alexander, experts believe that some 247 billion e-mails are currently sent every day, more than 80 percent of which are simply electronic "spam" generally dispatched by computers rather than by any actual human "sender.") Using automated tools, would-be attackers can move with astonishing rapidity. Alexander recounted that computer systems belonging to the U.S. Department of Defense are "probed" some 250,000 times every hour – that is, a mind-boggling six million times a day.
To put it bluntly, the transactional speed of networked computer interactions raises difficult questions about the degree to which it is wise, or even possible, to keep human decision-makers fully "in the loop" for cyber operations. Even if one could manage the challenges of command-decision scope with respect to mounting various types of cyber attack, it seems clear that at least some of the tasks involved in defensive cyber operations may have to be undertaken too fast for any meaningful involvement – at least initially – by a human decision-maker. Furthermore, as both attack and defensive methodologies grow in sophistication, we may not be able to take it for granted that there will remain a crisp line between the two.
It is easy to conceptualize network defense as being akin to crouching behind a shield, and offense as stabbing with a sword or throwing a spear. Through this lens, one might hypothesize that it would be acceptable to have an automated defense but yet require offensive operations to occur only with affirmative permission from an authorized human decision-maker. But what if things were more complicated than that? There may, for instance, be cyber-analogues to shoving or striking an adversary with a shield, or parrying his blow with one's sword – or even to the old cowboy movie trick of shooting the gun out of an outlaw's hand. As is to some extent already true in the fluid battlespace of modern maneuver warfare on physical terrain, it may be unwise to assume that cyber "offense" and "defense" are cleanly separable (or even intelligible) as distinct and different functions. It does not appear that our policy apparatus is yet prepared to wrestle with such dilemmas.
At one point in his CSIS remarks, General Alexander was asked about the challenges of time-urgent response in the cyber context. In reply, he suggested the need for clear, standing rules of engagement (ROEs) – which is military jargon for pre-established standards for what sort of action military servicemembers at any specified level of command are authorized to take on their own discretion when confronted by particular circumstances. (ROEs, for instance, might govern when an infantryman may fire upon a seemingly hostile crowd, or when airmen can attack other aircraft or engage ground targets. Such rules set forth, in effect, when it is not necessary to wait for higher authority to approve using force.) According to Alexander, cyber conflict presents unprecedented challenges by requiring operators to function and adapt to situations at "netspeed." This, he suggested, will require much use of "automated" decision-making in cyber operations – within the scope, presumably, of highly-detailed standing ROEs. Individual human operators, to say nothing of hierarchic decision-making trees of military or political leadership, may simply be unable to act and respond quickly enough for effective cyber operations. I have no reason to doubt that this is true, but if it is, we will clearly have much soul-searching to do as we approach warmaking in this unforgiving environment.
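Conceptually, a standing cyber-ROE of the sort Alexander describes amounts to a pre-approved rule table: some observed events trigger an automated response at "netspeed," while others (offensive options in particular) are held pending human authorization. The event types, responses, and gating choices below are entirely hypothetical illustrations, not any actual rule set:

```python
# Hypothetical "standing ROE" table for automated cyber defense.
# Maps an observed event type to (pre-approved automated action,
# whether human authorization is required before further action).
STANDING_ROE = {
    "port_scan":           ("log_and_monitor", False),
    "malware_signature":   ("quarantine_host", False),
    "intrusion_confirmed": ("isolate_segment", False),
    "counterattack_option": ("hold", True),  # offense stays human-gated
}

def decide(event_type):
    """Return (action, escalate) per the standing rules.

    Unknown or unanticipated events default to holding and
    escalating to a human decision-maker.
    """
    return STANDING_ROE.get(event_type, ("hold", True))

print(decide("malware_signature"))    # → ('quarantine_host', False): acts at netspeed
print(decide("counterattack_option")) # → ('hold', True): waits for authorization
```

The hard policy work, of course, lies not in the lookup but in writing the table: deciding in advance, with the "necessary precision" Alexander calls for, which responses may safely be pre-delegated to a machine.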
At present, Alexander emphasized, we lack both the good real-time situational awareness and the "necessary precision" in our standing cyber-ROEs that we will need to meet these challenges. He might also have added, however, that we lack a clear awareness within the policy community of the ways in which cyber conflict will tax our traditional approaches to the policy aspects of military strategy and warfighting. Military planning staffs are today only just beginning to struggle with the policy, legal, ethical, and security strategy challenges of scope and speed in cyber operations, and the broader public policy community cannot afford to ignore these issues either.
General Alexander seems an excellent fit for his job, but as he himself recognizes, these questions reach matters above his new four-star pay grade. These are issues of top-shelf public policy import that will need to be much better understood, and in many cases more clearly addressed, by the White House, Congress, the policy community, and the public at large. There is naturally much about cyber war that one should not discuss in public, but we surely cannot go too long without a clearer and broadly-shared vision of a national cyber strategy – or even what it means to have a cyber strategy – and without a sound conceptual framework to help shape our struggles with the vexing and decidedly nontraditional issues that arise in this arena.
Christopher A. Ford was formerly Senior Fellow and Director of the Center for Technology and Global Security at Hudson Institute.