Commentary

Nuclear "De-Alerting" and Effective Decision Time

These remarks were presented by Dr. Ford on November 11, 2010, at a conference at the Hoover Institution on "Deterrence: Its Past and Future."

Senior Director for WMD and Counterproliferation, National Security Council

Good afternoon! My paper deals with the issue of deterrence through the prism of maximizing the decision time available to leaders in making nuclear weapons use decisions. I'd like to think it has some value in its own right, but I'd point out that this analysis may have at least some implications for the "virtual nuclear arsenals" debate too – for these decision-time and de-alerting questions overlap considerably, in an analytical sense, with "countervailing reconstitution" questions. Indeed, one might regard de-alerting as a subset of the reconstitution issue, or perhaps virtual nuclear arsenals as an extreme case of de-alerting.

Anyway, the jumping-off point for this exploration is the ongoing debate over nuclear force "de-alerting," but it's really part of my point that we should conceptualize the issue more broadly than that – namely, as a question of decision-making time on a broader timescale: from crisis escalation well before any potential attack warning, all the way through to after an attack may have actually occurred.

As I hardly need to tell this audience – which includes not only numerous scholars and other experts but also the famous "Four Horsemen" of the modern disarmament debate, Messrs. Shultz, Perry, Kissinger, and Nunn, not to mention Lord Browne – the de-alerting discourse plunges one into the challenges of what one might call operationalizing deterrence. Specifically, it concerns the difficulty of balancing presumed deterrent considerations against the risks of accident, error, and loss of control.

As a heuristic, think of this debate as being between those who prioritize minimizing two different types of risk. The first is "Type A" risk, namely, what one might call "advertence": things such as crisis stability, preemptive use incentives, etc. (Type A thinking is largely a deterrence discourse.) The second is "Type B" risk, that of "inadvertence," which focuses upon accidents, errors, and so forth.

Neither side in the de-alerting debate is dismissive of either type of risk. At issue, rather, is the balance between them and the tradeoffs that may have to be made in practice. Nuclear command and control is to a great extent a balancing act. In my paper, I try to conceive of it in terms of complexity theory: arrangements need to be tightly coupled enough to remain responsive to leadership control, yet loose enough that they can handle perturbations. The "right answer" – if there is one – lies where an organization hovers at peak organizational and adaptive fitness, on what complexity folks call the "edge of chaos": not a fixed point but a dynamic tension.

The fact that tradeoffs are often required as one addresses Type A or Type B risks is illustrated by the main thrust of the de-alerting debate. What I think of as the Blair/Sagan critique of current nuclear postures – and the core of the most compelling intellectual case for de-alerting – is that the nuclear command-and-control system is too tightly coupled and full of complicatedly nonlinear interactions to avoid, or be able to cope with, errors and accidents, particularly in the brief time between apparent warning of an incoming attack and the point by which one would have to launch one's alerted forces in order to get them out from under it.

Because of force vulnerabilities – and, Blair stresses, the fragility of the command systems that would be needed in order to mount and manage a retaliatory strike – nuclear states face incentives to adopt a de facto launch-on-warning (LOW) posture whether or not this is official policy. This combination, they argue, is potentially lethal.

This critique obviously involves elements of both Type A and Type B thinking, but its main focus is the reduction of Type B "inadvertence" risks. It stresses the need to expand the time before it is physically possible to launch weapons: extending this period, it is felt, will allow more chances to correct errors and think twice about launch.

The counter-narrative is one that stresses Type A concerns, though it does acknowledge Type B worries and tries to address them through remedies associated with what Scott Sagan has discussed as "high reliability" thinking (e.g., sensor redundancies). Type A counter-narrative proponents favor preserving the LOW option, even though they do not like LOW as a matter of standing policy, because they think it supports the deterrence of aggression. And they worry about de-alerting because they fear crisis instability and re-alerting or rearmament "race" dynamics.

The Type A narrative is thus also about maximizing decision time. The anti-de-alerters, however, see de-alerting itself as likely to constrict decision-making time by forcing leaders to take provocative steps earlier in a crisis than they would have to if they maintained forces that were already alerted. The two sides in the debate, therefore, are really just talking about different relevant time horizons.

My paper explores the tensions between these approaches, and suggests that de-alerting as a remedy is problematic precisely because it is a Type B solution that demands tradeoffs with Type A concerns on a quasi-zero-sum basis. Depending upon where one sits in the Type A or Type B camps, it might or might not seem necessary actually to make this tradeoff. I suggest, however, that we should look for solutions that do not demand a zero-sum tradeoff – and thus serve both Type A and Type B interests in a positive-sum way.

The closing sections of my paper argue in particular for reducing Type B risks in just such a way, offering some suggestions. Among other things, I argue for addressing not the capability for LOW (as de-alerting does) but rather the incentives that may drive possessors toward finding it attractive.

In this regard, I suggest that ballistic missile defense (BMD) can help provide something of a buffer against accidental launches – or indeed against false alarms suggesting limited attacks, such as the 1995 Russian scare involving a Norwegian sounding rocket.

I also argue for much more serious work on survivability for the command-and-control system, the very vulnerability of which Bruce Blair has argued creates dangerous LOW incentives. If Blair is right that command-system vulnerability is at the core of the incentive structure that deprives the post-attack period of real relevance for nuclear leaders, we should fix this by restoring the option of "ride out" that our expensive pursuit of survivable second-strike forces presupposes we need and desire. (Survivability may have seemed impossible in the Cold War context of massive nuclear "laydowns," but we should not assume that having an architecture capable of "ride out" is so quixotic in today's world – or in a future world of shrinking arsenals.)

My focus is on maximizing decision-making time by giving leaders more feasible options after the presumed time of enemy warhead impact, in ways that do not remove the arguably still aggression-deterring capability of LOW. Simply put, we may want to retain LOW but make it less attractive – thus addressing both Type A and Type B risks in a way upon which all sides in today's de-alerting debates may be able to agree.