When we hear the sound of hoofbeats, should we think horses or zebras? The question is a classic problem of intelligence analysis. Too often in recent years the CIA, FBI and Department of Homeland Security have gotten it wrong—most recently with the Christmas Day underwear bomber, who was able to board a U.S.-bound flight despite plenty of early warning signs. Political scientist Robert Jervis wants to know the reason for such error.
In Why Intelligence Fails, Mr. Jervis examines two important U.S. intelligence lapses and tries to account for what went awry. After both, the CIA hired Mr. Jervis—a longtime student of international affairs—to help the agency sort out its mistakes. He thus brings an invaluable perspective as a smart outsider with sufficient inside access to appraise the agency’s blind spots.
The first of his two cases is the CIA’s failure to grasp the weakness of the Iranian monarchy on the cusp of the Iranian revolution in 1979. “An island of stability” is what President Jimmy Carter called Iran just before the Islamic volcano erupted. No doubt the CIA estimates that Mr. Carter saw were not quite so ludicrously sanguine, but they were still dangerously inaccurate.
Mr. Jervis draws a striking portrait of an intelligence agency in disarray. He is particularly surprised by the “paucity of resources” dedicated to Iran in the late 1970s. The CIA had assigned just two analysts to assess Iranian politics and two more to study the country’s economy, supplemented by a small, unproductive station in Tehran.
Making matters worse, the members of this tiny group were caught in a loop of circular reasoning. They were convinced that Iran’s burgeoning opposition was not a threat to Shah Mohammad Reza Pahlavi’s government because he had not cracked down on it. But as Mr. Jervis notes, the analysts’ key indicator of trouble—a crackdown—would occur only “if the crisis became very severe.” In the event, the crisis did become very severe—and the shah still did not crack down. The analysts relied on what turned out to be a worthless metric.
Even if the CIA’s analysts had not fallen into a logic-trap of their own devising, the agency would have faced a larger challenge. “Predicting revolutions is very hard,” Mr. Jervis aptly notes. Neither revolutionaries nor those in power know where their struggle is going. Why should outsiders have a better sense of what lies ahead? Foreign intelligence services are at a particular disadvantage: The “CIA and its counterparts are in the business of stealing secrets, but secrets are rarely at the heart of revolutions.”
Secrets did lie at the heart of Mr. Jervis’s second case: the intelligence community’s erroneous National Intelligence Estimate in October 2002 declaring that Iraq was accumulating weapons of mass destruction. Mr. Jervis examines the explanations offered for the mistake—perhaps the most closely studied intelligence lapse since Pearl Harbor—and finds them wanting.
Mr. Jervis rejects the contention that the CIA’s reporting was politicized, produced to align with whatever the Bush administration wanted to hear. “This narrative,” he writes, “conforms to common sense” but was refuted by several nonpartisan investigations and, implicitly, by the “uniform surprise—indeed disbelief” of the entire intelligence community when weapons of mass destruction were not found after the war. If anything, the charge of politicization “has been a barrier to more careful thought.” A host of other problems, Mr. Jervis says, explain the fiasco.
One was George Tenet’s mismanagement as director of the CIA. In the run-up to the war, a ferocious interagency battle raged about the significance of aluminum tubes that Iraq had been importing. The CIA believed they were intended for centrifuges to enrich uranium, hence part of Iraq’s supposed WMD effort. Other government entities, including the State Department’s Bureau of Intelligence and Research, insisted that the tubes were not well-suited to this purpose—which turned out to be the case. Although Mr. Tenet was responsible for coordinating the work of all 15 agencies of the U.S. intelligence community, he was, Mr. Jervis remarks, “physically, politically, and psychologically” distant from analysts outside the CIA. He thus did not even know about the long-running dispute until the intelligence estimate was actually being drafted, just months before the U.S. went to war.
Mr. Jervis calls Mr. Tenet’s inattention a “stunning failure.” But he concludes, surprisingly, that the aluminum-tube assessment and other theoretically “correctable errors”—like the CIA’s reliance on the worthless information funneled to it by the Iraqi defector known as “Curveball”—were not at the root of the problem. We might like to think that “bad outcomes are explained by bad processes and that fixing the intelligence machinery will solve the problems,” writes Mr. Jervis, but that is not always true.
In the case of Iraq’s weapons of mass destruction, the real failure lay not in the content of the National Intelligence Estimate but in the certainty—“slam dunk” was Mr. Tenet’s phrase—with which it was put forward. Such confidence gave policy makers little reason to pause. But even if they had paused—and posed hard questions—Mr. Jervis doubts that the intelligence community would have substantially revised its analysis. His conclusion is that the debacle was almost preordained by one overriding factor: The intelligence community’s interpretation of Saddam Hussein’s motives and behavior, however wrong it turned out to be, was “very plausible, much more so than the alternatives.”
In short, there are limits to the ability of intelligence agencies to understand the world and keep us safe. Those pounding hoofbeats might be horses or zebras—or zebras painted to look like horses.