In his talk to the historians, Amos described their occupational hazard: the tendency to take whatever facts they had observed (neglecting the many facts that they did not or could not observe) and make them fit neatly into a confident-sounding story:
All too often, we find ourselves unable to predict what will happen; yet after the fact we explain what did happen with a great deal of confidence. This “ability” to explain that which we cannot predict, even in the absence of any additional information, represents an important, though subtle, flaw in our reasoning. It leads us to believe that there is a less uncertain world than there actually is, and that we are less bright than we actually might be. For if we can explain tomorrow what we cannot predict today, without any added information except the knowledge of the actual outcome, then this outcome must have been determined in advance and we should have been able to predict it. The fact that we couldn’t is taken as an indication of our limited intelligence rather than of the uncertainty that is in the world. All too often, we feel like kicking ourselves for failing to foresee that which later appears inevitable. For all we know, the handwriting might have been on the wall all along. The question is: was the ink visible?
It wasn’t just sports announcers and political pundits who radically revised their narratives, or shifted focus, so that their stories seemed to fit whatever had just happened in a game or an election. Historians imposed false order upon random events, too, probably without even realizing what they were doing. Amos had a phrase for this. “Creeping determinism,” he called it—and jotted in his notes one of its many costs: “He who sees the past as surprise-free is bound to have a future full of surprises.”
A false view of what has happened in the past makes it harder to see what might occur in the future. The historians in his audience of course prided themselves on their “ability” to construct, out of fragments of some past reality, explanatory narratives of events which made them seem, in retrospect, almost predictable. The only question that remained, once the historian had explained how and why some event had occurred, was why the people in his narrative had not seen what the historian could now see. “All the historians attended Amos’s talk,” recalled Biederman, “and they left ashen-faced.”
After he had heard Amos explain how the mind arranged historical facts in ways that made past events feel a lot less uncertain, and a lot more predictable, than they actually were, Biederman felt certain that Amos and Danny’s work could infect any discipline in which experts were required to judge the odds of an uncertain situation—which is to say, great swaths of human activity. And yet the ideas that Danny and Amos were generating were still very much confined to academia. Some professors, most of them professors of psychology, had heard of them. And no one else. It was not at all clear how two guys working in relative obscurity at Hebrew University could spread the word of their discoveries to people outside their field.
In the early months of 1973, after their return to Israel from Eugene, Amos and Danny set to work on a long article summarizing their findings. They wanted to gather in one place the chief insights of the four papers they had already written and allow readers to decide what to make of them. “We decided to present the work for what it was: a psychological investigation,” said Danny. “We’d leave the big implications to others.” He and Amos both agreed that the journal Science offered them the best hope of reaching people in fields outside of psychology.
Their article was less written than it was constructed. (“A sentence was a good day,” said Danny.) As they were building it, they stumbled upon what they saw as a clear path for their ideas to enter everyday human life. They had been gripped by “The Decision to Seed Hurricanes,” a paper coauthored by Stanford professor Ron Howard. Howard was one of the founders of a new field called decision analysis. Its idea was to force decision makers to assign probabilities to various outcomes: to make explicit the thinking that went into their decisions before they made them. How to deal with killer hurricanes was one example of a problem that policy makers might use decision analysts to help address. Hurricane Camille had just wiped out a large tract of the Mississippi Gulf Coast and obviously might have done a lot more damage—say, if it had hit New Orleans or Miami. Meteorologists thought they now had a technique—dumping silver iodide into the storm—to reduce the force of a hurricane, and possibly even alter its path. Seeding a hurricane wasn’t a simple matter, however. The moment the government intervened in the storm, it was implicated in whatever damage that storm inflicted. The public, and the courts of law, were unlikely to give the government credit for what had not happened, for who could say with certainty what would have happened if the government had not intervened? Instead, society would hold its leaders responsible for whatever damage the storm inflicted, wherever it hit. Howard’s paper explored how the government might decide what to do—and that involved estimating the odds of various outcomes.
But the way the decision analysts elicited probabilities from the minds of the hurricane experts was, in Danny and Amos’s eyes, bizarre. The analysts would present the hurricane seeding experts inside government with a wheel of fortune on which, say, a third of the slots were painted red. They’d ask: “Would you rather bet on the red sector of this wheel or bet that the seeded hurricane will cause more than $30 billion of property damage?” If the hurricane authority said he would rather bet on red, he was saying that he thought the chance the hurricane would cause more than $30 billion of property damage was less than 33 percent. And so the decision analysts would show him another wheel, with, say, 20 percent of the slots painted red. They did this until the percentage of red slots matched up with the authority’s sense of the odds that the hurricane would cause more than $30 billion of property damage. They just assumed that the hurricane seeding experts had an ability to correctly assess the odds of highly uncertain events.
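In modern terms, the matching procedure the analysts used amounts to a search for the expert’s point of indifference between the wheel and the uncertain event. The sketch below is only an illustration of that idea, assuming a perfectly consistent expert and a bisection rule for choosing the next wheel; the function names and the simulated “expert” are invented for the example, not taken from Howard’s paper or the analysts’ actual practice.

```python
# Illustrative sketch (not the analysts' real tool): repeatedly offer the expert
# a choice between betting on a wheel with a given fraction of red slots and
# betting that the event (e.g., damage above some threshold) will occur, and
# narrow the red fraction until the two bets feel equally attractive.

def elicit_probability(prefers_red, low=0.0, high=1.0, tolerance=0.01):
    """Return an estimate of the expert's subjective probability of the event.

    prefers_red(red_fraction) should return True if the expert would rather
    bet on a wheel whose red sector covers that fraction than on the event.
    """
    while high - low > tolerance:
        red_fraction = (low + high) / 2      # propose a wheel with this much red
        if prefers_red(red_fraction):
            high = red_fraction              # expert treats the event as less likely than this
        else:
            low = red_fraction               # expert treats the event as more likely than this
    return (low + high) / 2


if __name__ == "__main__":
    # Pretend the expert privately believes the damage threshold will be
    # exceeded with probability 0.22 and answers every question consistently.
    subjective_belief = 0.22
    expert = lambda red_fraction: red_fraction > subjective_belief

    estimate = elicit_probability(expert)
    print(f"Elicited probability: {estimate:.2f}")   # roughly 0.22
```

The sketch makes the analysts’ hidden assumption explicit: the loop only recovers a sensible number if the expert’s answers are internally consistent, which is exactly what Danny and Amos doubted.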
Danny and Amos had already shown that people’s ability to judge probabilities was queered by various mechanisms used by the mind when it faced uncertainty. They believed that they could use their new understanding of the systematic errors in people’s judgment to improve that judgment—and, thus, to improve people’s decision making. For instance, any person’s assessment of the probability of a killer storm making landfall in 1973 was bound to be warped by the ease with which they recalled the fresh experience of Hurricane Camille. But how, exactly, was that judgment warped? “We thought decision analysis would conquer the world and we would help,” said Danny.
The leading decision analysts were clustered around Ron Howard in Menlo Park, California, at a place called the Stanford Research Institute. In the fall of 1973 Danny and Amos flew to meet with them. But before they could figure out exactly how they were going to bring their ideas about uncertainty into the real world, uncertainty intervened. On October 6, the armies of Egypt and Syria—with troops and planes and money from as many as nine other Arab countries—launched an attack on Israel. Israeli intelligence analysts had dramatically misjudged the odds of an attack of any sort, much less a coordinated one. The army was caught off guard. On the Golan Heights, a hundred or so Israeli tanks faced fourteen hundred Syrian tanks. Along the Suez Canal, a garrison of five hundred Israeli troops and three tanks was quickly overrun by two thousand Egyptian tanks and one hundred thousand Egyptian soldiers. On a cool, cloudless, perfect morning in Menlo Park, Amos and Danny heard the news of the shocking Israeli losses. They raced to the airport for the first flight back home, so that they might fight in yet another war.
* * *