If you thought that K was, say, twice as likely to appear as the first letter of an English word as it was to appear as the third letter, you checked the first box and wrote your estimate as 2:1. This was what the typical person did, as it happens. Danny and Amos replicated the demonstration with other letters—R, L, N, and V. Those letters all appeared more frequently as the third letter in an English word than as the first letter—by a ratio of two to one. Once again, people’s judgment was, systematically, very wrong. And it was wrong, Danny and Amos now proposed, because it was distorted by memory. It was simply easier to recall words that start with K than to recall words with K as their third letter.
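The letter-position claim is easy to check mechanically. Here is a minimal Python sketch that tallies, for each of those letters, how many dictionary words have it in the first position versus the third. The word-list path is an assumption (any newline-delimited English word list will do), and because the sketch counts distinct words rather than words weighted by how often they appear in text, the exact ratios will not match the two-to-one figure.

```python
# A minimal sketch for checking the letter-position claim against a word
# list. The path /usr/share/dict/words is an assumption (common on Unix
# systems); any newline-delimited English word list would do.

def position_counts(letter, path="/usr/share/dict/words"):
    """Count words that have `letter` in the first versus third position."""
    first = third = 0
    with open(path) as f:
        for line in f:
            word = line.strip().lower()
            if len(word) < 3 or not word.isalpha():
                continue
            if word[0] == letter:
                first += 1
            if word[2] == letter:
                third += 1
    return first, third

for letter in "krlnv":
    first, third = position_counts(letter)
    print(f"{letter.upper()}: first={first}, third={third}")
```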
The more easily people can call some scenario to mind—the more available it is to them—the more probable they find it to be. Any fact or incident that was especially vivid, or recent, or common—or anything that happened to preoccupy a person—was likely to be recalled with special ease, and so be disproportionately weighted in any judgment. Danny and Amos had noticed how oddly, and often unreliably, their own minds recalculated the odds, in light of some recent or memorable experience. For instance, after they drove past a gruesome car crash on the highway, they slowed down: Their sense of the odds of being in a crash had changed. After seeing a movie that dramatizes nuclear war, they worried more about nuclear war; indeed, they felt that it was more likely to happen. The sheer volatility of people’s judgment of the odds—their sense of the odds could be changed by two hours in a movie theater—told you something about the reliability of the mechanism that judged those odds.
They went on to describe nine other equally odd mini-experiments that got at various tricks that memory might play on judgment. Danny thought of them as very much like the optical illusions the Gestalt psychologists he had loved in his youth planted in their texts. You saw them and were fooled by them and wanted to know why. He and Amos were dramatizing tricks of the mind rather than tricks of the eye, but the effect was similar, and the material available to them appeared to be even more abundant. They read lists of people’s names to Oregon students, for instance. Thirty-nine names, read at a rate of two seconds per name. The names were all easily identifiable as male or female. A few were the names of famous people—Elizabeth Taylor, Richard Nixon. A few were names of slightly less famous people—Lana Turner, William Fulbright. One list consisted of nineteen male names and twenty female names, the other of twenty male names and nineteen female names. The list that had more female names on it had more names of famous men, and the list that had more male names on it contained the names of more famous women. The unsuspecting Oregon students, having listened to a list, were then asked to judge if it contained the names of more men or more women.
They almost always got it backward: If the list had more male names on it, but the women’s names were famous, they thought the list contained more female names, and vice versa. “Each of the problems had an objectively correct answer,” Amos and Danny wrote, after they were done with their strange mini-experiments. “This is not the case in many real-life situations where probabilities are judged. Each occurrence of an economic recession, a successful medical operation, or a divorce, is essentially unique, and its probability cannot be evaluated by a simple tally of instances. Nevertheless, the availability heuristic may be applied to evaluate the likelihood of such events. In judging the likelihood that a particular couple will be divorced, for example, one may scan one’s memory for similar couples which this question brings to mind. Divorces will appear probable if divorces are prevalent among the instances that are retrieved in this manner.”
The point, once again, wasn’t that people were stupid. This particular rule they used to judge probabilities (the easier it is for me to retrieve from my memory, the more likely it is) often worked well. But if you presented people with situations in which the evidence they needed to judge them accurately was hard for them to retrieve from their memories, and misleading evidence came easily to mind, they made mistakes. “Consequently,” Amos and Danny wrote, “the use of the availability heuristic leads to systematic biases.” Human judgment was distorted by . . . the memorable.
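The mechanism they described can be put in computational terms: estimate a frequency by sampling from memory, with vivid instances oversampled. The sketch below is purely an illustrative model, not a reconstruction of any experiment of theirs; the base rate, the salience weight, and the sample size are all invented for the illustration.

```python
import random

# An illustrative model of the availability heuristic: a probability is
# judged by the share of "divorced" couples among instances retrieved from
# memory, and memorable instances are retrieved more readily. The base
# rate, the salience weight, and the sample size are invented.

random.seed(0)

TRUE_DIVORCE_RATE = 0.20
# Each remembered couple gets a retrieval weight; divorces are dramatic,
# so they are assumed here to be three times as retrievable.
memory = [("divorced", 3.0) if random.random() < TRUE_DIVORCE_RATE
          else ("married", 1.0) for _ in range(1000)]

def judged_probability(memory, n_retrieved=20):
    """Estimate the divorce rate from a salience-weighted memory sample."""
    outcomes, weights = zip(*memory)
    sample = random.choices(outcomes, weights=weights, k=n_retrieved)
    return sample.count("divorced") / n_retrieved

estimates = [judged_probability(memory) for _ in range(1000)]
print(f"true rate:     {TRUE_DIVORCE_RATE:.2f}")
print(f"judged (mean): {sum(estimates) / len(estimates):.2f}")  # ~0.43, biased up
```

Because the dramatic instances carry extra retrieval weight, the judged probability settles well above the true rate, which is exactly the kind of systematic bias they had in mind.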
Having identified what they took to be two of the mind’s mechanisms for coping with uncertainty, they naturally asked: Are there others? Apparently they were unsure. Before they left Eugene, they jotted down some notes about other possibilities. “The conditionality heuristic,” they called one of these. In judging the degree of uncertainty in any situation, they noted, people made “unstated assumptions.” “In assessing the profit of a given company, for example, people tend to assume normal operating conditions and make their estimates contingent upon that assumption,” they wrote in their notes. “They do not incorporate into their estimates the possibility that these conditions may be drastically changed because of a war, sabotage, depressions, or a major competitor being forced out of business.” Here, clearly, was another source of error: not just that people don’t know what they don’t know, but that they don’t bother to factor their ignorance into their judgments.
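Their company example comes down to simple expected-value arithmetic: the estimate people report is conditional on normal operations, while the estimate the question actually calls for averages over the disruptions they leave out. A sketch, with every figure invented for illustration:

```python
# Illustrative arithmetic for the "conditionality heuristic." The estimate
# people report is conditional on normal operations; the unconditional
# estimate folds in the chance of disruption. All figures are invented.

profit_normal = 10.0     # forecast profit under normal operating conditions
profit_disrupted = -4.0  # profit if war, sabotage, or a depression hits
p_disruption = 0.10      # judged probability of such a disruption

conditional_estimate = profit_normal  # what people tend to report
unconditional_estimate = ((1 - p_disruption) * profit_normal
                          + p_disruption * profit_disrupted)

print(conditional_estimate)    # 10.0
print(unconditional_estimate)  # 8.6, lower once the ignored risk is priced in
```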
Another possible heuristic they called “anchoring and adjustment.” They first dramatized its effects by giving a bunch of high school students five seconds to guess the answer to a math question. The first group was asked to estimate this product:
8 × 7 × 6 × 5 × 4 × 3 × 2 × 1
The second group was asked to estimate this product:
1 × 2 × 3 × 4 × 5 × 6 × 7 × 8
Five seconds wasn’t long enough to actually do the math: The kids had to guess. The two groups’ answers should have been at least roughly the same, but they weren’t, even roughly. The first group’s median answer was 2,250. The second group’s median answer was 512. (The right answer is 40,320.) The reason the kids in the first group guessed a higher number was that they had used 8 as a starting point, while the kids in the second group had used 1: Each group computed the first few steps of its sequence and adjusted upward from that partial product, and because such adjustments are typically insufficient, both groups fell far short of the true answer.
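The numbers are easy to verify; everything in the sketch below comes straight from the experiment as described above.

```python
import math

# The product the students had five seconds to estimate. Both orderings
# are, of course, the same number: 8 factorial.
descending = 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1
ascending = 1 * 2 * 3 * 4 * 5 * 6 * 7 * 8
assert descending == ascending == math.factorial(8) == 40320

# The reported medians: both groups underestimated badly, and the group
# shown the descending sequence (starting from 8) guessed far higher.
print(descending)  # 40320
print(2250, 512)   # median guesses: descending group vs. ascending group
```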
It was almost too easy to dramatize this weird trick of the mind. People could be anchored with information that was totally irrelevant to the problem they were being asked to solve. For instance, Danny and Amos asked their subjects to spin a wheel of fortune with slots on it that were numbered 0 through 100. Then they asked the subjects to estimate the percentage of African countries in the United Nations. The people who spun a higher number on the wheel tended to guess that a higher percentage of the United Nations consisted of African countries than did those for whom the needle landed on a lower number. What was going on here? Was anchoring a heuristic, the way that representativeness and availability were heuristics? Was it a shortcut that people used, in effect, to answer, to their own satisfaction, a question whose true answer they could not divine? Amos thought it was; Danny thought it wasn’t. They never came to sufficient agreement to write a paper on the subject. Instead they dropped it into summaries of their work. “We had to stick anchoring in, because the result was so spectacular,” said Danny. “But as a result we wound up with a vague notion of what a heuristic is.”
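One way to make Amos’s reading concrete is to model the wheel-of-fortune result as anchoring with insufficient adjustment: start from the irrelevant anchor and move only part of the way toward a noisy private belief. The sketch below is an interpretation, not anything from their notes; the true percentage, the noise, and the adjustment factor are all invented.

```python
import random
import statistics

# An illustrative anchoring-and-adjustment model of the wheel-of-fortune
# study: subjects start from the irrelevant anchor and adjust toward a
# noisy private belief, but only part of the way. The true percentage,
# the noise, and the adjustment factor are all invented for illustration.

random.seed(0)

TRUE_PERCENTAGE = 35  # assumed true share of UN members that were African
ADJUSTMENT = 0.6      # fraction of the gap closed; < 1.0 means anchoring

def estimate(anchor):
    belief = TRUE_PERCENTAGE + random.gauss(0, 10)  # noisy private belief
    return anchor + ADJUSTMENT * (belief - anchor)  # insufficient adjustment

low_spins = [estimate(random.randint(0, 30)) for _ in range(1000)]
high_spins = [estimate(random.randint(70, 100)) for _ in range(1000)]
print(f"median estimate after low spins:  {statistics.median(low_spins):.0f}")
print(f"median estimate after high spins: {statistics.median(high_spins):.0f}")
```

Push ADJUSTMENT toward 1.0 and the anchor’s pull vanishes, which is one way to frame the disagreement: whether people were really using a shortcut, or something else entirely was going on.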