The Undoing Project: A Friendship That Changed Our Minds

The Oregon researchers noticed, as the Hebrew University professors had noticed, that whatever Amos and Danny were talking about must be funny, as they spent half their time laughing. They bounced back and forth between Hebrew and English and broke each other up in both. They happened to be in Eugene, Oregon, surrounded by joggers and nudists and hippies and forests of Ponderosa pine, but they could just as well have been in Mongolia. “I don’t think either of them was attached to physical location,” said Slovic. “It didn’t matter where they were. All that mattered were the ideas.” Everyone also noticed the intense privacy of their conversation. Before they had arrived in Eugene, Amos had made some faint noises about including Paul Slovic in the collaboration, but once Danny arrived it became clear to Slovic that he didn’t belong. “We weren’t a threesome together much,” he said. “They didn’t want anyone else in the room.”

In a funny way, they didn’t even want themselves in the room. They wanted to be the people they became when they were with each other. Work, for Amos, had always been play: If it wasn’t fun, he simply didn’t see the point in doing it. Work now became play for Danny, too. This was new. Danny was like a kid with the world’s best toy closet who is so paralyzed by indecision that he never gets around to enjoying his possessions but instead just stands there worrying himself to death over whether to grab his Super Soaker or take his electric scooter out for a spin. Amos rooted around in Danny’s mind and said, “Screw it, we’re going to play with all of this stuff.” There would be times, later in their relationship, when Danny would go into a deep funk—a depression, almost—and walk around saying, “I’m out of ideas.” Even that Amos found funny. Their mutual friend Avishai Margalit recalled, “When he heard that Danny was saying, ‘I’m finished, I’m out of ideas,’ Amos laughed and said, ‘Danny has more ideas in one minute than a hundred people have in a hundred years.’” When they sat down to write they nearly merged, physically, into a single form, in a way that the few people who happened to catch a glimpse of them found odd. “They wrote together sitting right next to each other at the typewriter,” recalls Michigan psychologist Richard Nisbett. “I cannot imagine. It would be like having someone else brush my teeth for me.” The way Danny put it was, “We were sharing a mind.”

Their first paper—which they still half-thought of as a joke played on the academic world—had shown that people faced with a problem that had a statistically correct answer did not think like statisticians. Even statisticians did not think like statisticians. “Belief in the Law of Small Numbers” had raised an obvious next question: If people did not use statistical reasoning, even when faced with a problem that could be solved with statistical reasoning, what kind of reasoning did they use? If they did not think, in life’s many chancy situations, like a card counter at a blackjack table, how did they think? Their next paper offered a partial answer to the question. It was called . . . well, Amos had this thing about titles. He refused to start a paper until he had decided what it would be called. He believed the title forced you to come to grips with what your paper was about.

And yet the titles that he and Danny put on their papers were inscrutable. They had to play, at least in the beginning, by the rules of the academic game, and in that game it wasn’t quite respectable to be easily understood. Their first attempt to describe how people formed judgments they titled “Subjective Probability: A Judgment of Representativeness.” Subjective probability—a person might just make out what that meant. Subjective probability meant: the odds you assign to any given situation when you are more or less guessing. Look outside the window at midnight and see your teenage son weaving his way toward your front door, and say to yourself, “There’s a 75 percent chance he’s been drinking”—that’s subjective probability. But “A Judgment of Representativeness”: What the hell was that? “Subjective probabilities play an important role in our lives,” they began. “The decisions we make, the conclusions we reach, and the explanations we offer are usually based on our judgments of the likelihood of uncertain events such as success in a new job, the outcome of an election, or the state of a market.” In these and many other uncertain situations, the mind did not naturally calculate the correct odds. So what did it do?

The answer they now offered: It replaced the laws of chance with rules of thumb. These rules of thumb Danny and Amos called “heuristics.” And the first heuristic they wanted to explore they called “representativeness.”

When people make judgments, they argued, they compare whatever they are judging to some model in their minds. How much do those clouds resemble my mental model of an approaching storm? How closely does this ulcer resemble my mental model of a malignant cancer? Does Jeremy Lin match my mental picture of a future NBA player? Does that belligerent German political leader resemble my idea of a man capable of orchestrating genocide? The world’s not just a stage. It’s a casino, and our lives are games of chance. And when people calculate the odds in any life situation, they are often making judgments about similarity—or (strange new word!) representativeness. You have some notion of a parent population: “storm clouds” or “gastric ulcers” or “genocidal dictators” or “NBA players.” You compare the specific case to the parent population.

Amos and Danny left unaddressed the question of how exactly people formed mental models in the first place, and how they made judgments of similarity. Instead, they said, let’s focus on cases where the mental model that people have in their heads is fairly obvious. The more similar the specific case is to the notion in your head, the more likely you are to believe that the case belongs to the larger group. “Our thesis,” they wrote, “is that, in many situations, an event A is judged to be more probable than an event B whenever A appears more representative than B.” The more the basketball player resembles your mental model of an NBA player, the more likely you will think him to be an NBA player.

They had a hunch that people, when they formed judgments, weren’t just making random mistakes—that they were doing something systematically wrong. The weird questions they put to Israeli and American students were designed to tease out the pattern in human error. The problem was subtle. The rule of thumb they had called representativeness wasn’t always wrong. If the mind’s approach to uncertainty was occasionally misleading, it was because it was often so useful. Much of the time, the person who can become a good NBA player matches up pretty well with the mental model of “good NBA player.” But sometimes a person does not—and in the systematic errors they led people to make, you could glimpse the nature of these rules of thumb.

For instance, in families with six children, the birth order B G B B B B was about as likely as G B G B B G. But Israeli kids—like pretty much everyone else on the planet, it would emerge—naturally seemed to believe that G B G B B G was a more likely birth sequence. Why? “The sequence with five boys and one girl fails to reflect the proportion of boys and girls in the population,” they explained. It was less representative. What is more, if you asked the same Israeli kids to choose the more likely birth order in families with six children—B B B G G G or G B B G B G—they overwhelmingly opted for the latter. But the two birth orders are equally likely. So why did people almost universally believe that one was far more likely than the other? Because, said Danny and Amos, people thought of birth order as a random process, and the second sequence looks more “random” than the first.
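A quick back-of-the-envelope check makes the point, under the standard assumptions (not spelled out in the passage) that each birth is independent and a boy or a girl is equally likely. Any specific sequence of six births then has exactly the same probability, however “random” or lopsided it looks:

$$
P(\text{B G B B B B}) = P(\text{G B G B B G}) = P(\text{B B B G G G}) = \left(\tfrac{1}{2}\right)^{6} = \tfrac{1}{64}
$$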
