The Undoing Project: A Friendship that Changed the World

Adult minds were too self-deceptive. Children’s minds were a different matter. Danny created a course in judgment for elementary school children, Amos briefly taught a similar class to high school students, and they put together a book proposal. “We found these experiences highly encouraging,” they wrote. If they could teach Israeli kids how to think—how to detect their own seductive but misleading intuition and to correct for it—who knew where it might lead? Perhaps one day those children would grow up to see the wisdom of encouraging Henry Kissinger’s next efforts to make peace between Israel and Syria. But this, too, they never followed through on. They never went broad. It was as if the temptation to address the public interfered with their interest in each other’s minds.

Instead, Amos invited Danny to explore the question that had kept Amos interested in psychology: How did people make decisions? “One day, Amos just said, ‘We’re finished with judgment. Let’s do decision making,’” recalled Danny.

The distinction between judgment and decision making appeared as fuzzy as the distinction between judgment and prediction. But to Amos, as to other mathematical psychologists, they were distinct fields of inquiry. A person making a judgment was assigning odds. How likely is it that that guy will be a good NBA player? How risky is that triple-A-rated subprime mortgage–backed CDO? Is the shadow on the X-ray cancer? Not every judgment is followed by a decision, but every decision implies some judgment. The field of decision making explored what people did after they had formed some judgment—after they knew the odds, or thought they knew the odds, or perhaps had judged the odds unknowable. Do I pick that player? Do I buy that CDO? Surgery or chemotherapy? It sought to understand how people acted when faced with risky options.

Students of decision making had more or less given up on real-world investigations and reduced the field to the study of hypothetical gambles, made by subjects in a lab, in which the odds were explicitly stated. Hypothetical gambles played the same role in the study of decision making that the fruit fly played in the study of genetics. They served as proxies for phenomena impossible to isolate in the real world. To introduce Danny to his field—Danny knew nothing about it—Amos gave him an undergraduate textbook on mathematical psychology that he had written with his teacher Clyde Coombs and another Coombs student, Robyn Dawes, the researcher who had confidently and incorrectly guessed “Computer scientist!” when Danny handed him the Tom W. sketch in Oregon. Then he directed Danny to a very long chapter called “Individual Decision Making.”

The history of decision theory—the textbook explained to Danny—began in the early eighteenth century, with dice-rolling French noblemen asking court mathematicians to help them figure out how to gamble. The expected value of a gamble was the sum of its outcomes, each weighted by the probability of its occurring. If someone offers you a coin flip, and you win $100 if the coin lands on heads but lose $50 if it lands on tails, the expected value is $100 × 0.5 + (-$50) × 0.5, or $25. If you follow the rule that you take any bet with a positive expected value, you take the bet. But anyone with eyes could see that people, when they made bets, didn’t always act as if they were seeking to maximize their expected value. Gamblers accepted bets with negative expected values; if they didn’t, casinos wouldn’t exist. And people bought insurance, paying premiums that exceeded their expected losses; if they didn’t, insurance companies would have no viable business. Any theory pretending to explain how a rational person should take risks must at least take into account the common human desire to buy insurance, and other cases in which people systematically failed to maximize expected value.
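A minimal sketch of that arithmetic, in Python. The coin-flip stakes are the ones from the paragraph above; the function name is purely illustrative:

```python
# Expected value: each outcome weighted by its probability, then summed.
def expected_value(outcomes):
    """outcomes: list of (payoff, probability) pairs."""
    return sum(payoff * prob for payoff, prob in outcomes)

# The coin flip from the text: win $100 on heads, lose $50 on tails.
coin_flip = [(100, 0.5), (-50, 0.5)]
print(expected_value(coin_flip))  # 25.0 -- positive, so the rule says take the bet
```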

The major theory of decision making, Amos’s textbook explained, had been published in the 1730s by a Swiss mathematician named Daniel Bernoulli. Bernoulli sought to account for how people actually behaved a bit better than simple calculations of expected value could. “Let us suppose a pauper happens to acquire a lottery ticket by which he may with equal probability win either nothing or 20,000 ducats,” he wrote, back when a ducat was a ducat. “Will he have to evaluate the worth of the ticket as 10,000 ducats, or would he be acting foolishly if he sold it for 9,000 ducats?” To explain why a pauper would prefer 9,000 ducats to a 50-50 chance to win 20,000, Bernoulli resorted to sleight of hand. People didn’t maximize value, he said; they maximized “utility.”

What was a person’s “utility”? (That odd, off-putting word here meant something like “the value a person assigns to money.”) Well, that depended on how much money the person had to begin with. But a pauper holding a lottery ticket with an expected value of 10,000 ducats would certainly experience greater utility from 9,000 ducats in cash.

“People will choose whatever they most want” is not all that helpful as a theory to predict human behavior. What saved “expected utility theory,” as it came to be called, from being so general as to be meaningless were its assumptions about human nature. To his assumption that people making decisions sought to maximize utility, Bernoulli added an assumption that people were “risk averse.” Amos’s textbook defined risk aversion this way: “The more money one has, the less he values each additional increment, or, equivalently, that the utility of any additional dollar diminishes with an increase in capital.” You value the second thousand dollars you get your hands on a bit less than you do the first thousand, just as you value the third thousand a bit less than the second thousand. The marginal value of the dollars you give up to buy fire insurance on your house is less than the marginal value of the dollars you lose if your house burns down—which is why even though the insurance is, strictly speaking, a stupid bet, you buy it. You place less value on the $1,000 you stand to win flipping a coin than you do on the $1,000 already in your bank account that you stand to lose—and so you reject the bet. A pauper places so much value on the first 9,000 ducats he gets his hands on that the risk of not having them overwhelms the temptation to gamble, at favorable odds, for more.
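A rough sketch of how diminishing marginal utility produces those choices. A square-root curve stands in here for any concave utility function (Bernoulli’s own choice was logarithmic); the wealth levels in the coin-flip example are assumptions made only for illustration:

```python
import math

# Any concave curve captures "each additional dollar is worth a bit less";
# a square root is used here purely as an illustrative stand-in.
def utility(wealth):
    return math.sqrt(wealth)

def expected_utility(outcomes):
    """outcomes: list of (resulting_wealth, probability) pairs."""
    return sum(utility(w) * p for w, p in outcomes)

# The pauper: a 50-50 shot at 20,000 ducats versus 9,000 ducats for certain.
gamble = [(20_000, 0.5), (0, 0.5)]
print(expected_utility(gamble))   # ~70.7
print(utility(9_000))             # ~94.9 -- the sure 9,000 wins, despite its lower expected value

# The coin flip: someone holding $1,000 risks it all for a 50-50 shot at doubling it.
coin_flip = [(2_000, 0.5), (0, 0.5)]
print(expected_utility(coin_flip))  # ~22.4
print(utility(1_000))               # ~31.6 -- keeping the $1,000 wins, so the bet is rejected
```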

This was not to say that real people in the real world behaved as they did because they had the traits Bernoulli ascribed to them. Only that the theory seemed to describe some of what people did in the real world, with real money. It explained the desire to buy insurance. It distinctly did not explain the human desire to buy a lottery ticket, however. It effectively turned a blind eye to gambling. Odd this, as the search for a theory about how people made risky decisions had started as an attempt to make Frenchmen shrewder gamblers.

Amos’s text skipped over the long, tortured history of utility theory after Bernoulli all the way to 1944. A Hungarian Jew named John von Neumann and an Austrian anti-Semite named Oskar Morgenstern, both of whom fled Europe for America, somehow came together that year to publish what might be called the rules of rationality. A rational person making a decision between risky propositions, for instance, shouldn’t violate the von Neumann and Morgenstern transitivity axiom: If he preferred A to B and B to C, then he should prefer A to C. Anyone who preferred A to B and B to C but then turned around and preferred C to A violated expected utility theory. Among the remaining rules, maybe the most critical—given what would come—was what von Neumann and Morgenstern called the “independence axiom.” This rule said that a choice between two gambles shouldn’t be changed by the introduction of some irrelevant alternative. For example: You walk into a deli to get a sandwich and the man behind the counter says he has only roast beef and turkey. You choose turkey. As he makes your sandwich he looks up and says, “Oh, yeah, I forgot I have ham.” And you say, “Oh, then I’ll take the roast beef.” Von Neumann and Morgenstern’s axiom said, in effect, that you can’t be considered rational if you switch from turkey to roast beef just because they found some ham in the back.
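A small sketch of what the transitivity rule demands, expressed as a check over pairwise preferences. The item names and the helper function are hypothetical, introduced only to make the cycle concrete:

```python
from itertools import permutations

def violates_transitivity(prefers, items):
    """prefers(a, b) -> True if a is preferred to b.
    Returns True if some triple has A over B and B over C but also C over A."""
    for a, b, c in permutations(items, 3):
        if prefers(a, b) and prefers(b, c) and prefers(c, a):
            return True
    return False

# Illustrative preferences that cycle: A over B, B over C, C over A.
ranking = {("A", "B"), ("B", "C"), ("C", "A")}
prefers = lambda x, y: (x, y) in ranking
print(violates_transitivity(prefers, ["A", "B", "C"]))  # True -- a chooser who does this
                                                        # violates expected utility theory
```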
