The Undoing Project: A Friendship that Changed the World

At some point, Danny and Amos became aware that they had a problem on their hands. Their theory explained all sorts of things that expected utility failed to explain. But it implied, as utility theory never had, that it was as easy to get people to take risks as it was to get them to avoid them. All you had to do was present them with a choice that involved a loss. In the more than two hundred years since Bernoulli started the discussion, intellectuals had regarded risk-seeking behavior as a curiosity. If risk seeking was woven into human nature, as Danny and Amos’s theory implied that it was, why hadn’t people noticed it before?

The answer, Amos and Danny now thought, was that intellectuals who studied human decision making had been looking in the wrong places. Mostly they had been economists, who directed their attention to the way people made decisions about money. “It is an ecological fact,” wrote Amos and Danny in a draft, “that most decisions in that context (except insurance) involve mainly favorable prospects.” The gambles that economists studied were, like most savings and investment decisions, choices between gains. In the domain of gains, people were indeed risk averse. They took the sure thing over the gamble. Danny and Amos thought that if the theorists had spent less time with money and more time with politics and war, or even marriage, they might have come to different conclusions about human nature. In politics and war, as in fraught human relationships, the choice faced by the decision maker was often between two unpleasant options. “A very different view of man as a decision maker might well have emerged if the outcomes of decisions in the private-personal, political or strategic domains had been as easily measurable as monetary gains and losses,” they wrote.

* * *

Danny and Amos spent the first half of 1975 getting their theory into shape so that a rough draft might be shown to other people. They started with the working title “Value Theory” but then changed it to “Risk-Value Theory.” For a pair of psychologists who were attacking a theory erected and defended mainly by economists, they wrote with astonishing aggression and confidence. The old theory, they wrote, didn’t really even consider how actual human beings grappled with risky decisions. All it did was “to explain risky choices solely in terms of attitudes to money or wealth.” Between the lines, the reader could detect their giddiness. “Amos and I are in the middle of our most productive period ever,” Danny wrote to Paul Slovic, in early 1975. “We’re developing what appears to us to be a rather complete and quite novel account of choice under uncertainty. The regret treatment has been superseded by a sort of reference level or adaptation level treatment.” Six months later, Danny wrote Slovic that they had a prototype of a new theory of decision making. “Amos and I barely managed to finish a paper on risky choice in time to present it to an illustrious group of economists who convene in Jerusalem this week,” he wrote. “It is still fairly rough.”

The meeting in question, billed as a conference on public economics, convened in June 1975 at a kibbutz just outside Jerusalem. And so it was on a farm that a theory that would become among the most influential in the history of economics made its public debut. Decision theory was Amos’s field, and so Amos did all the talking. The audience contained at least three current and future Nobel Prize winners in economics: Peter Diamond, Daniel McFadden, and Kenneth Arrow. “When you listened to Amos, you knew you were talking to a first-rate mind,” said Arrow. “You raise a question. He’s thought of the question already, and he has an answer.”

After he listened to Amos’s presentation, Arrow had one big question for Amos: What is a loss?

The theory obviously turned on the stark difference in people’s feelings when they faced potential losses rather than potential gains. A loss, according to the theory, was when a person wound up worse off than his “reference point.” But what was this reference point? The easy answer was: wherever you started from. Your status quo. A loss was just when you ended up worse than your status quo. But how did you determine any person’s status quo? “In the experiments it’s pretty clear what a loss is,” Arrow said later. “In the real world it’s not so clear.”

Wall Street trading desks at the end of each year offer a flavor of the problem. If a Wall Street trader expects to be paid a bonus of one million dollars and he’s given only half a million, he feels himself to be, and behaves as if he is, in the domain of losses. His reference point is an expectation of what he would receive. That expectation isn’t a stable number; it can be changed in all sorts of ways. A trader who expects to be given a million-dollar bonus, and who further expects everyone else on his trading desk to be given million-dollar bonuses, will not maintain the same reference point if he learns that everyone else just received two million dollars. If he is then paid a million dollars, he is back in the domain of losses. Danny would later use the same point to explain the behavior of apes in experiments researchers had conducted on bonobos. “If both my neighbor in the next cage and I get a cucumber for doing a great job, that’s great. But if he gets a banana and I get a cucumber, I will throw the cucumber at the experimenter’s face.” The moment one ape got a banana, it became the ape next door’s reference point.
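
To put numbers on the idea (the figures below are illustrative, not drawn from Danny and Amos's work), a few lines of Python show how the very same payout reads as a gain or a loss depending on the reference point it is measured against:

```python
# A minimal sketch of reference dependence (illustrative numbers only).
# The same $1,000,000 bonus is coded as a gain or a loss depending on
# the reference point it is measured against.

def relative_to_reference(payout: float, reference_point: float) -> str:
    """Classify a payout as a gain or a loss relative to a reference point."""
    diff = payout - reference_point
    if diff > 0:
        return f"gain of ${diff:,.0f}"
    if diff < 0:
        return f"loss of ${-diff:,.0f}"
    return "neither gain nor loss"

bonus = 1_000_000
# Trader who expected half a million: the bonus feels like a gain.
print(relative_to_reference(bonus, reference_point=500_000))    # gain of $500,000
# Trader who learned his peers got two million: the same bonus feels like a loss.
print(relative_to_reference(bonus, reference_point=2_000_000))  # loss of $1,000,000
```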

The reference point was a state of mind. Even in straight gambles you could shift a person’s reference point and make a loss seem like a gain, and vice versa. In so doing, you could manipulate the choices people made, simply by the way they were described. They gave the economists a demonstration of the point:

Problem A. In addition to whatever you own, you have been given $1,000. You are now required to choose between the following options:

Option 1. A 50 percent chance to win $1,000

Option 2. A gift of $500

Most everyone picked option 2, the sure thing.

Problem B. In addition to whatever you own, you have been given $2,000. You are now required to choose between the following options:

Option 3. A 50 percent chance to lose $1,000

Option 4. A sure loss of $500

Most everyone picked option 3, the gamble.

The two questions were effectively identical. In both cases, if you picked the gamble, you wound up with a 50-50 shot at being worth either $1,000 or $2,000. In both cases, if you picked the sure thing, you wound up being worth $1,500. But when you framed the sure thing as a loss, people chose the gamble. When you framed it as a gain, people picked the sure thing. The reference point—the point that enabled you to distinguish between a gain and a loss—wasn't some fixed number. It was a psychological state. “What constitutes a gain or a loss depends on the representation of the problem and on the context in which it arises,” the first draft of “Value Theory” rather loosely explained. “We propose that the present theory applies to the gains and losses as perceived by the subject.”
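
The equivalence is easy to check arithmetically. A brief sketch (ignoring whatever you owned to begin with, since it cancels out of the comparison) confirms that both problems offer exactly the same final positions:

```python
# Verify that Problems A and B are identical in final-wealth terms.
# Each gamble is written as {final amount: probability}.

# Problem A: start with a $1,000 gift.
a_gamble     = {1000 + 1000: 0.5, 1000 + 0: 0.5}  # 50% win $1,000, 50% nothing
a_sure_thing = 1000 + 500                          # gift plus a sure $500

# Problem B: start with a $2,000 gift.
b_gamble     = {2000 - 1000: 0.5, 2000 - 0: 0.5}  # 50% lose $1,000, 50% nothing
b_sure_thing = 2000 - 500                          # gift minus a sure $500

assert a_gamble == b_gamble          # both gambles: 50-50 between $1,000 and $2,000
assert a_sure_thing == b_sure_thing  # both sure things: $1,500
print("Identical final positions; only the framing differs.")
```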

Danny and Amos were trying to show that people faced with a risky choice failed to put it in context. They evaluated it in isolation. In exploring what they now called the isolation effect, Amos and Danny had stumbled upon another idea—and its real-world implications were difficult to ignore. This one they called “framing.” Simply by changing the description of a situation, and making a gain seem like a loss, you could cause people to completely flip their attitude toward risk, and turn them from risk avoiding to risk seeking. “We invented framing without realizing we were inventing framing,” said Danny. “You take two things that should be identical—the way they differ should be irrelevant—and by showing it isn’t irrelevant, you show that expected utility theory is wrong.” Framing, to Danny, felt like their work on judgment. Here, look, yet another strange trick the mind played on itself.
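
A value function of the kind Danny and Amos were proposing predicts exactly this flip: concave for gains, convex and steeper for losses. The sketch below borrows the curvature and loss-aversion parameters Tversky and Kahneman estimated much later, in 1992 (α = 0.88, λ = 2.25); they are assumptions here, not figures from the 1975 draft:

```python
# A sketch of a reference-dependent value function. Parameters are the
# 1992 Tversky-Kahneman estimates, assumed here purely for illustration.

ALPHA = 0.88    # diminishing sensitivity to both gains and losses
LAMBDA = 2.25   # losses loom larger than gains

def value(x: float) -> float:
    """Subjective value of a change of x relative to the reference point."""
    return x**ALPHA if x >= 0 else -LAMBDA * (-x)**ALPHA

# Problem A, framed as gains from the $1,000 reference point:
gamble_a = 0.5 * value(1000)   # 50% chance to win $1,000
sure_a   = value(500)          # sure gain of $500
print(gamble_a < sure_a)       # True: the sure thing wins (risk averse)

# Problem B, framed as losses from the $2,000 reference point:
gamble_b = 0.5 * value(-1000)  # 50% chance to lose $1,000
sure_b   = value(-500)         # sure loss of $500
print(gamble_b > sure_b)       # True: the gamble wins (risk seeking)
```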

Framing was just another phenomenon: There was never going to be a theory of framing. But Amos and Danny would eventually spend all kinds of time and energy dreaming up examples of the phenomenon, to illustrate how it might distort real-world decisions. The most famous was the Asian Disease Problem.
