And, really, who would switch? Like the other rules of rationality, the independence axiom seemed reasonable, and not obviously contradicted by the way human beings generally behaved.
Expected utility theory was just a theory. It didn’t pretend to be able to explain or predict everything people did when they faced some risky decision. Danny gleaned its importance not from reading Amos’s description of it in the undergraduate textbook but only from the way Amos spoke of it. “This was a sacred thing for Amos,” said Danny. Although the theory made no great claim to psychological truth, the textbook Amos had coauthored made it clear that it had been accepted as psychologically true. Pretty much everyone interested in such things, a group that included the entire economics profession, seemed to take it as a fair description of how ordinary people faced with risky alternatives actually went about making choices. That leap of faith had at least one obvious implication for the sort of advice economists gave to political leaders: It tilted everything in the direction of giving people the freedom to choose and leaving markets alone. After all, if people could be counted on to be basically rational, markets could, too.
Amos had clearly wondered about that. He had always had an almost jungle instinct for the vulnerability of other people’s ideas, and he of course knew that people made decisions the theory would not have predicted. Amos himself had explored how people could be—as the theory assumed they were not—reliably “intransitive.” As a graduate student at Michigan, he had induced both Harvard undergraduates and convicted murderers in Michigan prisons, over and over again, to choose gamble A over gamble B, then choose gamble B over gamble C—and then turn around and choose C instead of A. That violated a basic rule of expected utility theory. And yet Amos had never followed his doubts very far. He saw that people sometimes made mistakes; he did not see anything systematically irrational in the way they made decisions. He hadn’t yet figured out how to bring deep insights about human nature into the mathematical study of human decision making.
By the summer of 1973, Amos was searching for ways to undo the reigning theory of decision making, just as he and Danny had undone the idea that human judgment followed the precepts of statistical theory. On a trip to Europe with his friend Paul Slovic, he shared his latest thoughts about how to make room, in the world of decision theory, for a messier view of human nature. “Amos warns against pitting utility theory vs. an alternative model in a direct, head to head, empirical test,” Slovic relayed, in a letter to a colleague, in September 1973. “The problem is that utility theory is so general that it is hard to refute. Our strategy should be to take the offensive in building a case, not against utility theory, but for an alternative conception that brings man’s limitations in as a constraint.”
Amos had at his disposal a connoisseur of man’s limitations. He now described Danny as “the world’s greatest living psychologist.” Not that he ever said anything so flattering to Danny directly. (“Manly reticence was the rule,” said Danny.) He never fully explained to Danny why he thought to invite him into decision theory—a technical and antiseptic field Danny cared little about and knew less of. But it is hard to believe that Amos was simply looking around for something else they might do together. It’s easier to believe that Amos suspected what might happen after he gave Danny his textbook on the subject. That moment has the feel of an old episode of The Three Stooges, when Larry plays “Pop Goes the Weasel” and triggers Curly into a frenzy of destruction.
Danny read Amos’s textbook the way he might have read a recipe written in Martian. He decoded it. He had long ago realized that he wasn’t a natural applied mathematician, but he could follow the logic of the equations. He knew that he was meant to respect, even revere, them. Amos was a member of high standing in the society of mathematical psychologists. That society in turn looked down upon much of the rest of psychology. “It is a given that people who use mathematics have some glamour,” said Danny. “It was prestigious because it borrowed the aura of mathematics and because nobody else could understand what was going on there.” Danny couldn’t escape the growing prestige of math in the social sciences: His remove counted against him. But he didn’t really admire decision theory, or care about it. He cared why people behaved as they did. And to Danny’s way of thinking, the major theory of decision making did not begin to describe how people made decisions.
It must have come as something of a relief to him, as he neared the end of Amos’s chapter on expected utility theory, to arrive at the following sentence: “Some people, however, remained unconvinced by the axioms.”
One such person, the textbook went on to say, was Maurice Allais. Allais was a French economist who disliked the self-certainty of American economists. He especially disapproved of the growing tendency in economics, after von Neumann and Morgenstern built their theory, to treat a math model of human behavior as an accurate description of how people made choices. At a convention of economists in 1953, Allais offered what he imagined to be a killer argument against expected utility theory. He asked his audience to imagine their choices in the following two situations (the dollar amounts used by Allais are here multiplied by ten to account for inflation and capture the feel of his original problem):
Situation 1. You must choose between having:
1) $5 million for sure
or this gamble
2) An 89 percent chance of winning $5 million
   A 10 percent chance of winning $25 million
   A 1 percent chance to win zero
Most people who looked at that, apparently including many of the American economists in Allais’s audience, said, “Obviously, I’ll take door number 1, the $5 million for sure.” They preferred the certainty of being rich to the slim possibility of being even richer. To which Allais replied, “Okay, now consider this second situation.”
Situation 2. You must choose between having:
3) An 11 percent chance of winning $5 million, with an 89 percent chance to win zero
or
4) A 10 percent chance of winning $25 million, with a 90 percent chance to win zero
Most everyone, including American economists, looked at this choice and said, “I’ll take number 4.” They preferred the slightly lower chance of winning a lot more money. There was nothing wrong with this; on the face of it, both choices felt perfectly sensible. The trouble, as Amos’s textbook explained, was that “this seemingly innocent pair of preferences is incompatible with utility theory.” What was now called the Allais paradox had become the most famous contradiction of expected utility theory. Allais’s problem caused even the most cold-blooded American economist to violate the rules of rationality.*
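To see why this pair of preferences breaks the theory, it helps to run the arithmetic the textbook alludes to. The following is the standard demonstration, with one convenience not in Allais’s original presentation: set the utility of winning nothing to zero, u($0) = 0, and write u for a person’s utility function. Preferring the sure thing in Situation 1 means

\[
u(\$5\text{M}) > 0.89\,u(\$5\text{M}) + 0.10\,u(\$25\text{M}) + 0.01\,u(\$0),
\]

which, after subtracting the common 0.89 u($5M) term from both sides, reduces to

\[
0.11\,u(\$5\text{M}) > 0.10\,u(\$25\text{M}).
\]

Preferring gamble 4 in Situation 2 means

\[
0.10\,u(\$25\text{M}) + 0.90\,u(\$0) > 0.11\,u(\$5\text{M}) + 0.89\,u(\$0),
\]

which reduces to exactly the opposite:

\[
0.10\,u(\$25\text{M}) > 0.11\,u(\$5\text{M}).
\]

No utility function, however eccentric, can satisfy both inequalities at once. That is the paradox.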