Smarter Faster Better: The Secrets of Being Productive in Life and Business

For two years, the GJP conducted training sessions, watched people make predictions, and collected data. They tracked who got better, and how performance changed as people were exposed to different types of tutorials. Eventually, the GJP published their findings: Giving participants even brief training sessions in research and statistical techniques—teaching them various ways of thinking about the future—boosted the accuracy of their predictions. And most surprising, a particular kind of lesson—training in how to think probabilistically—significantly increased people’s abilities to forecast the future.

The lessons on probabilistic thinking offered by the GJP had instructed participants to think of the future not as what’s going to happen, but rather as a series of possibilities that might occur. It taught them to envision tomorrow as an array of potential outcomes, all of which had different odds of coming true. “Most people are sloppy when they think about the future,” said Lyle Ungar, a professor of computer science at the University of Pennsylvania who helped oversee the GJP. “They say things like, ‘It’s likely we’ll go to Hawaii for vacation this year.’ Well, does that mean that it’s 51 percent certain? Or 90 percent? Because that’s a big difference if you’re buying nonrefundable tickets.” The goal of the GJP’s probabilistic training was to show people how to turn their intuitions into statistical estimates.

One exercise, for instance, asked participants to analyze whether French president Nicolas Sarkozy would win reelection in an upcoming vote.

The training indicated that, at a minimum, there were three variables someone should consider in predicting Sarkozy’s reelection chances. The first was incumbency. Data from previous French elections indicated that an incumbent such as President Sarkozy, on average, can expect to receive 67 percent of the vote. Based on that, someone might forecast that Sarkozy is 67 percent likely to remain in office.

But there were other variables to take into account, as well. Sarkozy had fallen into disfavor among French voters, and pollsters had estimated that, based on low approval ratings, Sarkozy’s reelection chances were actually 25 percent. Under that logic, there was a three-quarters chance he would be voted out. It was also worth considering that the French economy was limping along, and economists guessed that, based on economic performance, Sarkozy would garner only 45 percent of the vote.

So there were three potential futures to consider: Sarkozy could earn 67 percent, 25 percent, or 45 percent of the votes cast. In one scenario, he would win easily, in another he would lose by a wide margin, and the third scenario was a relatively close call. How do you combine those contradictory outcomes into one prediction? “You simply average your estimates based on incumbency, approval ratings, and economic growth rates,” the training explained. “If you have no basis for treating one variable as more important than another, use equal weighting. This approach leads you to predict [(67% + 25% + 45%)/3] = approximately a 46% chance of reelection.”
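The equal-weight averaging the training describes can be sketched in a few lines of Python; the labels and structure here are illustrative, not taken from the GJP's actual materials:

```python
# Equal-weight averaging of three base-rate estimates (illustrative sketch).
estimates = {
    "incumbency": 0.67,  # historical vote share for incumbents
    "approval": 0.25,    # pollsters' estimate from approval ratings
    "economy": 0.45,     # economists' estimate from economic performance
}

# With no basis for weighting one variable above another, take the simple mean.
forecast = sum(estimates.values()) / len(estimates)
print(f"Predicted chance of reelection: {forecast:.0%}")  # prints "46%"
```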

Nine months later, Sarkozy received 48.4 percent of the vote and was replaced by François Hollande.

This is the most basic kind of probabilistic thinking, a simplistic example that teaches an underlying idea: Contradictory futures can be combined into a single prediction. As this kind of logic gets more sophisticated, experts usually begin speaking about various outcomes as probability curves—graphs that show the distribution of potential futures. For instance, if someone was asked to guess how many seats Sarkozy’s party was going to win in the French parliament, an expert might describe the possible outcomes as a curve that shows how the likelihood of winning parliamentary seats is linked to Sarkozy’s odds of remaining president:

[Figure: probability curves showing potential parliamentary seat totals under each presidential outcome]

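A curve of this sort can be approximated by taking a seat forecast for each presidential outcome and weighting it by the odds of that outcome. In the sketch below, the seat numbers are invented purely for illustration:

```python
# Combine two conditional scenarios into one forecast (all numbers invented).
p_win = 0.46          # estimated chance Sarkozy remains president
seats_if_win = 320    # hypothetical expected UMP seats if he wins
seats_if_lose = 200   # hypothetical expected UMP seats if he loses

# The overall forecast is the probability-weighted average of the two futures.
expected_seats = p_win * seats_if_win + (1 - p_win) * seats_if_lose
print(round(expected_seats))  # prints 255
```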
In fact, when Sarkozy lost the election, his party, the Union pour un Mouvement Populaire, or UMP, also suffered at the polls, claiming only 194 seats, a significant decline.

The GJP’s training modules instructed people in various methods for combining odds and comparing futures. Throughout, a central idea was repeated again and again. The future isn’t one thing. Rather, it is a multitude of possibilities that often contradict one another until one of them comes true. And those futures can be combined in order for someone to predict which one is more likely to occur.

This is probabilistic thinking. It is the ability to hold multiple, conflicting outcomes in your mind and estimate their relative likelihoods. “We’re not accustomed to thinking about multiple futures,” said Barbara Mellers, another GJP leader. “We only live in one reality, and so when we force ourselves to think about the future as numerous possibilities, it can be unsettling for some people because it forces us to think about things we hope won’t come true.”

Simply exposing participants to probabilistic training was associated with as much as a 50 percent increase in the accuracy of their predictions, the GJP researchers wrote. “Teams with training that engaged in probabilistic thinking performed best,” an outside observer noted. “Participants were taught to turn hunches into probabilities. Then they had online discussions with members of their team [about] adjusting the probabilities, as often as every day….Having grand theories about, say, the nature of modern China was not useful. Being able to look at a narrow question from many vantage points and quickly readjust the probabilities was tremendously useful.”
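The daily readjustment the observer describes resembles simple Bayesian updating, sketched below in odds form. The likelihood ratios here are invented, and this is not the GJP's actual procedure:

```python
# Revise a probability estimate as new evidence arrives (Bayes' rule, odds form).
def update(prior: float, likelihood_ratio: float) -> float:
    """Multiply the prior odds by a likelihood ratio; return posterior probability."""
    odds = prior / (1 - prior)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.46            # starting estimate
p = update(p, 1.5)  # a piece of evidence favoring the event (invented)
p = update(p, 0.8)  # a piece of evidence against it (invented)
print(f"{p:.0%}")   # prints "51%"
```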

Learning to think probabilistically requires us to question our assumptions and live with uncertainty. To become better at predicting the future—at making good decisions—we need to know the difference between what we hope will happen and what is more and less likely to occur.

“It’s great to be 100 percent certain you love your girlfriend right now, but if you’re thinking of proposing to her, wouldn’t you rather know the odds of staying married over the next three decades?” said Don Moore, a professor at UC-Berkeley’s Haas School of Business who helped run the GJP. “I can’t tell you precisely whether you’ll be attracted to each other in thirty years. But I can generate some probabilities about the odds of staying attracted to each other, and probabilities about how your goals will coincide, and statistics on how having children might change the relationship, and then you can adjust those likelihoods based on your experiences and what you think is more or less likely to occur, and that’s going to help you predict the future a little bit better.

“In the long run, that’s pretty valuable, because even though you know with 100 percent certainty that you love her right now, thinking probabilistically about the future can force you to think through things that might be fuzzy today, but are really important over time. It forces you to be honest with yourself, even if part of that honesty is admitting there are things you aren’t sure about.”



When Annie started playing poker seriously, it was her brother who sat her down and explained what separated the winners from everyone else. Losers, Howard said, are always looking for certainty at the table. Winners are comfortable admitting to themselves what they don’t know. In fact, knowing what you don’t know is a huge advantage—something that can be used against other players. When Annie would call Howard and complain that she had lost, had suffered bad luck, that the cards had gone against her, he would tell her to stop whining.

“Have you considered that you might be the idiot at the table who’s looking for certainty?” he asked.

In Texas Hold’Em, the kind of poker Annie was playing, each player received two private cards, and then five communal cards were dealt, faceup, onto the middle of the table to be shared by everyone. The winner was whoever had the best combination of private and communal cards.
