Fludd explained that coming up with such theories was common on her team. Employees would gather during lunch breaks or after work to kick around ideas. Typically, these ideas didn’t make much sense, at least at first. Some were outright nonsensical, such as the suggestion that an irresponsible young person who is already behind on her debts would, for some reason, suddenly care deeply about improving her credit score. But that was okay. The point wasn’t to suggest a good idea. It was to generate an idea, any idea at all, and then test it.
Fludd looked at her calendar. “So the next day, we started calling people between the ages of twenty-one and thirty-seven.” At the end of the shift, employees reported no noticeable change in how much people had agreed to pay. So the following morning, Fludd changed one variable: She told her employees to call people between the ages of twenty-six and thirty-one. The collection rate improved slightly. The next day, they called a subset of that group, cardholders between twenty-six and thirty-one with balances between $3,000 and $6,000. Collection rates declined. The next day: cardholders with balances between $5,000 and $8,000. That led to the highest collection rates of the week. In the evenings, before everyone left, managers gathered to review the day’s results and speculate on why certain efforts had succeeded or failed. They printed out logs and circled which calls had gone particularly well. That was Fludd’s “calendar”: the printouts from each day, with annotations, employees’ comments, and notes suggesting why certain tactics had worked so well.
With further testing, Fludd determined that her original theory regarding young people was a dud. That, in itself, wasn’t surprising. Most of the theories were duds initially. Employees had all kinds of hunches that didn’t bear up under testing. But as each experiment unfolded, workers became increasingly sensitive to patterns they hadn’t noticed before. They listened more closely. They tracked how debtors would respond to various questions. And eventually, a valuable insight would emerge—like, say, it’s better to call people’s homes between 9:15 and 11:50 in the morning because the wife will pick up and women are more likely to pay a family’s debts. Sometimes, the debt collectors would develop instincts they couldn’t exactly put into words but learned to heed nonetheless.
Then someone would propose a new theory or experiment and the process would start all over again. “When you track every call and keep notes and talk about what just happened with the person in the next cubicle, you start paying attention differently,” Fludd told me. “You learn to pick up on things.”
To the consultants, this was an example of someone using the scientific method to isolate and test variables. “Charlotte’s peers would generally change multiple things at once,” wrote Niko Cantor, one of the consultants, in a review of his findings. “Charlotte would only change one thing at a time. Therefore she understood the causality better.”
But something else was going on, as well. It wasn’t just that Fludd was isolating variables. Rather, by coming up with hypotheses and testing them, Fludd’s team was heightening their sensitivity to the information flowing past. In a sense, they were adding an element of disfluency to their work, performing operations on the “data” generated during each conversation until lessons were easier to absorb. The spreadsheets and memos that they received each morning, the data that appeared on their screens, the noises they heard in the background of phone calls—that became material for coming up with new theories and running various experiments. Each phone call contained tons of information that most collectors never registered. But Fludd’s employees noticed it, because they were looking for clues to prove or disprove theories. They were interacting with the data embodied in each conversation, turning it into something they could use.
This is how learning occurs. Information gets absorbed almost without our noticing because we’re so engrossed in it. Fludd took the torrent of data arriving each day and gave her team a method for placing it into folders that made it easier to understand. She helped her employees do something with all those memos they received and the conversations they were having—and, as a result, it was easier for them to learn.
III.
Nancy Johnson became a teacher in Cincinnati because she didn’t know what else to do with her life. It had taken her seven years to make it through college, and after graduating, she’d become a flight attendant, married a pilot, and then decided to settle down. In 1996, she started substituting in Cincinnati’s public schools, hoping it would lead to a full-time job. She went from classroom to classroom, teaching everything from English to biology, until she finally got a permanent offer as a fourth-grade teacher. On her first day, the principal saw her and said, “So you’re Ms. Johnson.” He later admitted he had gotten a number of applications with the same last name and wasn’t fully certain which one he had hired.
A few years later, in response to the federal government’s No Child Left Behind law, Cincinnati began tracking students’ performance in reading and math via standardized exams. Johnson was soon drowning in reports. Each week, she received memos on students’ attendance and their progress in vocabulary, math proficiency, reading, writing, literature comprehension, and something called “cognitive manipulation,” as well as reviews of her classroom’s proficiency, her teaching aptitude, and the school’s overall scores. There was so much information that the city had hired a team of data visualization experts to design the weekly memos and Internet dashboards the district delivered. The graphics team was talented. The charts Johnson received were easy to read, and the Internet sites contained clear summaries and color-coded trend lines.
But in those first few years, Johnson hardly looked at any of it. She was supposed to use all that information in designing her curricula, but it made her head hurt. “There were lots of memos and statistics, and I knew I was supposed to be incorporating them into my classroom, but it all just kind of washed over me,” she said. “It felt like there was this gap between all those numbers and what I needed to know to become a better teacher.”
Her fourth-grade kids were mostly poor, and many were from single-parent families. She was a good teacher, but her class still fared badly on assessment exams. In 2007, the year before Cincinnati’s Elementary Initiative began, her students scored an average of 38 percent proficiency on the state’s reading test.
Then, in 2008, the Elementary Initiative was launched. As part of that reform, Johnson’s principal mandated that all teachers spend at least two afternoons a month in the school’s new data room. Around a conference table, teachers were forced to participate in exercises that made data collection and statistical tabulation even more time-consuming. At the start of the semester, Johnson and her colleagues were told that as part of the EI, they had to create an index card for every student in their class. Then, every other Wednesday, Johnson would go into the data room and transcribe the past two weeks’ test scores onto each student’s card, and then group all the cards into color-coded piles—red, yellow, or green—based on whether students were underperforming, meeting expectations, or exceeding their peers. As the semester progressed, she also began grouping cards based on who was improving or falling behind over time.