Dayton, near where she lived In an email sent in reply to a fact-checking inquiry, Crandall wrote: “In 1986, I began working with Dr. Gary Klein at his company Klein Associates Inc. The work you mention with firefighters and military commanders had already begun when I joined the company. It continued for many years, expanding well beyond firefighting and military command and control, and was carried out by Gary and the Klein Associates research team (who were an amazing bunch of very smart talented quirky people). I had both research and management positions at Klein Associates, and I was involved in some of those studies, not in others. As owner and Chief Scientist, Gary led our efforts to describe how (some) people are able to ‘keep their heads in chaotic environments’ and particularly how (some) people are able to make effective decisions under conditions of stress, risk, and time pressure….It is correct that in the interviews we conduct, when asked about decision making and how a person knew to do X in a particular situation, they often respond with, ‘experience’ or ‘gut feel’ or ‘intuition’ or ‘I just knew.’…These accounts of an intuitive basis for decision making became a cornerstone of our research efforts….The studies we did in the NICU confirmed what we were finding in other work domains—highly experienced, highly skilled personnel become very good at paying attention to what’s most important (the critical cues) in a given situation, and not getting distracted by less important information….Over time and repeated experience with similar situations, they learn what matters and what doesn’t. They learn to size up a situation very quickly and accurately. They see connections across various cues (clusters; packages; linkages) that form a meaningful pattern. 
Some people refer to this as a gestalt, and others as ‘mental models’ or schemas.” For more details, please see Beth Crandall and Karen Getchell-Reiter, “Critical Decision Method: A Technique for Eliciting Concrete Assessment Indicators from the Intuition of NICU Nurses,” Advances in Nursing Science 16, no. 1 (1993): 42–51; B. Crandall and R. Calderwood, “Clinical Assessment Skills of Experienced Neonatal Intensive Care Nurses,” Contract 1 (1989): R43; B. Crandall and V. Gamblian, “Guide to Early Sepsis Assessment in the NICU,” Instruction Manual Prepared for the Ohio Department of Development Under the Ohio SBIR Bridge Grant Program (Fairborn, Ohio: Klein Associates, 1991).
“a whole picture” In an email sent in reply to a fact-checking inquiry, Crandall wrote: “The other nurse was a preceptee—in training to provide nursing care in a NICU. Darlene was her preceptor—helping her learn and providing oversight and guidance as she learns how to care for premature babies. So, the baby WAS Darlene’s responsibility in the sense that she was supervising/precepting the nurse caring for the baby. You are correct, she noticed that the baby didn’t look ‘good.’ Here is the incident account that we wrote up based on our interview notes: ‘When this incident took place, I was teaching, serving as a preceptor for a new nurse. We had been working together for quite awhile and she was nearing the end of her orientation, so she was really doing primary care and I was in more of a supervisory position. Anyway, we were nearing the end of a shift and I walked by this particular isolette and the baby really caught my eye. The baby’s color was off and its skin was mottled. Its belly looked slightly rounded. I looked at the chart and it indicated the baby’s temp was unstable. I also noticed that the baby had had a heel stick for lab work several minutes ago and the stick was still bleeding. When I asked my orientee how she thought the baby was doing, she said that he seemed kind of sleepy to her. I went and got the Doctor immediately and told him we were “in big trouble” with this baby. I said the baby’s temp was unstable, that its color was funny, it seemed lethargic and it was bleeding from a heel stick. He reacted right away, put the baby on antibiotics and ordered cultures done. I was upset with the orientee that she had missed these cues, or that she had noticed them but not put them together. When we talked about it later I asked about the baby’s temp dropping over four readings. She had noticed it, but had responded by increasing the heat in the isolette. 
She had responded to the ‘surface’ problem, instead of trying to figure out what might be causing the problem.’ ”
“creating mental models” Thomas D. LaToza, Gina Venolia, and Robert DeLine, “Maintaining Mental Models: A Study of Developer Work Habits,” Proceedings of the 28th International Conference on Software Engineering (New York: ACM, 2006); Philip Nicholas Johnson-Laird, “Mental Models and Cognitive Change,” Journal of Cognitive Psychology 25, no. 2 (2013): 131–38; Philip Nicholas Johnson-Laird, How We Reason (Oxford: Oxford University Press, 2006); Philip Nicholas Johnson-Laird, Mental Models, Cognitive Science Series, no. 6 (Cambridge, Mass.: Harvard University Press, 1983); Earl K. Miller and Jonathan D. Cohen, “An Integrative Theory of Prefrontal Cortex Function,” Annual Review of Neuroscience 24, no. 1 (2001): 167–202; J. D. Sterman and D. V. Ford, “Expert Knowledge Elicitation to Improve Mental and Formal Models,” Systems Approach to Learning and Education into the 21st Century, vol. 1, 15th International System Dynamics Conference, August 19–22, 1997, Istanbul, Turkey; Pierre Barrouillet, Nelly Grosset, and Jean-François Lecas, “Conditional Reasoning by Mental Models: Chronometric and Developmental Evidence,” Cognition 75, no. 3 (2000): 237–66; R. M. J. Byrne, The Rational Imagination: How People Create Alternatives to Reality (Cambridge, Mass.: MIT Press, 2005); P. C. Cheng and K. J. Holyoak, “Pragmatic Reasoning Schemas,” in Reasoning: Studies of Human Inference and Its Foundations, eds. J. E. Adler and L. J. Rips (Cambridge: Cambridge University Press, 2008), 827–42; David P. O’Brien, “Human Reasoning Includes a Mental Logic,” Behavioral and Brain Sciences 32, no. 1 (2009): 96–97; Niki Verschueren, Walter Schaeken, and Gery d’Ydewalle, “Everyday Conditional Reasoning: A Working Memory–Dependent Tradeoff Between Counterexample and Likelihood Use,” Memory and Cognition 33, no. 1 (2005): 107–19.
the child’s bassinet In response to a fact-checking email, Crandall wrote: “The key to this story (for me anyway) is that experts see meaningful patterns that novices miss altogether. As an experienced NICU nurse, Darlene has seen hundreds of babies. She is not reflecting on all of them…they have merged into a sense of what is typical for a premie baby at X weeks. She has also seen many babies with sepsis (it happens a lot in NICUs, for a variety of reasons unrelated to quality of care). The combination of cues (bloody bandaid, falling temp, distended belly, sleepiness/lethargy) brought with it the recognition ‘this baby is in trouble’ and ‘probably septic.’ At least, that’s what she told us in the interview….I agree that people often create narratives to help explain what’s going on around them, and help them make sense—particularly when they are having trouble figuring something out. In this incident, Darlene was not having trouble figuring out what was going on—she recognized immediately what was going on….I think of Darlene’s story as being about expertise, and the difference between how experts and novices view and understand a given situation….Storytelling takes time, and stories are linear (this happened, then this, and then that). When experienced people describe events such as this one, what happens is very fast: They ‘read’ the situation, they understand what’s going on, and they know what to do.”
“It’s even harder now” In response to a fact-checking email, Casner expanded his comments: “I wouldn’t say that pilots are ‘passive’ but that they find it exceedingly difficult to maintain their attention on an automated system that works so reliably well. Humans are not good at sitting and staring….Humans have limited attentional resources (e.g., how our kids do stuff behind our backs and get away with it). So we have to keep our attention pointed in the direction that we think is most important at all times. If a cockpit computer in front of me has worked impeccably for 100 hours in a row, it’s hard to envision that as being the most important thing to think about. For example, my kid could be getting away with some insane stuff at that very moment. In our study of mind wandering among pilots [Thoughts in Flight: Automation Use and Pilots’ Task-Related and Task-Unrelated Thought], we found that the pilot flying was thinking ‘task-unrelated thoughts’ about 30% of the time. The other pilot, the monitoring pilot, was mind wandering about 50% of the time. Why wouldn’t they? If you don’t give me something important or pressing to think about, I’ll come up with something myself.”