The authors offered a possible explanation for this trend. Being less experienced, first-year residents were less confident in their choices and therefore employed more analytic reasoning even in the rushed Phase 2. That’s all fine and well, except that it wouldn’t explain why they made more mistakes with the non-similar cases. I’d like to take a shot at this and suggest that first-year residents, being closer to medical school than the second-years, unintentionally employed a very different bias: what I would call “standardized test-taking bias”‡, in which students get uncomfortable when the same answer comes up twice in an exam and therefore search for an alternative solution. The thinking goes something like, “Wait, I already answered acute viral hepatitis. It can’t be acute viral hepatitis twice…” While useless in a clinical setting, where diseases appear in whatever order and volume they happen to occur in, this may have actually helped the first-year residents by alerting them to the similar cases and making them try again.
Phase 3 reflection did improve the scores of both first- and second-year residents, although, amusingly, the first-years still bested the second-years even after the benefit of that reflection. Residencies can be exhausting, or so I’ve heard. Perhaps the second-years were just a bit mentally burned out. Given the opposite trends observed in the accuracy of the two sets of residents in Phase 2, it would be helpful to see what would have happened if Phase 3 analytical reasoning had been applied to all the Phase 2 cases, and not just the similar ones selected to evoke the availability bias. There may be a broader take-home lesson trying to creep through the data: something along the lines of “thinking is useful” or “sleep is nice”.
While the authors acknowledge that their results should not be generalized to higher years of residency or to more experienced doctors without further experimentation, it doesn’t bode well for one’s chance of getting a correct diagnosis during the next trip to the emergency room (an environment that is hardly conducive to careful reflection). And greater experience can also make physicians more prone to the “anchoring effect” – the tendency to stick with whichever conclusion they reached first. In fact, cognitive biases abound. There are far more ways to be wrong than you would ever dream of. Your best bet is to just try not to get sick.
*A short and not especially exhaustive list of all the possible ailments that might be causing the patient’s symptoms.
† While the participants didn’t interact with real patients, it is worth noting that the lists of symptoms and test results presented were based on actual clinical cases, rather than Platonic ideals of how these ailments would manifest.
‡ The actual cognitive bias that most closely resembles this would perhaps be the Gambler’s Fallacy, in which people erroneously believe that the next round of a coin toss (or some other chance-determined game) is influenced by the outcomes of previous rounds. Getting heads 5 times in a row does not increase the odds of getting tails on the next try. So long as the coin still has two sides, the odds remain 1:1. And unless you are given a list of fill-in-the-blank answers and specifically instructed to use each only once, there is no reason to infer that an answer couldn’t appear twice on a test.
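The independence claim in this footnote is easy to check empirically. Here is a minimal simulation sketch in Python (variable names are illustrative, not from any study): it flips a fair coin a million times and records what happens on the flip immediately after every run of 5 heads. The conditional frequency of heads after such a streak hovers around 0.5, because the coin has no memory of previous rounds.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Simulate a long sequence of fair coin flips; True = heads.
flips = [random.random() < 0.5 for _ in range(1_000_000)]

# Whenever the previous 5 flips were all heads, record the next outcome.
next_after_streak = []
streak = 0
for outcome in flips:
    if streak >= 5:
        next_after_streak.append(outcome)
    streak = streak + 1 if outcome else 0

# The frequency of heads conditioned on a 5-heads streak stays near 0.5.
freq = sum(next_after_streak) / len(next_after_streak)
print(f"{len(next_after_streak)} streak follow-ups, heads frequency ≈ {freq:.3f}")
```

A run of 5 heads occurs at roughly 1 in 32 positions, so the simulation collects tens of thousands of follow-up flips, more than enough to see that the streak confers no predictive power.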