Friday, September 24, 2010

Practice Makes Imperfect

I once dated a man who was in the process of becoming a doctor. His apartment was cluttered with study guides for his chosen profession, which I often perused for amusement. One of the most game-worthy of the books was aimed at preparing for diagnostic exams and featured hypothetical cases and their symptoms. For laughs, the boyfriend would read the descriptions of the imaginary patients to me and I would try to guess what was wrong with them, largely based on what I’d “learned” from episodes of ER. He didn’t need to read far (“24-year-old female with abdominal pain…”) before I would shout out my uneducated guesses without waiting to hear the rest of the case: “Ectopic pregnancy!” I was right with alarming frequency. He said that I would make a good doctor, that I had a knack for finding the most likely option in the “differential diagnosis.”* Had I actually gone to medical school, his prediction would likely have been proven wrong. My conclusion-jumping approach would have made me a less-than-stellar physician, but apparently there are plenty more like me out there.

The Journal of the American Medical Association recently ran a theme issue on medical education, and one of the articles addressed the problem of diagnostic errors that result not from a lack of knowledge but, ironically, from a greater amount of previous experience. It focused on the availability bias – a cognitive error that leads someone to judge the likelihood of a thing based on how readily it comes to mind (rather than its actual frequency of occurrence). For instance, restless leg syndrome isn’t especially common, but for a while there it seemed that half the nation thought they were afflicted with it. It had been heavily advertised due to a recently introduced medication that allegedly treated the problem.
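
To make the bias concrete, here’s a minimal toy sketch in Python (my own illustration; nothing here comes from the study, and the disease names, base rates, and helper functions are all hypothetical). An estimator that judges a disease’s frequency by how easily examples come to mind – modeled crudely as sampling only its most recent memories – will wildly overrate whatever it saw last, even though the true base rates never changed.

```python
import random

# Toy sketch of the availability bias (hypothetical; not from the JAMA study).
# True base rates: garden-variety flu vastly outnumbers restless leg syndrome.
TRUE_RATES = {"flu": 0.95, "restless_leg": 0.05}

def observe_cases(n, rates):
    """Draw n cases according to the true base rates."""
    diseases = list(rates)
    weights = [rates[d] for d in diseases]
    return random.choices(diseases, weights=weights, k=n)

def availability_estimate(memory, disease, recall_window=20):
    """Estimate a disease's frequency from only the most easily recalled
    (here: most recent) memories -- the availability heuristic."""
    recent = memory[-recall_window:]
    return recent.count(disease) / len(recent)

random.seed(42)
memory = observe_cases(1000, TRUE_RATES)

# A burst of drug ads pushes restless leg syndrome to the front of memory.
memory += ["restless_leg"] * 15

print(f"true rate: {TRUE_RATES['restless_leg']:.0%}")
print(f"availability estimate: {availability_estimate(memory, 'restless_leg'):.0%}")
# The estimate balloons: vivid recent examples crowd out the actual base rates.
```

Widen the recall window and the estimate drifts back toward the true rate – the distortion lives in what is easiest to remember, not in the counting itself.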

The study presented involved first- and second-year medical residents in the Netherlands. In a laboratory setting (which is to say, they were reading cases rather than examining humans†), participants completed three phases of simulated malady-diagnosing. In Phase 1, each case with which they were presented also came with a possible diagnosis. The participants were asked only to rate the likelihood (as a percentage) of the diagnosis being correct. They were given no feedback on how they fared with each case. In Phase 2, the residents were presented with new cases, half of which bore similarities to some of the cases they had seen in Phase 1 but were in fact different ailments. For example, chest pain is a symptom common to both heart attacks and viral inflammation of cardiac tissue. This time no diagnosis was offered, and the residents were asked to produce their own diagnosis (and to hurry up about it). The authors predicted that the similar cases would bring to mind the diagnoses encountered in Phase 1 and thus, thanks to the availability bias, be more frequently misdiagnosed than the non-similar ones. Again, no feedback was provided.

Phase 3 examined whether the use of analytical reasoning would undo the potential inaccuracies brought on by the availability bias. Participants were asked to revisit their hasty Phase 2 decisions, on the similar cases only, and were given a step-by-step procedure that would encourage them to actually think about what they were doing.

So what happened? Well, as predicted, the second-year residents botched more of the similar cases in Phase 2 than the non-similar ones. The second-years were expected to be especially susceptible to the availability bias due to their greater clinical experience, which reportedly worsens this particular cognitive problem. A bit more surprising was the observation that first-year residents actually fared better with the similar cases than with the non-similar ones.

The authors offered a possible explanation for this trend: being less experienced, first-year residents were less confident in their choices and therefore employed more analytic reasoning even in the rushed Phase 2. That’s all well and good, except that it wouldn’t explain why they made more mistakes with the non-similar cases. I’d like to take a shot at this and suggest that first-year residents, being closer to medical school than the second-years, unintentionally employed a very different bias: what I would call “standardized test-taking bias,”‡ in which students get uncomfortable when the same answer comes up twice in an exam and therefore search for an alternative solution. The thinking goes something like, “Wait, I already answered acute viral hepatitis. It can’t be acute viral hepatitis twice…” While useless in a clinical setting, where diseases appear in whatever order and volume they happen to occur, this may have actually helped the first-year residents by alerting them to the similar cases and making them try again.

Phase 3 reflection did improve the scores of both first- and second-year residents, although, amusingly, the first-years still bested the second-years even after the benefit of that reflection. Residencies can be exhausting, or so I’ve heard. Perhaps the second-years were just a bit mentally burned out. Given the opposite trends observed in the accuracy of the two sets of residents in Phase 2, it would be helpful to see what would have happened if the Phase 3 analytical reasoning had been applied to all of the Phase 2 cases, and not just the similar ones selected to evoke the availability bias. There may be a broader take-home lesson trying to creep through the data – something along the lines of “thinking is useful” or “sleep is nice”.

While the authors acknowledge that their results should not be generalized to higher years of residency or to more experienced doctors without further experimentation, the findings don’t bode well for one’s chances of getting a correct diagnosis on the next trip to the emergency room (an environment that is hardly conducive to careful reflection). And greater experience can also make physicians more prone to the “anchoring effect” – the tendency to stick with whatever conclusion they reached first. In fact, cognitive biases abound; there are far more ways to be wrong than you would ever dream of. Your best bet is to just try not to get sick.


*A short and not necessarily exhaustive list of the possible ailments that might be causing the patient’s symptoms.

† While the participants didn’t interact with real patients, it is worth noting that the lists of symptoms and test results presented were based on actual clinical cases, rather than Platonic ideals of how these ailments would manifest.

‡ The actual cognitive bias that most closely resembles this would perhaps be the Gambler’s Fallacy, in which people erroneously believe that the next round of a coin toss (or some other chance-determined game) is influenced by the outcomes of previous rounds. Getting heads 5 times in a row does not increase the odds of getting tails on the next try. So long as the coin still has two sides, the odds remain 1:1 – a 50% chance either way. And unless you are given a list of fill-in-the-blank answers and specifically instructed to use each only once, there is no reason to infer that an answer couldn’t appear twice on a test.
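
For the skeptical, a quick simulation (again mine, purely illustrative) bears this out: flip a fair virtual coin a million times, keep only the flips that immediately follow a run of five heads, and the next flip still comes up heads about half the time.

```python
import random

# Quick check of the Gambler's Fallacy (purely illustrative): after five
# heads in a row, a fair coin still lands heads about 50% of the time.
random.seed(0)
flips = [random.choice("HT") for _ in range(1_000_000)]

# Collect the outcome that immediately follows every five-heads streak.
after_streak = [
    flips[i + 5]
    for i in range(len(flips) - 5)
    if flips[i:i + 5] == ["H"] * 5
]

heads_frac = after_streak.count("H") / len(after_streak)
print(f"five-head streaks: {len(after_streak)}, "
      f"heads on the next flip: {heads_frac:.3f}")  # hovers around 0.500
```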

3 comments:

  1. Fascinating post, as usual. It reminded me of another case of cognitive bias in medicine, first brought to national attention by Dr. C. Henry Kempe. Dr. Kempe grew increasingly frustrated and alarmed that neither his brightest med students nor seasoned colleagues seemed capable of correctly diagnosing cases of child abuse, regardless of how blatant or overwhelming the evidence. Apparently, they simply would not or could not accommodate the occurrence of child abuse, let alone its prevalence, within their world view.

    The good part is that it pissed him off so much that he blew the whistle in a 1962 article in the Journal of the American Medical Association, which caused widespread public and professional uproar (he was eventually nominated for a Nobel Prize for his continued advocacy). The bad part is that the availability bias soon kicked in, and if parents in the early ’60s brought their kid to the doctor for injuries sustained from “falling off a swing”, they’d be grilled like a cheeseburger, even if their kid really DID fall off a swing.

  2. Availability bias is of course a member of a large cognitive-bias family. There are plenty of those around; we are all susceptible to them. This fact is well exploited by such institutions as politics and religion. I dare say religion would be impossible without them. And the current political scene is just oozing with them. On a happier note: there is this sweet little number on YouTube – Cognitive Bias VideoSong
    http://www.youtube.com/watch?v=3RsbmjNLQkc&NR=1

  3. A moving piece of singer-songwriterly artistry. I hope that when/if he performs this number live, he freestyles an additional verse with biases shouted out from the audience. I'm rather fond of the "well-traveled road effect" (underestimation of the duration of an oft-taken route, vs. a less familiar one) which I suspect accounts for at least half of my punctuality failures.
