Feb 12 2013

Gorilla in the Bronchi

Take a look at the picture just below the fold. Pretend you are a radiologist and your job is to find anything strange or abnormal on the scan. You are specifically looking for signs of cancer, but you need to find anything abnormal.

Done? OK – did you see the gorilla in the right upper corner of the scan? If you didn't, don't feel bad. Neither did 83% of radiologists studied, according to Trafton Drew, who ran the study (which has not yet been published).

Readers of this and other skeptical blogs are likely familiar with this classic video of students tossing basketballs to each other (if not, take a look before reading further). About half of the people viewing this video miss the obvious gorilla strolling across the screen. This is a phenomenon known as inattentional blindness – when our attention is focused on a specific task, we are likely to miss information that is extraneous to that task, even if it is in our visual field and otherwise obvious.

As an aside, when doing some further background research for this post I came across this blog post on The Invisible Gorilla, by Daniel Simons, the researcher who did the famous basketball-gorilla video experiment. He writes about two studies that were discussed in Mary Roach’s book, Spook. They were probably the first studies of inattentional blindness, published in the Journal of the Society for Psychical Research in 1959. They were only inadvertently about inattentional blindness, however.

The researcher was interested in ghost sightings, so he donned a sheet and walked across a college campus in one study, and across a movie stage in another (while a trailer was playing). In the first case no one reported seeing anything unusual, and in the second only about half the audience noticed anything. The author concluded that "real" reports of ghost sightings must be different, that they must contain some psi component. What he really documented, however, was the first experiment to demonstrate inattentional blindness. The ghost fared about as well as the gorilla – 50% noticed something.

This whole business is more than just a fun parlor trick, however. This research has implications for everyday life, such as driving. Not seeing a pedestrian or motorcycle in your path can be the same as missing the gorilla, but with dire consequences. The task in the current study – radiologists interpreting a CT scan – is, of course, of great real-world importance. This is particularly interesting because radiologists are trained to see everything on a scan.

I can see how radiologists would be an interesting group to study in follow-up to the basic gorilla experiments. Radiologists are very consciously looking at the entire scan for anything unusual, but at the same time are looking for specific known abnormalities, such as evidence of cancer (the specific task given to the radiologists in this study). Therefore they may be engaging in a deliberate combination of task-specific attention to catch the known and broad attention to catch the unexpected.

This study suggests that most radiologists are engaging in the former more than the latter. This leads to many further questions and potential follow up studies: In this study the radiologists were told to look for signs of cancer, and they probably believed that was the point of the study, so the results may be partly an artifact of study design. I would like to see the same study but on a group of naive radiologists who don’t know they are being studied. (There may be some ethical problems with this kind of study, including consent but also perhaps unintended consequences on the real medical readings the subject radiologists are doing.)

Researchers can also study if differing instructions will alter the outcome – being told to find cancer vs being told to find anything abnormal. I would also like to see if there is any correlation, positive or negative, with finding the gorilla and the medical accuracy of the radiologists. Are radiologists who see the gorilla better or worse at also finding cancer? At the very least this research should have implications for training radiologists, and perhaps even suggest protocols to minimize missing pathology.

This is yet another example of the importance of understanding how our brains work, and specifically how they process information. Being naive about normal cognitive processing leads to unrecognized potential for error, and also to erroneous conclusions.

In the ghost studies above, the researcher, like many sincere believers in the paranormal, could not understand how people could miss something as obvious as a person wearing a white sheet walking down the sidewalk. This led to a paranormal interpretation of the results. Now we understand this as a neurological phenomenon. The same is true of many phenomena that are interpreted as paranormal but are really just neuropsychological.


39 Responses to “Gorilla in the Bronchi”

  1. nybgrus on 12 Feb 2013 at 8:34 am

    This is indeed interesting, Dr. Novella. I recall reading somewhere some time ago (yes I know, very specific citation on my part) that there is only about a 70-80% inter-observer reliability between radiologists' reads,* and thus the suggestion that all studies be double read. Logistically that becomes problematic, of course.

    By anecdote, when I was on service last year, I was looking at my patient's chest x-ray (lateral and PA). The patient had chest pain and obvious clinical evidence of pneumonia. The radiologist read the minor lobar pneumonia she had, but missed the obvious vertebral crush fractures she also had. I was uncertain of my own read, so I asked my attending to verify and she agreed.

    When we send a patient for a radiographic study, we always include a "why," which is helpful because it helps the radiologist focus in on what is likely and better sift the wheat from the chaff of the read, but it can also be harmful if the radiologist focuses too much on what (s)he thinks the problem is. It may not even be the problem, or there may be an incidental finding – like my patient's crush fractures – that would be very useful to know about.

    It seems like this is a case for increased awareness of the tricks our minds play on us and a directed effort to increase neuropsychological humility in radiology residency programs. Of course, the latter is probably useful for any training program of any kind, medical or otherwise.

    *As I recall, this was not saying that 30% of radiologists got the reads "wrong," but that out of all the things that could possibly be seen in a radiograph, most radiologists missed a number of minor things; the major things were more often seen, and the overlap between the major and minor things seen was 70-80%.
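[Ed. note: the 70-80% figure above is raw percent agreement. Inter-observer reliability is more often reported as Cohen's kappa, which corrects raw agreement for the agreement two readers would reach by chance alone. A minimal sketch – the ten "reads" below are invented purely for illustration, not taken from any study:

```python
from collections import Counter

def cohens_kappa(reads_a, reads_b):
    """Chance-corrected agreement between two raters over the same cases."""
    assert len(reads_a) == len(reads_b) and reads_a
    n = len(reads_a)
    # Raw percent agreement: fraction of cases where the raters match.
    observed = sum(a == b for a, b in zip(reads_a, reads_b)) / n
    # Expected agreement if each rater labelled at their own marginal rates.
    freq_a, freq_b = Counter(reads_a), Counter(reads_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two hypothetical radiologists labelling ten scans normal/abnormal:
a = ["abn", "abn", "nl", "nl", "nl", "abn", "nl", "nl", "abn", "nl"]
b = ["abn", "nl",  "nl", "nl", "nl", "abn", "nl", "abn", "abn", "nl"]
print(round(cohens_kappa(a, b), 2))  # → 0.58 (raw agreement here is 0.80)
```

The point of the correction: two readers who agree on 80% of cases may still have a kappa well below 0.8, because many of those agreements would happen by chance given how often each reader calls a scan abnormal.]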

  2. amhovgaard on 12 Feb 2013 at 8:42 am

    I’d like to know what happens if they’re told to look for cancer, but the real problem is something else that would be obvious if they knew what to look for…

  3. Mr_Hunnicutt on 12 Feb 2013 at 9:02 am

    I think the real take-away from these types of studies is: Always check for gorillas

  4. The Skeptical RN on 12 Feb 2013 at 9:22 am

    I saw the basketball gorilla experiment and was among those who did not see anything unusual. Your blog post put a slightly different spin on the phenomenon for me personally. Upon reading the second paragraph, the gorilla seemed to just pop onto the screen like magic. I am not sure if that was your intention, but it made a strong impression. It may be interesting to conduct the study on ADD subjects to see the differences and test whether hyper-focusing is a factor.

  5. Rikki-Tikki-Tavi on 12 Feb 2013 at 9:33 am

    Maybe there was a gorilla in my shower this morning, but I was focusing too much on the soap… XD

    Whenever I or somebody close to me is in the hospital, I insist that a second physician, preferably the head of the respective department, has a look at the case before any major decision (operation/no operation/discharge/etc.) is made. I know this is obnoxious, but I owe my lower right leg to the fact that my mom does the same.
    When the head surgeon decided to operate after a fracture, the leg had already started rotting under the cast.
    Both the first physician (a deputy director, I think) and the chief looked at the same x-ray images, and came to vastly different conclusions. So, as problematic as it is, logistically, I definitely think that more than one physician should independently look at any critical test result.

  6. elmer mccurdy on 12 Feb 2013 at 12:11 pm

    You might want to change the title. I found the gorilla because I looked for it.

  7. elmer mccurdy on 12 Feb 2013 at 12:20 pm

    …plus not looking for cancer because I don’t know how.

  8. mindme on 12 Feb 2013 at 1:01 pm

    I did not see it until I was directed where exactly to look. I saw this posted on several facebooks. The problem for me is that the white center mass is vaguely face-shaped. I kept looking at that, trying to figure out how people could interpret it as gorilla-like.

  9. quarksparrow on 12 Feb 2013 at 2:25 pm

    @elmer mccurdy: I was specifically looking for a gorilla because of the title, and STILL didn’t see it! (Was expecting some sort of pareidolia-induced gorilla face popping out of the overall image, not an actual gorilla pasted in there — so the distraction of looking for the gorilla made me miss the gorilla!)

  10. SARA on 12 Feb 2013 at 3:26 pm

    Some form of checklist like the ones used by pilots might force a radiologist to take each step of review more carefully.

    If you do something repeatedly, you often develop a habit of “efficiency” quite unconsciously. I used to quality review work for large monetary payouts by checking processing, regulation and individual contract.

    It becomes easy to develop a sort of unconscious shortcut of looking for the common mistakes. You can also end up unsure whether you did check for certain things at the end of the process if you didn’t find any mistakes.

    It’s very hard to maintain that level of mindfulness for a complex but very familiar task. I had to have a checklist, and I forced myself to use it, although it took longer.

  11. Bill Openthalt on 12 Feb 2013 at 6:29 pm

    Arguably, NOT seeing the gorilla means the brain is doing its job – filtering out obvious non-relevant distortions. In all likelihood, a skilled radiologist would have noticed a genuine problem, with the gorilla subconsciously filtered out as an irrelevant artifact. To me, this is similar to immediately noticing a single misspelled “it’s” on a sheet, but having to consciously, slowly and laboriously examine sections of photographs when playing “spot the differences”.

    The comparison with a driver not noticing pedestrians doesn’t hold – the problem there is the speed of the car not allowing sufficient time to scan the required field, combined with the assumption of rational behaviour on the part of other traffic participants. This, of course, assumes a driver concentrating on the traffic conditions, not one distracted by an activity such as texting.

  12. dogugotw on 12 Feb 2013 at 7:07 pm

    I heard about this on NPR the other day. In the NPR bit, part of the focus was what 'the question' is. If you focus the viewer on something specific (number of basketball passes or indications of cancer), you cause the person to focus on that request, so part of the trick is to rephrase the question in such a way that the person responding is open to the unexpected.

  13. ccbowers on 12 Feb 2013 at 9:04 pm

    “Not seeing a pedestrian or motorcycle in your path can be the same as missing the gorilla, but with dire consequences.”

    I agree with Bill somewhat on this point. While operating a vehicle, pedestrians and motorcycles are directly relevant to the task of driving, so a person's attention should be focused on recognizing these as a normal part of that activity. Seeing a color-matched gorilla on a chest CT is not as relevant to the task of looking for cancer on such a scan. I do like the re-use of a gorilla as a variation on the basketball video.

    I guess it does make the point that complacency with regard to pedestrians and motorcycles (or whatever the object of concern is) could lead to dire consequences, since it would result in a delay of recognition when something does go wrong. The fact that mentioning the gorilla makes it instantly recognizable may be helpful in preventing inattentional blindness when it is harmful. Perhaps those "children at play" signs (or similar) could be helpful in this regard, although I suspect overkill of these types of warnings could lead to a blindness to the content of the warnings.

  14. curtis on 12 Feb 2013 at 11:09 pm

    I looked for the gorilla without success for at least a minute after reading past the fold. It was nearly 100% invisible until I tilted my LCD monitor enough to change the contrast. That sort of contrast issue used to be the main reason why reading slides worked a lot better than reading digital images, but I’m not sure if the tech has caught up by now.

  15. xdt on 13 Feb 2013 at 12:13 am

    “I think the real take-away from these types of studies is: Always check for gorillas”

    That sounds like something only a gorilla would say

  16. petrucio on 13 Feb 2013 at 2:38 am

    My father was diagnosed with lung cancer about two years ago. When I went to check his CT images from about 10 months before, I could easily see the nodule, at about 8mm at the time.

    I am aware that such things can pass unnoticed and radiologists are not perfect, but I worked at a company that made the very software used by them to aid in this task, and I know they usually get paid by productivity, which obviously causes them to try to work faster and then miss obvious nodules.

    A Lung CAD (Computer-Aided Diagnosis) module we were making at the time also detected that nodule without problems (that module was not yet available to the clinic at the time), and it regularly detects A LOT of real nodules that radiologists miss – it was quite an eye opener for me.

    I am still struggling with the decision whether to sue the imaging clinic over this; it's not about my father specifically, or getting any financial compensation – but I think radiologists getting paid by the amount of reports delivered is absurd, and if they are not getting sued over false negatives, what is keeping them from working ever faster and faster?

    The take-home message: if you are getting a CT to screen your lungs, demand a clinic that uses Lung CAD software. This task is just too tedious and error-prone for an unaided human to perform at the level of modern-day CAD software.

  17. BillyJoe7 on 13 Feb 2013 at 6:46 am

    I was also expecting an example of pareidolia, but then noticed the gorilla anyway.
    (Hey, my iPad doesn’t recognise the word “pareidolia”)

  18. nybgrus on 13 Feb 2013 at 9:13 am

    @sara:

    I absolutely agree. Atul Gawande does as well. We have this fear (or perhaps hatred? mistrust?) of checklists in medicine. Some feel it bogs them down unnecessarily. Some feel it impinges on their “clinical judgement.” I think it frees up my brain to think about the parts of medicine that are actually more complex, thus increasing my efficiency.

    @Bill Openthalt:

    I would say you are correct, except that a radiologist is not just supposed to be looking for certain things, but also for any anomaly, and then deciding whether it can be considered irrelevant or mundane. In many cases this becomes automatic, and you could say the gorilla wasn't supposed to be noticed since it is irrelevant. However, that can only be reasonably applied unconsciously if you truly have seen the same thing and considered it irrelevant so many times in the past that it is now unconscious. A gorilla pasted onto a CT should have been noticed and some sort of "error" registered in the brain of the radiologist. It wouldn't fit a known pathological finding, of course, but it also shouldn't fit a known non-pathological incidental finding. And that is the point here – we demonstrate inattentional blindness in this case because the radiologist is supposed to have at least paused to notice the very anomalous gorilla in the lung.

    @ccbowers:

    I think I have to disagree with both you and Bill a bit. Yes, having less time to scan whilst driving does make inattentional blindness more likely, but that's still at base what it is. Giving the radiologists 30 seconds vs 30 minutes would likely have changed the percentage of those missing the gorilla, but that is because in the longer case they have more time to overcome the inattentional blindness. So when you say:

    Seeing a color-matched gorilla on a chest CT is not as relevant to the task of looking for cancer on such a scan.

    I’d have to disagree based on the above comments – it is part of looking for cancer to find any anomalies and thus the gorilla should have registered.

    The other common feature is how our brains actually process visual information. When your eyes move from one place to another they do so in "saccades" – rapid movements from one fixed focal point to another. If the brain actually processed every "frame" of vision between each point, we would see motion blur. The reason we don't is that the first "image" seen after the saccade stops is retroactively substituted for every frame during the saccade. You can "see" this for yourself if you look at a large wall clock with a second hand. If you stare at the clock, then look away, then look back for an instant, it seems like the second hand "hangs" longer than it should. That is because you looked just as the second hand completed its motion; your brain filled in the visual memory of the time the saccade took and added it to the time the second hand actually took to move, making it seem as if it stayed in the same position for longer than one second.

    So when a driver is scanning rapidly and doesn't see a pedestrian, (s)he may actually really have not seen the pedestrian! If the saccade included the pedestrian but the initial and final focal points do not contain the pedestrian, then the retroactive filling in of visual memory literally erases the existence of that pedestrian.

    In the CT image the same sort of thing could happen with the gorilla. An interesting experiment would be to play around with the size of the gorilla. I also remember reading about a technology where you would wear glasses and look at webpages*. The glasses would track where your eyes focused, how long they focused there, etc. It would be very interesting to do this experiment again but with the radiologists wearing those glasses. Getting a robust baseline of non-experimental reads (i.e. just how radiologists look at films while working normally) would be beneficial to see if there are any particular patterns. Then doing this with the gorilla could potentially let us know if the radiologist missed the gorilla because (s)he never actually focused on it, and thus separate out saccadic blind spots vs inattentional blindness.

    *this technology was developed early in the internet era to maximize the design elements of webpages to make them fit more with how people were actually trying to scan pages. Hence why so many pages look very similar (or at least have similar basic elements) and those that are very different are more difficult for us to navigate. In general, anyways.

  19. nybgrus on 13 Feb 2013 at 9:13 am

    @BJ:

    (Hey, my iPad doesn’t recognise the word “pareidolia”)

    Maybe you should show it a piece of slightly burnt toast and see if the facial recognition sees Jesus.

  20. ccbowers on 13 Feb 2013 at 9:18 am

    I imagine that the less familiar a person is with looking at a scan like this, the more likely he/she will see the gorilla. Within radiology, I imagine that it is largely chance that determines whether a person sees the gorilla or not. Perhaps how a person evaluates the scan may impact whether or not the gorilla is seen, but I also wonder if seeing the gorilla correlates to skill in any way. Since the gorilla matches the surrounding area of the scan well, monitor/screen settings (as curtis mentioned) could also play a role. I am curious to see the actual study to help answer some of these questions.

  21. ccbowers on 13 Feb 2013 at 9:54 am

    “I’d have to disagree based on the above comments – it is part of looking for cancer to find any anomalies and thus the gorilla should have registered. ”

    You can argue this, but seeing a gorilla image may not correlate at all with the specific pattern recognition needed to identify clinically relevant abnormalities. Keep in mind that the image of a gorilla is a specific identification of an animal form, and it is not recognizable as an abnormality relevant to the scan because its color distribution decently matches what is normal.

    To some degree the importance of what is being argued is an empirical question. What if we find that identifying the gorilla is completely unrelated (or worse inversely related) to skill in radiology, then what you are arguing is not true in any meaningful sense.

  22. ccbowers on 13 Feb 2013 at 10:01 am

    … the reason I argue that is that I don't think it makes sense to look for "any abnormality." The brain has to scan in a more specific fashion than you are describing, which is likely superior if you have a decent idea of what you are looking for. I imagine it is problematic if the problem is different from what you are looking for, which is why this is important (not because a tiny gorilla is relevant).

  23. nybgrus on 13 Feb 2013 at 10:09 am

    True enough CC, but the reality is that it shouldn’t matter if the gorilla is color matched or sort of fits with the pattern of the lung parenchyma. So does actual pathology.

    The fact that it doesn’t correlate with specific pattern recognition and is thus discarded as irrelevant is exactly the problem and the point. We cannot possibly hope to have perfect or complete pattern recognition, nor can we expect every radiologist to have the same pattern recognition algorithms in their brains. That is why it is vital for every anomaly to be noted and if it doesn’t fit in the pattern recognition it should be “flagged” and further investigated. Otherwise actual pathology that just doesn’t quite fit the pattern recognition, but is still anomalous and would have been read given enough opportunity, can slip past the reader just like the gorilla. The experiment indicates that pattern recognition is working well, but the part about noticing things that fail pattern recognition is not working.

    I imagine that the less familiar a person is in looking at a scan like this, the more likely he/she will see the gorilla.

    I agree with you. Because the familiar person gets those pattern recognitions locked in place and more readily ignores things that don't fit them. And that is exactly the problem and exactly inattentional blindness.

    Also, I haven’t read the actual study but screens and rooms for radiology reads are set quite intentionally so that the monitor issues curtis mentions should be a controlled factor. Plus, the reading software that is used allows for very rapid changes in contrast, lucency, zoom, “windows” (which highlight different tissues differently) and are regularly used by radiologists. There should be no excuse for technical difficulties in reading the radiograph.

    To some degree the importance of what is being argued is an empirical question. What if we find that identifying the gorilla is completely unrelated (or worse inversely related) to skill in radiology, then what you are arguing is not true in any meaningful sense.

    Agreed. But in the absence of that data, lesser and converging evidence will have to suffice for a tentative conclusion. The fact that there is only about 75-ish% interobserver reliability in radiograph reads between trained radiologists, plus all the facts we know about inattentional blindness, plus this experiment and its plausibility, make it reasonable to provisionally conclude that missing something like a gorilla in the CT is a marker for missing actual pathology on reads. Exactly why, and how much it contributes, we can't yet answer. Your suggestion comparing the two is an excellent one. I think my eye tracker idea would also help shed some light on the matter.

    But what it all really boils down to for me is that pattern recognition is a great tool until it fails and leads you to make bad calls or miss things entirely. And built into your pattern recognition should be a mental “checkpoint” to flag things that don’t fit your pattern, regardless of what they are. You can still rapidly and easily write them off, but you shouldn’t completely miss them.

  24. nybgrus on 13 Feb 2013 at 10:26 am

    the reason why I argue that is because I don’t think it makes sense to look for “any abnormality.” The brain has to scan in a more specific fashion than you are describing, which is likely superior if you have a decent idea of what you are looking for. I imagine it is problematic if the problem is different than what you are looking for, which is why this is important (not because a tiny gorilla is relevant).

    Well, at least we independently homed in on the crux of our disagreement!

    I disagree for the reasons I stated above. Pattern recognition is a double edged sword. It does have to look in a specific fashion and do so in a rapid way. But there still should be a mental checkpoint for any abnormality.

    In fact, as we are taught how to read radiographs the very first thing every med student gets told is “Of course I don’t expect you to see everything or read it perfectly. I expect you even less to know what the abnormal finding is. But you should at least be able to see that there is something abnormal here.”

    That is the basis of how we are taught – at least recognize that there is an abnormality, because it is better to question something and not know the answer (you can always look it up or phone a friend) than to completely miss that it is there and lose any chance to interpret a finding.

    At first med students miss everything. Then they start missing less but at the same time seeing more that isn’t really there. Then they start missing more and seeing more that actually is there. And then, it would seem, the pattern recognition starts pigeonholing you and you begin missing things you would have seen back when you were an intern.

    Which suggests another avenue of discovery here – compare attending radiologists to interns and residents and see if that makes a difference.

  25. ccbowers on 13 Feb 2013 at 10:55 am

    “I disagree for the reasons I stated above. Pattern recognition is a double edged sword. It does have to look in a specific fashion and do so in a rapid way. But there still should be a mental checkpoint for any abnormality.”

    Ok, well I guess we really don't disagree. I don't mean to say that we shouldn't look for any abnormality, but that I think it is likely that experience improves our ability to see specific types of patterns, and this process will often result in us missing other patterns (because they are viewed as unimportant).

    I think that your point is that the patterns that are mentally categorized as unimportant may actually be important, and that is a flaw. I don't disagree at all, except that I think it is very possible that those who see the gorilla are not "better" at their craft. More relevant to that issue is whether they can identify unusual or atypical, but clinically relevant, abnormalities.

  26. ccbowers on 13 Feb 2013 at 11:52 am

    "That is the basis of how we are taught – at least recognize that there is an abnormality, because it is better to question something and not know the answer (you can always look it up or phone a friend) than to completely miss that it is there and lose any chance to interpret a finding."

    I imagine that experience is functioning as a bias here, in that the more similar patterns that are seen, the more a person is able to recognize unusual patterns that they view as important. I think your point is that some of the patterns that may be mentally characterized as unimportant may actually be important.

    The gorilla helps make that point broadly, but the direct way to address the question is to see if real abnormalities (real life examples), perhaps atypical ones, are recognized. I don’t think we want to use tiny gorillas as a surrogate marker for something that we can measure directly, but I guess that isn’t the point of all this. You cite a 75-ish % for interobserver reliability, and it makes me wonder what the intraobserver reliability is, for a given scan separated by sufficient time?

  27. BillyJoe7 on 13 Feb 2013 at 3:36 pm

    Would this solve the problem:
    Teach radiologists to imagine that the radiograph is normal and then to confirm that it is normal by looking at every portion of the radiograph and confirming that it is in fact normal (normal-pattern recognition). That should reveal any unidentifiable anomalies (WTF?) as well as pick up anything identifiably pathological (abnormal-pattern recognition) – because they don’t fit with what is recognisably normal.

  28. petrucio on 13 Feb 2013 at 4:39 pm

    It's worth pointing out that many cancer nodules may look just like normal veins and arteries, and the radiologist must see the images above and below to see if the structure moves like an artery would, or if it grows and shrinks like a closed ball would.

    Was this test done with only this single image, or is this image part of a set of dozens? I'd be surprised if the miss rate were this high if it's only this one image.

  29. Bill Openthalt on 13 Feb 2013 at 7:30 pm

    I think the brain can be trained to look for specific patterns, not for “any anomaly”. Reverting to my text example, I observe that my spotting errant “it’s” is a fully subconscious process — they just leap off the page. Scanning a text for words I don’t know is a slow conscious process. When we have trained ourselves to do something efficiently, it seems a lot more difficult to revert to the conscious process. I noticed this when trying to teach my son to drive; forcing my subconscious driving skills up to conscious level turns out to be far less easy and accurate than I ever expected.

  30. nybgrus on 14 Feb 2013 at 7:09 am

    I don’t disagree at all, except that I think it is very possible that those who see the gorilla are not “better” at their craft. More relevant to that issue is if they can identify unusual or atypical, but clinically relevant abnormalities.

    I agree. And indeed there may be a very real possibility that missing the gorilla has nothing to do with actually missing pathology – it could be utterly unrelated, or correlated but not causal, and be pure coincidence.

    However, what this tells me is that either the radiologists didn’t see the gorilla at all (which means they didn’t actually look at that part of the CT scan) or that they saw it and subconsciously discounted it as irrelevant.

    In the former, I think we can agree that pathology could have existed there that they did not see.

    In the latter, it seems a failure on their part that they could see something so obviously out of the ordinary and discount it as irrelevant without it even crossing conscious thought.

    No matter how you slice it, without further data to suss out the potential confounders here, it seems reasonable to conclude that this is an example of genuinely missing something that shouldn't be missed.

  31. Calli Arcale on 14 Feb 2013 at 1:20 pm

    I’m not a radiologist, but I have to point out that this is the third story I’ve read on this study and also the first where I actually spotted the gorilla. And I was looking for it. It’s horribly obvious now. How didn’t I see it before? I thought the same the first time I saw the gorilla video; I didn’t see the gorilla then either.

  32. ccbowers on 14 Feb 2013 at 2:39 pm

    “However, what this tells me is that either the radiologists didn’t see the gorilla at all (which means they didn’t actually look at that part of the CT scan) or that they saw it and subconsciously discounted it as irrelevant.”

    Yes, this is a key distinction and I agree with the statements that follow.

    “I thought the same the first time I saw the gorilla video; I didn’t see the gorilla then either.”

    The gorilla in the video was very obvious to my eyes (and brain), and I find it strange that people often don’t see it. I have watched a lot of basketball in my life, which likely affects my result, because counting the passes does not require as much of my attention. The gorilla in the scan, however, took me a while, despite the hint in the title.

  33. Scott K on 15 Feb 2013 at 1:54 pm

    I really wish you hadn’t mentioned the gorilla in the title. That kind of blows it for most people.

  34. Neuroradjack on 19 Feb 2013 at 8:19 pm

    I am a radiologist. A neurosurgeon friend of mine emailed me this image before I read Dr. Novella’s post. The jpeg was entitled “chest tumor.” My eye was initially drawn to the pulmonary nodule in the left lower part of the picture (which is actually the lower lobe of the right lung). Within seconds I discovered the dancing gorilla. I responded saying that he should have the monkey checked out and ignore the nodule.

    After my reply, my friend sent me the link to a BBC article about this study. I confess, I was really surprised that such a huge percentage of radiologists missed this “finding.” It makes me wonder, too, whether there was something to the study design that set the radiologists up to fail. I also wonder if most radiologists identified the pulmonary nodule and assumed that was the finding they were meant to identify. We in the field call this “satisfaction of search.”

    @SARA
    Radiologists do use a checklist of sorts. In my practice, we have templates for our reports that remind us to look at certain structures.

    @nybgrus
    We all have anecdotes about missed findings. I myself have missed my share of abnormalities. That said, much of the interobserver variability in radiology reports probably has more to do with style. Some radiologists mention every little thing. Others choose only to mention things they think are important. In the case of your patient’s “crush” fractures (assuming you mean compression fractures), if the patient did not have back pain, it may have been assumed that these were chronic and therefore not clinically important.

    @petrucio
    I am sorry to hear about your father. It is true that an 8 mm nodule should be identified by a radiologist (btw, they are called nodules, not nodes; nodes are usually lymph nodes). We have looked into getting CAD for chest CT because it can in fact be very tedious to look for tiny little nodules all over the place. We have a similar program in use for detection of small calcifications in mammography. That said, your characterization of radiologists and their compensation is unfair at best. In my practice the radiologists are compensated based on the number of days worked without regard for productivity. I do not know a single radiologist who rushes through cases to make more money. In real practice, there are many, many interruptions (e.g., from ordering doctors, technologists, phone calls) throughout the day. These interruptions are likely a real contributor to missed findings.

    Forgive the rambling, I rarely comment. Thank you Dr. Novella!

  35. nybgrus on 20 Feb 2013 at 8:31 am

    @neuroradjack:

    Indeed, I believe it was clear that I noted there may have been something to set up the failure, but it seems that the study actually tracked eye movements and found that the radiologists did in fact look at that part of the screen. I can certainly understand not mentioning a compression fracture (we do use “crush” colloquially in my institution – perhaps because part of my education was in Australia?) if the radiologist thought it was chronic and not the point of the study. But not mentioning a gorilla? That still seems odd to me. At the very least I would imagine it should register and make the radiologist laugh and immediately realize the point of the study in which they were partaking.

    As for the compression fractures and the interobserver variability – indeed, I think you are at least partially correct. My patient was admitted for suspected pneumonia, but “chest pain” was specifically noted in the study order. To me that would indicate noting the fractures. And even then, I’ve seen plenty of reads where obviously old findings are mentioned. If a patient has surgical clips in place on an abdominal scan, those are (at least in my experience – perhaps you can tell me different?) pretty much always noted, even though they are obviously not acute and we know exactly what they are and why they are there. By the same token, if the compression fractures were omitted due to assumed chronicity, shouldn’t surgical clips be omitted from the report as well? (That is a genuine question – perhaps there is some legitimate reason to note the latter and not the former that I don’t know about, since I haven’t quite graduated from medical school yet.)

    The differences in reporting styles almost certainly account for much of the interobserver variability. The question I would have then is: wouldn’t it make sense to standardize the reporting and include chronic findings, so that they can be tracked, giving more confidence in the read as well as confidence that there is no acute-on-chronic process? I know in certain cases comparison to old films is done and it is noted whether there are interval changes in chronic radiological findings – perhaps this is simply too time consuming and low yield to do routinely? (Once again, a genuine question.)

    In any event, thank you for your comments. I really am no expert in the nuances involved, but as I said above, it seems to me that if a radiologist actually looked at the gorilla (which the eye-tracking data seem to indicate) then it should have been noted – after all, I doubt chronic gorillas are a mundane finding to be omitted from reports :-P It just seems to me that the take-home message is that anyone can become a little too focused on finding specific things, to the exclusion of other findings. I seriously doubt it is as horribly bad as many of the popular article titles imply or state outright, but it is something to be cognizant of, and to act on to boost accuracy.

    Anyways, my rambling thoughts over coffee before heading off to clinic.

  36. ccbowers on 20 Feb 2013 at 9:43 am

    There are important details missing regarding this study that will be clarified when it is published. One thing is clear: they did not just give the radiologists this one image and ask them if they noticed anything unusual, which may be obvious, but is what some articles were implying. They were given many scans, only a few of which contained gorilla images. I’m not sure if the gorilla images became clearer in later scans or if they just appeared, which may also matter.

  37. HHC on 20 Feb 2013 at 2:46 pm

    Enjoyed reading this post about Mr. Drew’s study. Did the radiologists really miss the tiny angry gorilla? Perhaps they were having a bit of fun with Mr. Drew’s “medical” request. For example, how often do you complain about a typographic error to the author, as long as the science writing makes sense?

  38. Neuroradjack on 20 Feb 2013 at 3:08 pm

    @nybgrus

    Indeed I continue to find it puzzling that radiologists would “look at” but not “see” the gorilla. I agree this may have to do with the task-specific attention that Dr. Novella suspects. I still think they would have stopped looking for anything else if/when they identified the nodule in the other lung. I very seriously doubt that anybody would identify the gorilla and simply choose to ignore it.

    I happen to be one of those radiologists who mentions every little thing :). And I agree it should be standardized. At least in my practice we encourage all readers to use a similar format and verbiage so the ordering clinicians can expect some consistency in our reporting.

    In your patient’s case, given the history of chest pain and the lack of prior films for comparison, a radiologist should have identified and described the vertebral compression fractures. If there are comparison radiographs and the fractures are not new — if, for example, the patient had a chest x-ray just days before — I might not mention the fractures again. Given the additional details of your anecdote, it does seem more likely that the radiologist just didn’t look or didn’t see the fractures.

    As for the surgical clip question, it is not an issue of acuity or chronicity in this case. Patients are often poor historians and do not report prior surgeries. It can be helpful to document evidence of prior surgeries for this reason. I cannot tell you how often I’ve received requests to perform an ultrasound specifically to evaluate the gallbladder on patients who have had a cholecystectomy.

    I agree with Dr. Novella that recognizing bias and understanding how our eyes and brains process information will help at least identify if not prevent the potential for error. For now, I have raised my index of suspicion for unexpected safari animals on all imaging studies.

  39. nybgrus on 20 Feb 2013 at 10:40 pm

    lol. Thanks for the response, neuroradjack.

    Honestly, I think radiologists as a whole do a great job. Hell, I think doctors of all ilks do a great job. Obviously there are bad ones. I hope I didn’t come across as trying to single any group out. It merely seemed like evidence of an opportunity to improve and be aware, is all. Of course, that is from the still somewhat naive POV of a lowly 4th year med student, for whatever that is worth.

    @ccbowers:

    In the latest SGU, Dr. Novella discusses this and states that the scan was a normal sequential CT scan, with a gradual increase in the opacity of the gorilla over successive slices. I do agree, however, that the potentially significant confounder I see here (which neuroradjack implied) is that the radiologists knew they were in a study, were told to look for nodules, and were singularly focused on doing that rather than performing a “normal” read. I can’t imagine that the study would have been so flawed as to explicitly set up that sort of failure, but it could have happened anyway.
