
With all due respect, there is so much absurdity in the assumptions made in your linked article that it is almost not worth engaging with. However, I will, for educational purposes.

As someone who is trained in and comfortable reading radiographs but is not a radiologist, I can tell you that putting a gorilla on one of the views is a poor measure of how many things are missed by radiologists.

Effectively interpreting imaging studies requires expert knowledge of the anatomy being imaged and of the variety of ways pathology shows up in a visibly detectable manner. What radiologists are doing is rapidly cycling through what is effectively a long checklist of areas to note: evaluate the appearance of the hilar and mediastinal lymph nodes, note the bronchiolar appearance, look for interstitial or alveolar patterns (considered in the context of what would be expected for a variety of etiologies such as bronchopneumonia, neoplasia, CHF, ...), check that the cardiac silhouette has appropriate dimensions, look for other evidence of consolidation within the lungs, confirm that the visible vertebrae are appropriately aligned, that the endplates appear normal, that there are no vertebral lucencies, and on and on.

Atypical changes tend to cluster in expected ways. A deviation from what is expected will often trigger a bit more consideration, but those expectations are only subverted in the course of working through that "checklist". No radiologist has "look for a gorilla" on their checklist.

It is pretty clear that the layperson's idea of what a radiologist does, "look at the picture and call out anything that is different," completely misses what is actually happening during the evaluation.

It's like if I asked you to show me your skills driving a car around an obstacle course, and then afterwards said you are a bad driver because you failed to notice that I had swapped out one of the lug nuts on each wheel for a Vienna sausage.



My dad is a radiologist and… not so dismissive of this study. Missing other conditions on a reading due to a focus on something specific is not uncommon.

Things like obvious fractures left out of a report.

https://cognitiveresearchjournal.springeropen.com/articles/1...

> For over 50 years, the satisfaction of search effect has been studied within the field of radiology. Defined as a decrease in detection rates for a subsequent target when an initial target is found within the image, these multiple target errors are known to underlie errors of omission (e.g., a radiologist is more likely to miss an abnormality if another abnormality is identified). More recently, they have also been found to underlie lab-based search errors in cognitive science experiments (e.g., an observer is more likely to miss a target ‘T’ if a different target ‘T’ was detected). This phenomenon was renamed the subsequent search miss (SSM) effect in cognitive science.


The study you linked effectively reinforces the points I made above. Given the search pattern used and my earlier comments about the expectations maintained during a read, it follows that the described SSM effect is a source of errors.

Putting a gorilla on a view and then running a sensationalized NPR piece about how "83% of silly radiologists just can't see what you managed to see" is not that.

In fact, I would argue the SSM effect is present in many aspects of medical decision-making, and likely in other industries. Another way to frame the SSM effect is to call it the "this case has the initial patterns of the thousands of other routine cases of disease X, so it is almost guaranteed to be disease X and I have to get home for my daughter's dance recital" effect. It's an optimization strategy that works most of the time.


Doctor here. Agree with this. Plus, when requesting an imaging study, we provide the patient's history and a list of differential diagnoses. The radiologist looks at the X-rays in that context.

Interpreting X-rays in isolation has little relevance to actual clinical practice.


Exactly. Everyone laments provider shortages and resource constraints, but when the people doing the work optimize their strategies to get through it faster, we get dumb articles like the NPR one linked above.

As helpful as it is for rads to catch unexpected incidental findings, that's not the point. If I'm waiting on a stat read, I've got specific questions I'm looking to have answered quickly. I literally don't want the radiologist wasting time hunting for random oddities if it delays getting the reads back.



