It is to the credit of radiologists that in recent times we have begun to acknowledge the pervasive nature of error in our discipline. But really, how could we not? A radiologist who spends most of his or her days reporting, commenting on, and discussing cross-sectional imaging -- as many of us do -- will come across instances of abnormalities that have been overlooked or misinterpreted with depressing frequency.
The more we undertake serial imaging and have the opportunity to review previous studies, the more this is the case. Recognizing the pervasive nature of error in our specialty is an important first step, but this greater awareness of our limitations will only serve a useful purpose if we can take the next step for our patients and start to reduce the number of errors that we make.1
All of us recognize the phenomenon of the abnormality that is glaringly obvious on one occasion but completely escaped our attention on another. Back in the days of film, I had a clinical colleague who would come and find me at my reporting desk flourishing a sheet of film saying: "What do you think of this?" Without missing a beat I would say, "There's a mass at the right apex/a lesion in the cingulate gyrus/a deposit in T6..." or whatever was appropriate to the film in front of me. "Hmm," he would say, and after a pause: "It's a shame that's not what you said the first time you saw this." And he would then read me my report on the study in question, which of course completely failed to mention the abnormality that had been so obvious to me a second before.
There is a great deal more work to be done to understand why we make mistakes in radiology -- specifically why our perceptual and cognitive functions seem to let us down so often -- but we already know enough to make a start on devising some approaches to prevention. I find it helpful to divide these into "this time" and "next time" strategies.
To improve the chance of an accurate interpretation of the study that I am currently reporting -- "this time" -- there are some basic and fairly obvious requirements: a well-trained radiologist, rested and alert; optimized viewing conditions; availability of relevant clinical information and previous imaging; protection from interruptions; a technically satisfactory examination; and so on.
Less attention has been paid to the part that might be played by improving image display and hanging protocols. For example, we know that detection of pulmonary nodules is improved by reviewing maximum intensity projection (MIP) images, so ideally our systems should present us with such images without requiring us to create them each time ourselves. The use of structured or template reports may also help by prompting us to review areas or structures that we might otherwise overlook, particularly if our attention is diverted by more obvious pathology -- the well-known phenomenon of "satisfaction of search."
The ultimate "this time" strategy is for the images to be reviewed by more than one pair of eyes and ideally for more than one person to contribute to the report. This is quite different from a system of double reporting undertaken at a later date for purposes of quality assurance. That is a "next time" strategy (see below), unless we believe that the knowledge that a report will be scrutinized by another radiologist inevitably leads to an improvement in everyday performance.
I have written elsewhere about the problems of the definitive report on a complex study being issued after review by a single individual on a single occasion under uncontrolled circumstances.2 Surely we have enough evidence by now of both interobserver and intraobserver variability to accept this?3
The rapid progress we are seeing in the fields of artificial intelligence and machine learning will undoubtedly mean an increasing role for computer-aided detection (CAD) and in due course computer-supported interpretation. These tools may in the future take on the role of the "second reader," but there is some way to go before we understand how the interaction between human and machine can best benefit patients in radiology.
Into the category of "next time" strategies, I put everything that is designed to make us better radiologists in the future: the whole business of continuing education, professional development, and keeping up to date with advances in our specialty. Quality assurance processes, as mentioned above, may also help by highlighting specific or recurring patterns of impaired performance so that these can be addressed for the future.
Learning from discrepancies meetings (LDM)4 are another form of "next time" strategy, enabling us to learn from each other's errors and maybe to help prevent those few errors that are caused by a lack of knowledge as opposed to perceptual or interpretive deficiencies. LDM also play a wider part in contributing to a healthy and supportive department culture in which radiologists are encouraged to seek advice from colleagues and to share knowledge and information.
A healthy department culture has both "this time" and "next time" advantages. All of us find ourselves in the course of our work reviewing previous images and reports, and we are frequently in a position to inform the reporter of an earlier study about the outcome of further imaging, surgical intervention, or perhaps biopsy results. It is time that we took advantage of these opportunities to provide our colleagues with the feedback that will enable them to become better radiologists.
The role of feedback in improving performance has been established across many spheres of activity.5 Some readers may be familiar with the analogy of the golfer hitting balls in the dark: Deprived of the feedback of seeing where the ball has gone, she will not be able to make the technical adjustments required to improve. Too often, radiologists have been in the position of that golfer -- our reports go out and we have no idea whether or not they have hit the mark.
The U.K. Royal College of Radiologists (RCR) has rightly placed great importance on the potential role of feedback in improving our performance.6 Further guidance is expected shortly including suggestions for technical solutions. We should accept that we have a professional duty as radiologists to provide such feedback to each other when the opportunity arises and we will all become better radiologists as a result.
We will not eliminate error from radiology, but acceptance of its inevitability must not translate into complacency. We can and we must do more to understand why it happens and to use that knowledge to reduce it -- this time and next.
Dr. Giles Maskell is a radiologist in Truro, U.K. He is past president of the U.K. Royal College of Radiologists. Competing interests: None declared.
1. Brady AP. Error and discrepancy in radiology: inevitable or avoidable? Insights Imaging. 2017 Feb;8(1):171-182.
2. Maskell G. The practice of radiology needs to change. BMJ Opinion. 19 June 2017. http://blogs.bmj.com/bmj/2017/06/19/giles-maskell-the-practice-of-radiology-needs-to-change/. Accessed 2 July 2017.
3. Abujudeh HH, Boland GW, Kaewlai R. Abdominal and pelvic computed tomography (CT) interpretation: discrepancy rates among experienced radiologists. Eur Radiol. 2010 Aug;20(8):1952-1957.
4. Royal College of Radiologists. Standards for Learning from Discrepancies Meetings. https://www.rcr.ac.uk/publication/standards-learning-discrepancies-meetings. Accessed 2 July 2017.
5. Syed M. Black Box Thinking. Hodder and Stoughton; 2015.
6. Royal College of Radiologists. Quality assurance in radiology reporting: Peer feedback. https://www.rcr.ac.uk/publication/quality-assurance-radiology-reporting-peer-feedback. Accessed 2 July 2017.
The comments and observations expressed herein do not necessarily reflect the opinions of AuntMinnieEurope.com, nor should they be construed as an endorsement or admonishment of any particular vendor, analyst, industry consultant, or consulting group.