Demand management, responsible requesting, appropriate referring, clinical vetting -- call it what you like: Managing the demand for medical imaging is a hot topic. When it’s cheaper, easier, and apparently more objective to get a scan than to get a senior and holistic medical opinion, the demand for imaging will only increase.
Whether demand management is a good or a bad thing depends on your point of view. When I was a trainee, a colleague told a story about a placement he had done in a large hospital in the U.S. After vetting a sorry litany of poorly justified inpatient ultrasound requests by chucking a third of them in the rubbish bin -- as was his normal practice in the U.K. National Health Service (NHS) -- he was told by his senior physician (in the strongest terms and with a healthy smattering of agricultural language) that he had cost the department $25,000 in one morning and to please cease and desist.
On the other hand, in the increasingly austere environment of U.K. healthcare, the catchy "supporting clinical teams to ensure diagnostic test requesting that maximizes clinical value and resource utilization" is an important component of the effort to increase the productivity of the service.
This remains true, even if the inexorable increase in demand -- driven by increasing hospital attendance, direct requesting from primary care, and widening indications for complex imaging such as CT and MRI -- is unavoidable, and perhaps mandated.
Responsible requesting
What levers can we put in place to try to ensure responsible requesting?
There are some lessons we can learn here from two Nobel laureates: Daniel Kahneman and Richard Thaler. Kahneman won the Nobel Prize in Economics in 2002 for his work on prospect theory, and Thaler won the 2017 prize for his work on behavioral economics. Their work on how people make decisions has implications for radiology requesting.
In his book, "Thinking, Fast and Slow," Kahneman describes the multiple biases and cognitive traps that distort the way we think and form judgements. One of these is "base rate neglect," in which we make narrative judgements about individual cases without thinking about the statistical likelihood of that judgement being correct.
One of the simplest examples of this in his book is the following scenario:
A young woman is described as a "shy poetry lover." Is she more likely to be a student of Chinese or business administration? The base rate (numbers of people who study the two subjects) tends to suggest the latter, and the fact that she is female and is a "shy poetry lover" tells you nothing of objective relevance as to which subject she chose.
But which subject jumped into your mind? Worse than this, even when we are made aware of the base rate, we tend to ignore this information. We continue to do this even when we are also reminded of the tendency to neglect base rate. You'll probably find, even now, you cannot quite shake the image of the young woman studying Chinese.
In a radiology requesting context, the base rate might be the statistical likelihood of a specific diagnosis. It might also be the rates of requesting of an individual, department, or hospital group relative to relevant peers ("over-" or "under-requesting") or other averaged metrics.
But what the base rate neglect phenomenon tells us is that this information plays little or no part when a clinician forms a judgement about whether to request imaging. The referrer's thought processes create a narrative image of representativeness (of a patient's presentation and a likely diagnosis) that may be completely divorced from the statistical likelihood of that diagnosis -- hence referrals with the irritating query: "please exclude..."
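To make this concrete, here is a minimal sketch with entirely invented numbers: even when a presentation strongly "looks like" a diagnosis, a low base rate can leave the real probability of that diagnosis in single figures.

```python
# Toy Bayes calculation with invented numbers -- not clinical data.
prevalence = 0.01           # base rate of diagnosis X in similar patients (assumed)
sensitivity = 0.80          # P(presentation "looks like" X | patient has X) (assumed)
false_positive_rate = 0.10  # P(presentation "looks like" X | patient does not) (assumed)

p_looks_like_x = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_x_given_looks = (sensitivity * prevalence) / p_looks_like_x

print(f"P(X | presentation looks like X) = {p_x_given_looks:.1%}")  # ~7.5%
```

The narrative impression ("this looks like X") feels far more compelling than 7.5%, which is precisely the trap.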
Similarly, colleagues who are informed of their imaging practice relative to peers are unlikely to weigh this information when making decisions about imaging a specific patient at the point of requesting. If our goal is behavior change, the base rate neglect phenomenon tells us it's pointless to describe the base rate, to use nonbinding clinical decision rules describing the likelihood of a particular diagnosis, or to spotlight systemic over-requesting relative to peers. This information simply will be neglected, often subconsciously. This is not how anyone, clinicians included, makes decisions.
Even for conscious thought processes, there will always be reasons why experts feel their judgement will outperform an algorithm (or a decision rule), despite evidence that they frequently do worse. Kahneman describes this as the illusion of skill, though there is some debate about the extent of this illusion and about the added value of expertise. However, when perceptions of skill are intimately bound to doctors' social role and idea of personal worth, it is singularly difficult for them to accept algorithmic decisions that undermine these perceptions and the utility of their subjective judgement.
The Thaler perspective
What else might work? Binding decision rules (e.g., not being allowed to request an imaging test unless certain criteria are met) and strict clinical pathways can help, though they can be prescriptive and rarely result in less imaging.
It is here that the work of Richard Thaler might be of value. In "Nudge," the book he coauthored with Cass Sunstein, he describes how people can be encouraged to make better decisions by careful design of the systems within which those decisions are made -- something he calls "choice architecture." A simple example is automatic enrollment in pension schemes: The design (architecture) of the choice offered (enrolled by default, with the option to opt out) favors enrollment over an alternative way of presenting the choice, such as asking people to enroll themselves (opt in).
In radiology requesting, decision rules could default to a particular scan type in particular clinical scenarios; information could be presented about relative cost, complexity, patient discomfort or radiation dose; alternatives could be suggested, including senior clinical review; imaging choices could be limited by requester seniority or prior imaging studies; and duplicate requests could be flagged.
None of this requires complex software or logic development, and much of the work has already been done (e.g., the iRefer resources from the U.K. Royal College of Radiologists). The critical step is to embed these resources into the requesting system choice architecture at the point of an imaging request. Referrers still have all options available to them, but they have to consciously decide to override a recommendation and consider the consequences of their choice.
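As a rough illustration, here is a minimal sketch of what such choice architecture might look like at the point of request, assuming a simple order-entry hook; every name and rule in it is hypothetical and is not taken from iRefer or any real system.

```python
# Hypothetical point-of-request nudges -- names and rules are invented.
from dataclasses import dataclass

@dataclass
class ImagingRequest:
    indication: str
    modality: str
    override_reason: str | None = None  # an override must be a conscious, stated choice

# Invented first-line recommendations keyed by indication.
RECOMMENDED_DEFAULT = {
    "suspected renal colic": "low-dose CT KUB",
    "low back pain, no red flags": "no imaging",
}

def nudges_for(request: ImagingRequest, recent_studies: list[str]) -> list[str]:
    """Return the prompts to show the referrer before the request is accepted."""
    prompts = []
    default = RECOMMENDED_DEFAULT.get(request.indication)
    if default and default != request.modality:
        prompts.append(f"Recommended first-line option: {default}. "
                       "Choosing otherwise requires a stated reason.")
    if request.modality in recent_studies:
        prompts.append("A similar study was performed recently -- possible duplicate.")
    if request.modality.startswith("CT"):
        prompts.append("CT involves ionizing radiation; is ultrasound or MRI an option?")
    return prompts
```

The point is not the code but the placement: the recommendation, the duplicate flag, and the dose prompt appear at the moment of choice, and the referrer can still override them -- deliberately.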
Finally, both Kahneman and Thaler emphasize the importance of feedback in changing behavior. For learning to occur, feedback needs to be timely, personal, and specific. That is why we learn quickly not to put a finger into a flame but find it difficult to grasp our individual contribution to global warming.
Although there are obvious difficulties with the lost art of imaging vetting by intimidation (sighing deeply while tearing into small pieces the request card of a quaking junior doctor), it certainly provided the opportunity for immediate feedback and learning, especially if accompanied by a patiently delivered explanation.
Vetting undertaken remotely (spatially and temporally) means this feedback is diluted, making it much less likely that learning will occur and requesting behavior will alter. If departmental processes and electronic systems can be designed to allow prompt feedback to the requester that an imaging request has been rejected, or at least needs more discussion, this is much more likely to shift behavior and improve quality, even for an itinerant and ever-changing requester workforce.
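A minimal sketch of what timely, personal, and specific feedback might look like in an electronic system follows; the decision hook and the notify transport are assumptions, not features of any particular product.

```python
# Hypothetical immediate-feedback hook -- called at the moment of the vetting
# decision rather than in a weekly batch report.
from typing import Callable

def on_vetting_decision(request_id: str, requester: str, accepted: bool,
                        reason: str, notify: Callable[[str, str], None]) -> None:
    """Send the named requester prompt, specific feedback on a rejected request."""
    if not accepted:
        notify(requester,
               f"Request {request_id} was not accepted: {reason}. "
               "Reply to discuss, or amend and resubmit.")

# Usage with any transport (pager, chat, electronic patient record inbox):
# on_vetting_decision("REQ-123", "dr.jones", False,
#                     "no red flags documented for lumbar spine MRI",
#                     notify=send_message)  # send_message is hypothetical
```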
There are other ways in which radiology demand can be managed: imaging protocols (e.g., for cancer follow-up or surveillance) can be revised, financial disincentives can be created to suppress imaging use, and waiting lists can be reassessed and validated. Some of these methods are more acceptable clinically and ethically than others.
What human psychology and decision science tell us is that if we want to alter requesting behavior and culture, nudges, feedback, and narrative are more likely to get results than generic exhortations to reduce imaging use or to consider base rates and statistical probability.
There are simple wins to be had here, and the information technology systems needed to deliver these nudges and feedback need not be complex.
Dr. Chris Hammond is a consultant vascular radiologist and clinical director for radiology at Leeds Teaching Hospitals NHS Trust, Leeds, U.K.
The comments and observations expressed herein do not necessarily reflect the opinions of AuntMinnieEurope.com, nor should they be construed as an endorsement or admonishment of any particular vendor, analyst, industry consultant, or consulting group.