A series of articles on “right care”, published in the Lancet earlier this month, examines the extent of overuse and underuse of medical interventions and therapies worldwide, seeks to understand what drives inappropriate care, and proposes solutions.
All inappropriate care begins with a decision, whether subconscious or explicit, and it is to the origins of these decisions that the authors of the post below have turned their attention.
Professors Tammy Hoffmann and Chris Del Mar from the Centre for Research in Evidence-Based Practice at Queensland’s Bond University, recently conducted a systematic review of studies that assessed how good clinicians are at quantifying the benefits and harms of the interventions and treatments they are considering.
Their results suggest that clinicians, like their patients, are an optimistic bunch, with possible adverse consequences for “right care”.
Tammy Hoffmann and Chris Del Mar write:
Not long ago we systematically searched for and synthesised research that quantified how much benefit people (patients and the general public) estimated they would get from different treatments, screening tests, and diagnostic tests.
We also extracted information about people’s estimates of harm. Where possible we aimed to compare people’s expectations with the actual benefits and harms that are derived from research.
Patients’ optimistic expectations
The results were arresting: most people were excessively optimistic – they over-estimated the benefits and under-estimated the harms of medical care. This finding was broadly consistent across various tests, screens, and treatments; settings (primary care and hospitals); and countries.
Of course patients’ expectations about medical care are only one influence on the decision-making process: clinicians’ expectations are another. As we worked on our patient expectations reviews, we wondered – are clinicians better than their patients at accurately estimating the benefits and harms of medical care?
Last week we published our review of clinicians’ expectations: we screened 8,166 papers and found 48 that met our criteria. The included studies involved 13,011 clinicians (most were medical practitioners, although a few included pharmacists or nurses); came from 17 countries; and examined a range of treatments, tests, screening, and medical imaging.
The short answer to our question was: not really. Clinicians were poor at estimating the size of the benefits and harms of these interventions. Where benefits were estimated, the majority of participants were correct for only 11% (3/28) of the outcomes assessed. To put it another way, more than half of the clinician participants were inaccurate for 89% of the benefits they were asked to estimate. As for harms, the majority were correct for 13% of the outcomes (9/69), and hence inaccurate for 87%.
Moreover, in the studies that examined the direction of the inaccuracies, clinicians – rather like patients – tended to optimistically over-estimate the benefits of treatments and under-estimate their harms. Most clinicians overestimated benefit for 7 (32%) and underestimated benefit for 2 (9%) of the 22 benefit outcomes, while most underestimated harm for 20 (34%) and overestimated harm for 3 (5%) of the 58 harm outcomes.
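As a quick sanity check, the percentages quoted above follow directly from the counts of outcomes reported in the review (this is only an illustrative sketch of the arithmetic, using the counts exactly as given in the text):

```python
# Sanity-check the proportions reported in the review.
# Counts are taken from the figures quoted above: the number of outcomes
# for which the majority of clinicians were accurate, or erred in a
# particular direction, out of the total outcomes assessed.

def pct(numerator, denominator):
    """Return a proportion as a percentage, rounded to a whole number."""
    return round(100 * numerator / denominator)

# Accuracy: outcomes where the majority of clinicians estimated correctly
print(pct(3, 28))   # benefit outcomes correct  -> 11 (%)
print(pct(9, 69))   # harm outcomes correct     -> 13 (%)

# Direction of error, in the studies that reported it
print(pct(7, 22))   # benefit overestimated     -> 32 (%)
print(pct(2, 22))   # benefit underestimated    -> 9  (%)
print(pct(20, 58))  # harm underestimated       -> 34 (%)
print(pct(3, 58))   # harm overestimated        -> 5  (%)
```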
Why the inaccuracies?
Some of the reasons are obvious: clinicians may simply never have been taught the size of the benefits and harms of the interventions they provide or refer patients for (which, if true, is rather disturbing).
Clinical education (for doctors especially, but perhaps for all clinicians) has tended to focus on how treatments work (the disease processes and the pathophysiological mechanisms of treatment), often at the expense of empirical knowledge (that is, what works, and by how much).
Or clinicians may have been taught empirical information and either forgotten it or not kept up to date with changes in knowledge. Keeping up to date is notoriously difficult: new clinical knowledge is growing exponentially, in unmanageable quantities, scattered over hundreds of different journals.
Less obvious causes include the ‘therapeutic illusion’ (described as an “unjustified enthusiasm for the treatment”) and the natural resolution of illness (or a statistical quirk called ‘regression to the mean’) being mistaken for the effect of something the clinician did. Then there is a host of possible cognitive biases that probably play a role, such as anticipated regret (worrying about missing a diagnosis or the chance to help with a potentially effective treatment) and commission bias (a tendency to want to do something for a distressed patient).
Finally, there are less palatable factors: medical-legal concerns (clinicians trying to protect themselves), and simple financial advantage (more return from providing tests and procedures than explaining why they may not be needed).
Inaccuracy impairs decision-making
Does this really matter? Yes, it does. Many clinical decisions occur in the ‘grey zone’, where the balance between benefits and harms is uncertain. Deciding how to proceed requires consideration of both benefits and harms. Yet harms are notoriously ignored – in randomised trials, systematic reviews, information from commercial sources and the media, and so on. A truly informed decision can’t be made if only benefits (or only harms) are considered.
‘Harms’ include not only side-effects or the risk of complications, but also things like cost, inconvenience to the patient, and impact on daily routines and responsibilities. And of course, decisions about the benefit-harm balance of interventions can’t be approached in a ‘one size fits all’ manner. Each patient will bring different preferences, values, and circumstances into the situation – all of which need consideration during the decision process.
In a nutshell, if both patients and clinicians are bringing inaccurate expectations about how much interventions can help or harm into the decision-making process, then the potential for ill-informed decisions is incredibly high.
Patients may end up being unnecessarily tested and treated and receive low-value care (overuse), or miss out on receiving effective interventions (underuse). In the series of articles about ‘right care’ published in The Lancet last week, patients’ and clinicians’ expectations were identified as contributors to both the overuse and the underuse of healthcare.
Towards ‘right care’
What’s the answer? This problem needs to be approached from many angles, with different causes (such as those listed above) needing different solutions.
One strategy is to encourage more widespread use of shared decision making, in which clinicians and patients collaborate to make decisions about health care. This relatively new approach explicitly seeks to ensure that patients have the best available estimates of the benefits and harms of all their options. It places the onus on the clinician to know this information and to be able to communicate it clearly to the patient. Indeed, shared decision making can be considered the final step of evidence-based practice.
Once both parties understand the benefits and harms, the clinician and patient can explore what each option means for the patient before arriving together at a decision about how to proceed. If it sounds idealistic to expect this in every consultation where it should occur, it probably is – at least for now. But that doesn’t mean we shouldn’t be striving for it to become the norm.
Shared decision making is increasingly viewed as an essential component of quality clinical care, and this has many policy, training, and resource implications (for example, how to use point-of-care tools to get accurate benefit and harm data to clinicians).
While shared decision making is not without its challenges, the alternative, in which health decisions are based on unrealistic expectations, cannot continue.
*Tammy Hoffmann is Professor of Clinical Epidemiology at the Centre for Research in Evidence-Based Practice, Faculty of Health Sciences and Medicine, Bond University, Gold Coast QLD. On twitter @. Chris Del Mar is Professor of Public Health at the Centre for Research in Evidence-Based Practice, Faculty of Health Sciences and Medicine, Bond University, Gold Coast QLD.