The idea of open and transparent outcome reporting has been around for a while now. Suggested drivers for such reporting have included service improvement and enhancing patient choice.
In this article, Dr Andrew Weatherall demonstrates why, to date, publicly available scorecard reporting has been met with scepticism by clinicians in Australia. In a nutshell, a focus on individuals’ negative outcomes does not necessarily improve care.
Measurement in a healthcare setting has a number of roles, but perhaps primary amongst them is driving service improvement. Purely reporting individual surgical or procedural complication rates has a limited role, if any, in this. Dr Weatherall suggests a number of ways forward, including reporting of positive health outcomes and better administrative support for clinicians. Given the nature of modern clinical practice, one wonders whether measuring team outcomes may also be of greater value in driving improvement.
Andrew Weatherall writes:
I should start with a disclosure. I don’t dig scorecards. Not so much scorecards which are sort of objective like on a football field. I’m more troubled by the scorecards you get where they’re trying to capture the ‘vibe’ of the thing.
That’s probably because I made the mistake of watching the equestrian during the Olympics once. Or it’s the result of struggling with the judges’ scores in the gymnastics where they sift out the form on the ‘flat bags’. It’s all a bit unsatisfactory.
Then when I think from the other angle, I can see that it’s probably pretty hard to separate two horses trotting diagonally. How do you decide which horse is doing the best job of moving unnaturally?
At least it would be pretty easy to hand out scores for doctors, right? It’d have to be easier than grading horses, gymnasts or even sports that involve nose clips, sequins and underwater breath-holding.
Surgeons and Artistic Merit
Well, recently in the US ProPublica published a “Surgeon Scorecard” (you can see the entry page here and the accompanying story here). They took (US) Medicare billing records for inpatient hospital stays for a variety of common elective procedures from 2009 to 2013. They then compiled a list of complications you might expect related to those operations and cross-checked it against individual surgeons’ results.
Then they put the complication rates out there. For 16,827 surgeons.
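To make the basic arithmetic concrete, here is a minimal sketch of a raw per-surgeon complication tally. The field names and data are invented for illustration, and this is deliberately the crude version, without any of the risk adjustment ProPublica describe in their methodology.

```python
# A toy illustration (not ProPublica's actual methodology): given procedure
# records that already flag whether a listed complication occurred, tally a
# raw complication rate for each surgeon. All field names are made up.

from collections import defaultdict

def complication_rates(records):
    """records: iterable of dicts like
    {"surgeon_id": "A", "procedure": "knee_replacement", "complication": True}
    Returns {surgeon_id: (complications, cases, rate)}."""
    tallies = defaultdict(lambda: [0, 0])  # surgeon_id -> [complications, cases]
    for r in records:
        tallies[r["surgeon_id"]][1] += 1
        if r["complication"]:
            tallies[r["surgeon_id"]][0] += 1
    return {
        surgeon: (comps, cases, comps / cases)
        for surgeon, (comps, cases) in tallies.items()
    }

# Two surgeons with very different case mixes end up compared on the same
# raw rate, which is exactly the problem critics point to.
sample = [
    {"surgeon_id": "A", "procedure": "knee_replacement", "complication": False},
    {"surgeon_id": "A", "procedure": "knee_replacement", "complication": True},
    {"surgeon_id": "B", "procedure": "knee_replacement", "complication": False},
]
print(complication_rates(sample))
```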
That’s good, right? Having information out there should help. Knowing complication rates would give patients data that’s important to their health care. Having information published should give surgeons an incentive to improve practice.
Well, not everyone is excited by these scorecards.
In fact, there are those who suggest that reporting on surgical complication rates to improve patient health care is a good way to erode good patient health care.
Does it make sense yet?
Dropping the Technical Difficulty
The counter-argument is pretty much summed up in this piece from the New York Times and this commentary from Medscape. One argument levelled against scoring systems is that, to avoid bad ratings, surgeons or other proceduralists are driven to sift out the risky cases. Walking past those cases is perceived to be a way to walk to a better rating. Higher-risk cases, on the other hand, guarantee more complications and a worse rating. Healthcare distortion by measurement.
It’s not that those with the texta and white cardboard in hand don’t recognise there are different risks in different situations. ProPublica have a discussion of their methodology posted alongside the other pages. Nobody disputes they’ve made an effort, but proceduralists don’t always feel that enough allowance is made for all the other factors that might impact on complication rates.
It’s not just an overseas phenomenon either. The NSW Bureau of Health Information has recently released a report in its Spotlight series looking at readmission to acute care after any one of a range of initial health admissions. These were admissions for conditions such as heart attacks, strokes, heart failure and pneumonia, as well as different varieties of orthopaedic surgery. The authors have attempted to factor in all the other things that might impact on patient health and lead to a need for readmission. Age, sex, comorbidities, private vs public hospital and socioeconomic status all get a look in. Just listing those gives some sense of how complex the measurement is. Whether they’ve got those relative risk considerations right will no doubt be up for scrutiny.
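For a sense of what “factoring in” those patient characteristics can look like in practice, here is a simplified sketch of one common approach (indirect standardisation using a logistic regression). The column names are invented, and this is not necessarily the model the Bureau actually used.

```python
# A simplified sketch of risk-adjusted readmission rates via indirect
# standardisation. Column names are hypothetical; real reports use more
# sophisticated (often hierarchical) models.
import pandas as pd
import statsmodels.formula.api as smf

def risk_standardised_rates(df):
    """df columns (all hypothetical): hospital_id, readmitted (0/1), age,
    sex, comorbidity_count, hospital_type, seifa_quintile."""
    # Model each patient's readmission risk from their characteristics.
    model = smf.logit(
        "readmitted ~ age + C(sex) + comorbidity_count"
        " + C(hospital_type) + C(seifa_quintile)",
        data=df,
    ).fit(disp=0)
    df = df.assign(expected=model.predict(df))

    # Compare each hospital's observed readmissions with what the model
    # expected given its patient mix, then scale by the overall rate.
    overall_rate = df["readmitted"].mean()
    by_hospital = df.groupby("hospital_id").agg(
        observed=("readmitted", "sum"),
        expected=("expected", "sum"),
    )
    by_hospital["risk_standardised_rate"] = (
        by_hospital["observed"] / by_hospital["expected"] * overall_rate
    )
    return by_hospital
```

The point of a sketch like this isn’t the particular model; it’s that a hospital (or surgeon) only looks “worse than expected” relative to the patients it actually treated, which is exactly the allowance proceduralists worry isn’t being made well enough.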
Even more significant is the use of data that is incomplete or incorrect. The Medscape piece even notes that the author’s own hospital has a scorecard entry for a cardiologist performing a knee replacement. This is not a thing. It’s hard enough to balance contributors to risk well, but surely it’s impossible when you’re working off data that isn’t actual data.
Who gets the score right?
So are scorecards the way forward or do they just leave the highest risk patients at risk of not getting care? Should this reporting happen or not?
Well, of course open reporting has to happen. Somehow. The balance might be hard to get right, but reporting and understanding the things that can be done better is kind of what healthcare is about. And for all the astonishment that comes with medical advances, some of the biggest gains actually come from doing simple things better.
Here’s an example. Infections in the sorts of cannulas that go into the larger veins of the body are a major potential source of harm. That sort of infection can cause big problems and can be associated with patient death. Over the last couple of decades it has become apparent that these infections aren’t something we have to put up with either.
If you read through this Vox piece (based on US data and stories), one of the more interesting points is that sometimes when you examine a problem the solution can be pretty simple: a five-point checklist instead of a list of 90 things to consider. Giving health facilities an incentive to introduce those simple steps, and putting their complication rates under public scrutiny, has been part of reducing these infections. Public reporting can be part of good changes.
Better Scoring
It still has to be the right information though. The answer is not to axe the idea of scorecards but to report information in a way that is accurate and doesn’t influence care in the wrong way. And it seems like too often the reporting is confined to complications, not success rates at pulling off good results. Is it because positive health statistics take more effort to track?
Then there’s the other big story in tracking health information. For frontline workers it doesn’t always seem like we have the space to prioritise information gathering in our day-to-day work. We don’t necessarily get the support to design the reporting systems we’d like. Every time health hits the political pages you expect to see something about “frontline staff”. Apparently you can never threaten the numbers of frontline staff. If you work in an office, you are fair game.
The thing is, frontline staff need adequate time and support to do the other work: it’s the time spent doing stuff other than direct patient care that allows us to look at information, update practice, get education done and fix up policies.
Caring for patients better doesn’t always involve the magic paddles with the shouting of “clear” or machines that go bing. And every time someone in a support role is taken away, clinicians take up general admin work and have less time to do the other stuff. Less time to look at the markers that matter.
Better measurement needs better support. When do we introduce a scorecard for whether healthcare facilities do that?
This post was originally published on Dr Weatherall’s blog, The Flying PhD.
Just a couple of bits of context for the comment about support. Most professional bodies would suggest clinicians need at least 25% of their time for the “not face-to-face” stuff. Most clinicians I know (and yes, I understand that methodology has flaws) feel lucky if they get 15-20% of their time. In my own spot, we share one clerical staff member between the equivalent of 20 full-time doctors (actually about 30 individual doctors). That’s about enough to keep the day-to-day stuff going, but these broader projects require time and resources.
I should also make it clear that it shouldn’t just be expected that hospitals or health care facilities will rain these resources down from on high. Clinicians of all types should probably get serious about pushing for them.