The performance of hospitals is again in the news, thanks to the Australian Institute of Health and Welfare’s release today of Australian Hospital Statistics 2009-2010. You can download the full report here, and the Institute’s own summary is reproduced at the bottom of this post.
It seems, on an admittedly quick reading, that the bulk of the report’s focus is on throughput and process, rather than on safety and quality of care or patient outcomes. I don’t know about you, but if I were contemplating elective surgery, I’d be at least as interested in my chances of picking up an infection in hospital as in how long I’d have to wait for the operation.
Meanwhile, Philip Davies, Professor of Health Systems and Policy at the University of Queensland, warns against reading too much into the measure that has already popped up in newspaper headlines – elective surgery waiting times.
***
Let’s move on from our obsession with surgery waiting times
Philip Davies writes:
Today sees the publication of AIHW’s latest report on Australian Hospital Statistics. Doubtless it will trigger another round of journalistic hand-wringing and governmental self-congratulation as we try to figure out how well our public hospitals are performing.
One statistic that will inevitably be the focus of attention is the increase in median waiting times for elective surgery from 32 days in 2005–06 to 36 days in 2009–10.
According to Adam Cresswell, writing in today’s Australian, that’s “putting the effectiveness of the federal government’s hospital rescue measures under renewed scrutiny”. Or does it merely confirm that “hospitals are targeting those patients who have been waiting the longest”, in a quote that Cresswell attributes to a spokesman for Minister Roxon?
It’s encouraging to see that the debate about access to elective surgery has moved from focusing on the size of the waiting list to how long people have to wait. The size of a waiting list is irrelevant unless we know the rate at which people are joining and leaving it.
But is our apparent obsession with waiting times really any better?
It’s long been acknowledged that the length of time someone will have to wait for surgery affects the likelihood that they will join a waiting list. Waiting time figures reflect the demand for surgery and not the need. There are no fixed criteria that dictate whether or not a patient should be referred for elective surgery.
Evidence suggests that GPs are less likely to refer patients to hospital when waiting times are longer, and are more likely, instead, to continue to manage their conditions in the primary care setting. That means waiting times tend to be ‘self-limiting’: as they increase, the apparent ‘demand’ for elective surgery falls which, in turn, means waiting times come back down again.
There are other factors at work too. A 2003 report (PDF alert) into waiting times in OECD countries suggested that longer waiting times might encourage more potential patients to use private hospitals, or drive public hospitals to make better use of available capacity: both factors that would reinforce the feedback loop from longer waits to an apparent drop in demand for public hospital surgery.
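To see why such a loop is self-limiting, here is a toy simulation (a purely illustrative sketch: the capacity, demand and sensitivity figures are invented, not drawn from the report or the OECD study):

```python
# Toy model of the 'self-limiting' waiting-time feedback loop described above.
# All parameter values are invented for illustration only.

def simulate(months=24, capacity=100, base_demand=110, sensitivity=0.5):
    """Each month, referrals fall as the queue (and hence the expected wait)
    grows, because GPs manage more patients in primary care instead."""
    queue = 0.0
    for month in range(1, months + 1):
        expected_wait = queue / capacity  # months until a new referral is treated
        referrals = max(0.0, base_demand - sensitivity * capacity * expected_wait)
        queue = max(0.0, queue + referrals - capacity)
        print(f"month {month:2d}: wait ~{expected_wait:4.2f} months, "
              f"referrals {referrals:6.1f}, queue {queue:6.1f}")

simulate()
```

In this toy model the queue settles where referrals equal theatre capacity, so the measured wait stabilises at a level that reflects referral behaviour rather than underlying need, which is one way of seeing why waiting times tell us so little about hospital performance.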
In short, variations in waiting times are largely meaningless as a measure of hospital performance. We should treat them with both caution and scepticism. Let’s hope they don’t feature too prominently in the new National Health Performance Authority’s performance indicators for public hospitals.
***
More detail on the report…
Below is the Institute’s summary of the report – interestingly, it makes no significant or explicit mention of safety and quality performance indicators, and neither does the press release.
Summary
There were 1,326 hospitals in Australia in 2009–10. The 753 public hospitals accounted for 67% of hospital beds (56,900) and the 573 private hospitals accounted for 33% (28,000); these proportions are unchanged from 2008–09.
Accident and emergency services
Public hospitals provided about 7.4 million accident and emergency services in 2009–10, increasing by 4% on average each year between 2005–06 and 2009–10. Overall, 70% of patients were seen on time in emergency departments, with 100% of resuscitation patients (those requiring treatment immediately) being seen within 2 minutes of arriving at the emergency department.
Admitted patient care
There were 8.5 million separations for admitted patients in 2009–10—5.1 million in public hospitals and almost 3.5 million in private hospitals. This was an increase of 3.2% on average each year between 2005–06 and 2009–10 for public hospitals, and 5.0% for private hospitals.
The proportion of admissions that were ‘same-day’ continued to increase, by 5% on average each year between 2005–06 and 2009–10, accounting for 58% of the total in 2009–10 (51% in public hospitals and 68% in private hospitals). For overnight separations, the average length of stay was 5.9 days in 2009–10, down from 6.2 days in 2005–06.
About 4% of separations were for non-acute care. Between 2005–06 and 2009–10, Rehabilitation care in private hospitals increased by 19% on average each year and Geriatric evaluation and management in public hospitals increased by 11% on average each year.
Readmissions to the same public hospital varied with the type of surgery. There were 24 readmissions per 1,000 separations for knee replacement and 4 per 1,000 separations for cataract surgery.
Elective surgery
There were 1.9 million admissions for planned (elective) surgery in 2009–10. There were about 30 separations per 1,000 population for public elective surgery each year between 2005–06 and 2009–10; rates for other elective surgery increased from about 49 per 1,000 to 55 per 1,000 over that time. Half of the patients admitted for elective surgery in public hospitals waited 36 days or less after being placed on the waiting list, an increase from 32 days in 2005–06.
Expenditure and funding
Public hospitals spent about $33.7 billion in 2009–10. Adjusted for inflation, expenditure increased by an average of 5.4% each year between 2005–06 and 2009–10. In 2008–09, states and territories were the source of 54% of funds for public hospitals and the Commonwealth government funded 38%. This compared with the figures of 54% and 39%, respectively, in 2007–08.
Between 2005–06 and 2009–10, public patient separations increased by 2.8% on average each year, those funded by private health insurance increased by 6.4%, while those funded by the Department of Veterans’ Affairs decreased by 1.3%.
***
Meanwhile, the AMA press release, titled “Public hospitals – not much bang for the big bucks”, says the report shows “the Government’s spending on public hospitals has delivered a very small return on a huge investment over four years”.
While we’re on the subject of “return on investment”, what about some serious analysis of health returns on the investment in the Medicare Benefits Schedule (and the earnings differential within medicine), or of the health returns on the investment in private health insurance incentives, or the relative health returns on investment in hospital spending versus primary health care spending versus population health interventions…
***
Update: Thanks to Lisa Ramshaw (see comments below) for pointing out the relevant section of the report (from p 29) re adverse events:
Performance indicator: Adverse events treated in hospitals
Adverse events are defined as incidents in which harm resulted to a person receiving health care. They include infections, falls resulting in injuries, and problems with medication and medical devices. Some of these adverse events may be preventable.
Hospital separations data include information on diagnoses, places of occurrence and external causes of injury and poisoning that can indicate that an adverse event was treated and/or occurred during the hospitalisation. However, other diagnosis codes may also suggest that an adverse event has occurred, and some adverse events are not identifiable using these codes.
In 2009–10, 4.9% of separations reported an ICD-10-AM code for an adverse event. The proportion of separations with an adverse event was 5.8% in the public sector and 3.7% in the private sector (Table 3.5). The data for public hospitals are not comparable with the data for private hospitals because their casemixes differ and recording practices may be different.

In the public sector, about 55% of separations with an adverse event reported Procedures causing abnormal reactions/complications and 34% reported Adverse effects of drugs, medicaments and biological substances.
In the private sector, about 71% of separations with an adverse event reported Procedures causing abnormal reactions/complications and 26% reported Complications of internal prosthetic devices, implants and grafts.
The data presented in Table 3.5 can be interpreted as representing selected adverse events in health care that have resulted in, or have affected, hospital admissions, rather than all adverse events that occurred in hospitals. Some of the adverse events included in these tables may represent events that occurred before admission. Condition onset flag information (see Appendix 1) could be used in the future to exclude conditions that arose before admission and to include conditions not currently used to indicate adverse events, in order to provide more accurate estimates of adverse events occurring and treated within single episodes of care.
Performance indicator: Unplanned/unexpected readmissions within 28 days of selected surgical admissions
‘Unplanned or unexpected readmissions after surgery’ are defined as the number of separations involving selected procedures where readmission occurred within 28 days of the previous separation, that were considered to be unexpected or unplanned, and where the principal diagnosis related to an adverse event (see above). The measure is regarded as an indicator of the safety of care. It could also be regarded as an indicator of effectiveness of care; however, the specifications identify adverse events of care as causes of readmission, rather than reasons that could indicate effectiveness.
Rates of unplanned or unexpected readmissions were highest for Hysterectomy (31 per 1,000 separations) and Prostatectomy (30 per 1,000) (Table 3.6). For Cataract extraction, fewer than 4 in 1,000 separations had a readmission within 28 days.

Interesting – why, I wonder, is the rate of unplanned readmission so much higher after prostatectomy and hysterectomy? How do these rates compare with other common procedures not mentioned in the table? And why are people more likely to have an unplanned readmission after knee replacement than after hip replacement? Do we know if these indicators are improving?
The detailed report does cover safety & quality indicators, including infections and falls. Of the 10 indicators of adverse events where public and private hospitals are directly compared, the rate is lower (better) in private hospitals for all 10.
The report (and the press release) also makes it clear that over the last five years, growth in elective surgery in private hospitals has outstripped that in public hospitals. While Crikey downplays waiting lists and waiting times as measures of system performance (which raises the question: why do we measure them?), imagine how many more people would be waiting for life-saving or life-improving surgery were it not for the contribution of private hospitals in treating 3.5 million patients each year.
Last year the Productivity Commission found that private hospitals were less costly (on a like-for-like basis), had a more complex casemix and performed better on comparable safety & quality data than public hospitals. There can be no doubt that on this basis, the investment in private health insurance incentives provides a better return than the billions of dollars of blank cheques currently being written for public hospitals.
Lisa Ramshaw, Australian Private Hospitals Association
Thanks Lisa,
I did see the falls table (p 346 for interested readers) but couldn’t find hospital-acquired infections – though it was admittedly a very quick scan on my part. If you’ve a page number for these, please let us know. Cheers, Melissa
Hi Melissa,
Take a look at table 3.5 – Separations with an Adverse Event on page 30.
Cheers,
Lisa
Thanks, I’ve updated the post. If you, Lisa, or other readers have any ideas re the questions posed at the end of the update, please let us know. Cheers, Melissa
A major impediment to public accountability is that bureaucrats, state and federal, refuse to tie down reporting and data definitions.
A patient in the “urgent” elective surgery category should apparently be treated within 30 days. But 30 days between when and when? Because the start of the wait is not clearly defined, hospital administrators interpret this as 30 days from when the clerk enters the referral into the system, regardless of when the clinical decision was made! If CEOs want a better chance of meeting their targets, their staff might hold onto the referral for, say, a week between the clinical decision and when it gets entered into the system. Thirty days is now 37. Some hospitals will not put an “urgent” patient onto the list at all until a surgery date is locked in.
The next rort (albeit smaller) is working out when the wait ends. If the patient actually got the awaited procedure, you’d be forgiven for thinking that the wait ends on the day of the operation. Not necessarily. If the patient was admitted before the day of surgery, the wait ends on the day of admission. Now your 30-day wait has blown out to 38 days but can still be legitimately reported as 30 days.
But wait. There’s more. If the patient looks like going over their recommended safe waiting time, a clever hospital administrator will deem that the patient was “not ready for care” (NRFC) for a period of time. There are no uniform clinical guidelines for deeming a patient to be NRFC. There is no requirement to provide the patient and their surgeon with written evidence that this decision has been made (or who made it, and why). So if the patient is at day 28 and theatre is booked solid for the next week, just make the patient NRFC for a week. They won’t mind. They won’t even know. Now the wait is 30 days, plus seven at the start, plus another seven days NRFC, plus another day between admission and surgery: a 45-day wait that can legitimately be reported as a 30-day wait. Surely there’s no other fudging that our redoubtable bureaucrats allow…
Well, in fact, there are numerous others. There is no uniformity required, from hospital to hospital and clinician to clinician, in what urgency category should be assigned. So if CEOs are struggling to meet targets, pressure will be brought to bear on waiting list managers and clinicians to assign a less urgent status to a patient than they really should have. Patients who really should wait no more than 30 days can end up hidden in the group who can wait up to 90 days.
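For readers who like to see the arithmetic laid out, the loopholes described above are easy to mechanise. A minimal sketch (all dates, durations and variable names are hypothetical, chosen only to reproduce the 45-day example above):

```python
from datetime import date

# Hypothetical illustration of how the loopholes described above let a
# 45-day actual wait be reported as a 30-day wait. All dates are invented.
clinical_decision = date(2010, 3, 1)   # surgeon decides surgery is needed
listed            = date(2010, 3, 8)   # clerk enters the referral a week later
nrfc_days         = 7                  # patient deemed 'not ready for care'
admitted          = date(2010, 4, 14)  # admitted the day before theatre
surgery           = date(2010, 4, 15)  # procedure actually performed

actual_wait   = (surgery - clinical_decision).days    # what the patient lived through
reported_wait = (admitted - listed).days - nrfc_days  # what the statistics record

print(f"actual wait:   {actual_wait} days")    # 45
print(f"reported wait: {reported_wait} days")  # 30
```

Every step uses a rule that is, on the commenter’s account, technically permitted, which is exactly the problem.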
Why would bureaucrats allow such obvious loopholes to fester? It’s a toxic combination of ego and inertia. Big government departments like inertia. They change slowly, and getting agreement on anything between nine health departments – six state, two territory and one federal – is a challenge. The other barrier to change is that the mandarins in government departments and hospitals are never going to willingly agree to changes that make their personal performance look worse. It is far easier and more palatable to keep things ambiguous and hide the problem from the public.
Hallelujah, Philip! The nonsense of the waiting list obsession spelt out. It is not just in Oz but a worldwide phenomenon – totally, utterly silly. Except, of course, if one is waiting. But that is the point, surely. There are two issues here: the patient’s experience and the hospital’s performance. Waiting times do matter for patients, and quite rightly. For hospital performance? No. What matters for hospital performance is output – quantity and quality, i.e. what hospitals do. The really daft thing about waiting lists and times is that they relate to what hospitals are failing to do!
I can’t think of any other sector of the economy that is judged on what it does not do.