Understanding Hospital Mortality Indicators
Paul Aylin, Clinical Reader in Epidemiology and Public Health
Dr Foster Unit at Imperial College London
Dec 2013
Contents
- Why monitor mortality?
- HSMR: effect of changing its construction
- What does the HSMR mean?
- The SHMI
- HSMR and SHMI compared
- Individual diagnosis/procedure-based SMRs
- Interpretation of hospital mortality figures
What to measure: process vs. outcome
The debate is still simmering.
The case FOR measuring outcomes
- It really matters to the patient
- A common endpoint in RCTs and other studies assessing treatments
- A 'hard', i.e. objective, measure
- Often readily available in routinely collected data
And the case AGAINST
- Casemix adjustment is needed for fair comparison, and it is difficult
- Can be affected by artefact, e.g. deaths post discharge, inter-hospital transfers
- All-cause mortality is not the same thing as preventable mortality
- Attribution: which hospital does the death belong to?
Hospital Standardised Mortality Ratio
- Originally developed by Brian Jarman
- Jarman et al. Explaining differences in English hospital death rates using routinely collected data. BMJ 1999;318:
- Covers the diagnoses leading to 80% of all in-hospital deaths (56 diagnosis groups)
- We use a set of 56 casemix adjustment models
Florence Nightingale Uniform hospital statistics would: “Enable us to ascertain the relative mortality of different hospitals as well as of different diseases and injuries at the same and at different ages, the relative frequency of different diseases and injuries among the classes which enter hospitals in different countries, and in different districts of the same country” Nightingale 1863
Purpose and rationale of (H)SMRs
- Many studies show a link between quality of care and mortality
- Case note review suggests around 5% of in-hospital deaths are preventable, but the proportion will vary by hospital
- These cannot be spotted in routine data, so we monitor all deaths
Hogan H, Healey F, Neale G, Thomson R, Vincent C, Black N. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf 2012;21(9):
Process vs. Outcome
Review of the relation between quality of care and death rates (36 studies, 51 processes of care):
- 26/51 processes: good care -> lower death rates
- 16/51 processes: no relation found
- 9/51 processes: good care -> higher death rates…
Pitches D, Mohammed MA, Lilford R. What is the empirical evidence that hospitals with higher-risk adjusted mortality rates provide poorer quality of care? A systematic review of the literature. BMC Health Serv Res 2007;7:91
HSMR construction
1. Build one regression model per diagnosis group
2. Attach a predicted risk of death to each admission, based on their age, sex etc.
3. Sum the predicted risks by diagnosis group to get the expected number of deaths E
4. Compare this with the observed number of deaths O
SMR for a diagnosis (CCS) group = O/E
HSMR = (sum of O over the 56 groups) / (sum of E over the 56 groups) x 100
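To make the arithmetic concrete, here is a minimal Python sketch (not the production Dr Foster code): it assumes each admission already carries a model-predicted risk of death, sums those risks into expected deaths E per diagnosis group, and forms the diagnosis-level SMRs and the overall HSMR. The records, group names and risks are invented for illustration.

```python
# Minimal sketch: aggregate per-admission predicted risks into SMRs and an HSMR.
from collections import defaultdict

# Hypothetical records: (dx_group, observed_death 0/1, model-predicted risk)
admissions = [
    ("pneumonia", 1, 0.12),
    ("pneumonia", 0, 0.05),
    ("stroke",    0, 0.20),
    ("stroke",    1, 0.25),
]

obs = defaultdict(float)   # observed deaths O per diagnosis (CCS) group
exp = defaultdict(float)   # expected deaths E = sum of predicted risks

for dx, died, risk in admissions:
    obs[dx] += died
    exp[dx] += risk

# Diagnosis-level SMR = O / E (x100 to put it on the same scale as the HSMR)
smr = {dx: 100 * obs[dx] / exp[dx] for dx in obs}

# HSMR = total observed / total expected across the diagnosis groups, x100
hsmr = 100 * sum(obs.values()) / sum(exp.values())
print(smr, hsmr)
```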
Interpretation
- 100 is the national average
- HSMR > 100 means more deaths than predicted
- O minus E is NOT the number of preventable deaths. Why not? Remember N, O, P, Q, R:
  - Noise in the data
  - Organisational factors – transfers, step-down care
  - Patient factors – unaccounted-for casemix
  - Quality of care – only after excluding the others
  - Random variation
Current casemix adjustment model for each diagnosis group
HSMRs are adjusted for:
- Age (<1, 1-4, 5-9, …, 85-89, 90+)
- Sex
- Elective status
- Area-level socio-economic deprivation (Carstairs)
- Diagnosis group and subgroups
- Comorbidity – Charlson index (switching to Elixhauser + dementia)
- Number of emergency admissions in the previous 12 months
- Palliative care (secondary diagnosis code or specialty)
- Year of discharge
- Month of admission
- Source of admission, e.g. patient's own home, other hospital, care home
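As an illustration of what one of the 56 casemix models might look like, here is a hedged Python sketch using statsmodels: a logistic regression of in-hospital death on a subset of the adjustment factors listed above (age band, sex, elective status, deprivation, Charlson score, prior emergency admissions, palliative care). The column names and the synthetic data are assumptions for the example, not the actual HES field names or the DFI model specification.

```python
# Sketch of one per-diagnosis-group risk model: logistic regression on a
# subset of the HSMR casemix factors, fitted to synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age_band":    rng.integers(0, 20, n),   # 5-year bands, <1 ... 90+
    "sex":         rng.integers(0, 2, n),
    "elective":    rng.integers(0, 2, n),
    "deprivation": rng.integers(1, 6, n),    # Carstairs quintile
    "charlson":    rng.integers(0, 10, n),   # comorbidity score
    "prev_emerg":  rng.integers(0, 4, n),    # emergency adms, prior 12 months
    "palliative":  rng.integers(0, 2, n),
})
# Synthetic outcome so the example runs end to end
logit_true = -4 + 0.15 * df.age_band + 0.2 * df.charlson
df["died"] = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

model = smf.logit(
    "died ~ C(age_band) + sex + elective + C(deprivation) + charlson"
    " + prev_emerg + palliative",
    data=df,
).fit(disp=0)

# Predicted risk of death for each admission in this diagnosis group;
# summing these gives the expected deaths E used in the SMR/HSMR.
df["pred_risk"] = model.predict(df)
print(df["pred_risk"].sum(), df["died"].sum())
```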
HSMRs with and without adjusting for Charlson
Palliative care – with and without adjustment
56 diagnosis groups HSMR vs. all 259 diagnosis groups
Excluding zero-day emergency survivors
An HSMR by month (random hospital)
…and the same by quarter: less fluctuation
The SHMI: Summary Hospital-level Mortality Indicator
- Owned by the NHS Information Centre
- Developed from the DH Mortality Technical Working Group
- Informed by limited further modelling commissioned by the NHSIC
- Under ongoing review
- Intended to be supported by the private sector, who help Trusts understand their data
SHMI v HSMR
Admissions and deaths included
- SHMI: all in-hospital deaths for inpatients and day cases, plus deaths within 30 days of discharge; all diagnosis supergroups except births and stillbirths
- HSMR: all in-hospital deaths for inpatients and day cases; the 56 diagnosis (CCS) groups accounting for 80% of deaths
Risk adjustment
- SHMI: age group; comorbidities; admission method; gender; discharge year (1-3); models based on just three years of data and rerun each quarter; diagnosis groups much broader
- HSMR: age group; comorbidities; admission method; gender; palliative care coding; diagnosis/procedure subgroup; deprivation; number of previous emergency admissions; discharge year; month of admission; source of admission; models based on 11 years of data; includes interaction terms between age and comorbidity
Assigning deaths
- SHMI: the last acute provider in the superspell
- HSMR: every provider in the superspell prior to death
Casemix
HSMR casemix adjustment seems to account for more variation: 30% more trusts lie above the 99.8% control limit for the SHMI.
Do we need both?
- Neither is "right" and neither is "wrong"
- Because they look at the situation from different perspectives, both are useful if you understand the perspective: "multiple lenses" (Berwick)
- Other quality measures are also needed
Reasons for high or low values
- Noise in the data
- Organisational factors – transfers, step-down care, on-site hospice
- Patient factors – unaccounted-for casemix
- Quality of care – only after excluding the others
- Random variation
What can HSMRs and SHMIs tell us?
- A high (or rising) value suggests a potential quality of care issue (Mid Staffs)
- Some of this could relate to other parts of the care pathway, e.g. primary care
- …or to a failure to submit all the data (e.g. no secondary diagnosis coding)
- A falling value suggests either a potential improvement in care or various artefacts
What can HSMRs and SHMIs not tell us?
- Whether there is definitely a problem in the hospital
- Whether the hospital is good/safe in all clinical areas
- The number of preventable deaths
- What to do next, other than "investigate"
Some suggestions for investigating your SHMI
- Split it by hospital site
- Break it down into diagnosis-specific SMRs
- Check the electronic data, then the paper records
- Look at place of death
- Pathways – transfers, LCP
- Weekday vs. weekend, elective vs. emergency
- Audit etc. as usual
Keogh Review
The 14 hospital trusts covered by the review were selected using national mortality measures as a "warning sign" or "smoke alarm" for potential quality problems. 11 of the 14 trusts were placed into special measures by Monitor and the NHS Trust Development Authority.
In summary
HSMR
- Summary figure, developed by Imperial and produced by DFI
- Observed/expected deaths for the 56 diagnosis groups accounting for 80% of all in-hospital deaths
- A screening tool, so there are various possible reasons for a high/low figure
- Not a measure of avoidable deaths
- Detailed breakdowns and analyses available
SHMI
- Summary figure produced by the Information Centre
- Also a screening tool and not a measure of avoidable deaths
- More limited casemix adjustment: diagnosis, age, sex, emergency/elective, Charlson comorbidity
- Some minor differences cf. HSMR (100% vs. 80% of admissions)
- Some larger differences cf. HSMR (out-of-hospital deaths; attribution of death to the final acute centre)
Detecting outliers “Even if all surgeons are equally good, about half will have below average results, one will have the worst results, and the worst results will be a long way below average” Poloniecki J. BMJ 1998;316:
Adjusted (EuroSCORE) mortality rates for primary isolated CABGs by centre (3 years data up to March 2005) using SCTS data with 95% and 99.8% control limits based on mean national mortality rates
Funnel plots
- No ranking
- Show the visual relationship with volume
- Take account of the increased variability of smaller centres
- …but not as useful for continuous surveillance, and less sensitive to sudden increases in mortality
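For illustration, here is a short Python sketch of how funnel-plot control limits of the kind shown above can be computed, using the normal approximation to the binomial around a national rate. The national rate, centre volumes and the z-values for the 95% and 99.8% limits are illustrative assumptions; real implementations may use exact binomial limits or adjust for overdispersion.

```python
# Sketch: funnel-plot control limits around a national mortality rate.
import numpy as np

p_national = 0.02                      # e.g. a 2% national mortality rate
volumes = np.arange(50, 2001, 10)      # centre volumes along the x-axis
se = np.sqrt(p_national * (1 - p_national) / volumes)

# Upper and lower limits as a function of volume (z = 1.96 and 3.09 give
# two-sided 95% and 99.8% limits under the normal approximation)
limits = {
    "95%":   (p_national - 1.96 * se, p_national + 1.96 * se),
    "99.8%": (p_national - 3.09 * se, p_national + 3.09 * se),
}

# A centre is flagged if its adjusted mortality rate lies above the upper
# 99.8% limit for its volume.
centre_n, centre_deaths = 400, 14
centre_rate = centre_deaths / centre_n
upper_998 = p_national + 3.09 * np.sqrt(p_national * (1 - p_national) / centre_n)
print(centre_rate > upper_998)
```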
Risk-adjusted log-likelihood CUSUM charts
STEP 1: Estimate the pre-operative/pre-admission risk for each patient, given their age, sex etc. The benchmark may be the national average or some other standard.
STEP 2: Order patients chronologically by date of operation.
STEP 3: Choose chart threshold(s) of acceptable "sensitivity" and "specificity" (via simulation).
STEP 4: For every patient, plot a function of their actual outcome against their pre-operative risk, and see whether – and why – the threshold(s) is crossed.
More details
- Based on a log-likelihood CUSUM designed to detect a predetermined increase in risk of interest
- Taken from Steiner et al. (2000); pre-operative risks derived from logistic regression of national data
- The CUSUM statistic is the log-likelihood ratio test statistic for binomial data, based on the predicted risk of the outcome and the actual outcome
- The model uses administrative data and adjusts for age, sex, emergency status, socio-economic deprivation etc.
Bottle A, Aylin P. Intelligent Information: a national system for monitoring clinical performance. Health Services Research (in press).
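A minimal Python sketch of a Steiner-type risk-adjusted CUSUM of the kind described above: each patient is scored by the log-likelihood ratio for a doubling of the odds of death (R = 2) versus performance as predicted, and the chart signals when the cumulative sum crosses a threshold. The patient risks, outcomes and threshold value here are invented for the example; in practice the threshold is chosen by simulation to fix the false alarm rate.

```python
# Sketch: risk-adjusted log-likelihood CUSUM (after Steiner et al. 2000).
import math

def cusum_weight(outcome, risk, odds_ratio=2.0):
    """Log-likelihood ratio score for one patient (H1: odds x R vs H0)."""
    denom = 1 - risk + odds_ratio * risk
    if outcome == 1:
        return math.log(odds_ratio / denom)
    return math.log(1.0 / denom)

def run_cusum(patients, threshold=5.0, odds_ratio=2.0):
    """patients: chronologically ordered (outcome, predicted_risk) pairs."""
    c, path = 0.0, []
    for outcome, risk in patients:
        c = max(0.0, c + cusum_weight(outcome, risk, odds_ratio))
        path.append(c)
        if c > threshold:          # signal: excess mortality vs predicted
            return path, True
    return path, False

# Example: mostly low-risk survivors followed by a run of unexpected deaths
patients = [(0, 0.05)] * 30 + [(1, 0.05)] * 8
path, signalled = run_cusum(patients)
print(signalled, round(path[-1], 2))
```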
How do you investigate a signal?
Imperial College mortality alerts
- Look at alerts generated at a 0.1% statistical false alarm rate (the default in Real Time Monitoring is 1%)
- Write to trusts showing a doubling of the odds of death over the previous 12 months
Francis report 2013
Recognised the role that our work on HSMRs and our surveillance system of mortality alerts had to play in identifying Mid Staffs as an outlier. (Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry, Volume 1)
"All healthcare provider organisations should develop and maintain systems which give effective real-time information on the performance of each of their services, specialist teams and consultants in relation to mortality, patient safety and minimum quality standards." (Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry, Executive Summary, Recommendation 262)
"Summary hospital-level mortality indicators should be recognised as official statistics." (Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry, Executive Summary, Recommendation 271)
Further reading
Bottle A, Jarman B, Aylin P. Hospital Standardised Mortality Ratios: sensitivity analyses on the impact of coding. Health Serv Res 2011;46(6):
Bottle A, Jarman B, Aylin P. Hospital Standardised Mortality Ratios: Strengths and Weaknesses. BMJ 2011;342:c7116
Campbell MJ, Jacques RM, Fotheringham J, Maheswaran R, Nicholl J. Developing a summary hospital mortality index: retrospective analysis in English hospitals over five years. BMJ 2012;344:e1001