
1 Selecting measures for purchasing: Quality measurement Patrick S. Romano, MD MPH UC Davis School of Medicine Washington State Conference on Quality-based Purchasing December 4, 2006

2 Overview (Romano, 12/4/2006)
• Types of quality indicators
• Strengths and limitations of different types of quality indicators
• Potential evaluation criteria
• Examples from the field
• Consider unintended consequences
• Conclusions and recommendations

3 Definitions of quality
• Donabedian (1980): “The quality of medical care (is)…the management that is expected to achieve the best balance of health benefits and risks…(taking) into account the patient’s wishes, expectations, valuations, and means…(and) the distribution of that benefit within the population.”
• Institute of Medicine (1990): “Quality of care is the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.”
• Brook and McGlynn (1991): “High quality care…produces positive changes, or slows the decline, in health; low quality care fails to prevent or actually accelerates a decline in a person’s health.”

4 Types of quality measures (Donabedian, 2003)
• Structure: the conditions under which care is provided
– Material resources (facilities, equipment)
– Human resources (ratios, qualifications, experience)
– Organizational characteristics (size, volume, systems)
• Process: the activities that constitute health care (adherence to guidelines)
– Screening and diagnosis
– Treatment and rehabilitation
– Education and prevention
• Outcome: changes attributable to health care
– Mortality, morbidity (complications, readmissions), functional status
– Knowledge, attitudes, and behaviors
– Satisfaction (including patient experiences)

5 Structural measures for QBP: Background and Problems
• Structural measures are enabling factors that make it easier (or harder) for professionals to provide high-quality care
• They usually explain little of the observed variability in processes and outcomes
• Few randomized trials, so causal relationships are often unclear: do better structures lead to better processes, or do better processes create demand for different structures (e.g., selective referral, CPOE)?
• Often easy to measure, but hard to modify and even harder to evaluate; few randomized intervention studies

6 Structural measures for QBP: Implications
• Structural indicators should be viewed as markers or facilitators of quality rather than as true measures
• QBP programs have relied on structural indicators when acceptable process or outcome measures were not yet available (a transitional practice to avoid a “free ride”)
• Focus on structural indicators that are modifiable (e.g., accreditation, training of key physicians)
• Avoid structural indicators for which hasty implementation may lead to worse outcomes (e.g., CPOE)
• Use non-modifiable measures only if you are willing to close down organizations that cannot change

7 Han YY et al. Pediatrics 2005;116:1506-1512 (© 2005 American Academy of Pediatrics). Fig 1: Observed mortality rates (presented as a normalized % of predicted mortality) during the 18-month study period, plotted by quarter of year.

8 “Role of computerized physician order entry systems in facilitating medication errors”

9 Example error types
• Entering order for wrong patient due to interruption
• Delays in orders when patients not yet entered into system
– One fatal example reported in previous JAMA piece
• Incorrect default dosing or protocol
• Overloading users with alerts and reminders for completeness
– Ignoring/over-riding all alerts and requests
• Medications discontinued without clinicians being aware
Koppel et al. Role of CPOE in facilitating medication errors. JAMA 2005; Ash J et al. Unintended consequences of IT in health care. J Am Med Inform Assoc 2004.

10 Process measures for QBP: Background
• Process measures are directly actionable by health care providers (“opportunities for intervention”)
• Process measures are highly responsive to change
• Process measures have generally been tested and validated (or could be validated) in randomized controlled trials
• Process measures provide the pathways by which QBP leads to improved patient outcomes

11 Process measures for QBP: Problems
• Often costly or difficult to collect
– Pharmacy/laboratory utilization (complete data capture?)
– Chart review (information bias?)
– Patient surveys (recall bias?)
– Participant observation (Hawthorne effect?)
– Provider surveys/vignettes (social desirability bias?)
– Simulated patients (reliability?)

12 Process measures for QBP: Problems
• Implicit process measures often lack reliability
– Multiple peer reviewers are required (at least 5?)
– Unblinded reviewers are biased by adverse outcomes
– May not be actionable (too global)
– Evaluation criteria may be context-specific
• Explicit process measures often lack validity
– Are they really evidence-based (vs. “expert opinion”)?
– Some processes that seem important may NOT be
– Many important processes have not yet been recognized
– Measures may not generalize across settings

13 Process measures for QBP: Implications
• Process measures rely on exclusions instead of risk-adjustment, so exclusions should be clearly defined
– “Patients with moderate-to-severe asthma should not receive beta-blocker…” (but how is that defined?)
• The validity of process measures depends on evidence, so focus on measures with a strong evidence base
– “Proportion of eligible registry patients who have documentation…of a physician…statement regarding the patient’s symptoms that coincides with NAEPP terminology…” (who cares?)
• Focus on actual care rather than documentation of care, and establish systems for auditing data or ensuring data accuracy
• Process measures tend to be provider-centered, so consider including user-centered measures as well
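The denominator logic above can be made concrete. A minimal sketch of computing a process-measure rate with defined eligibility and exclusion rules — the patient fields and the asthma exclusion are illustrative assumptions, not taken from any named measure specification:

```python
# Sketch: a process measure rate is numerator / (eligible - excluded).
# Field names and the exclusion rule here are illustrative assumptions.

def process_rate(patients, eligible, excluded, numerator):
    """Return (rate, denominator size) for a process measure."""
    denom = [p for p in patients if eligible(p) and not excluded(p)]
    if not denom:
        return None, 0
    hits = sum(1 for p in denom if numerator(p))
    return hits / len(denom), len(denom)

patients = [
    {"dx": "AMI", "beta_blocker": True,  "severe_asthma": False},
    {"dx": "AMI", "beta_blocker": False, "severe_asthma": True},   # excluded
    {"dx": "AMI", "beta_blocker": False, "severe_asthma": False},  # a "miss"
    {"dx": "CHF", "beta_blocker": False, "severe_asthma": False},  # not eligible
]

rate, n = process_rate(
    patients,
    eligible=lambda p: p["dx"] == "AMI",
    excluded=lambda p: p["severe_asthma"],    # the contested exclusion
    numerator=lambda p: p["beta_blocker"],
)
# 2 eligible, non-excluded AMI patients; 1 received a beta-blocker -> rate 0.5
```

How "severe_asthma" is operationalized changes who falls out of the denominator, which is exactly why the slide asks for exclusions to be clearly defined.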

14 Process measures for QBP: Promise and Potential
• The cost of collecting process measures can be reduced with clinical information systems:
– Electronic medical records
– Linked pharmacy and laboratory claims
• Patient surveys now reliably measure patient-centered processes of care, such as education, communication, and pain management
• The evidence-based medicine paradigm has led to greater reliance on RCTs and systematic reviews to identify useful processes of care

15 Outcome measures: Background
• Outcomes are what really matter to patients, families, and communities
• Outcomes are intrinsically meaningful and generally easy to understand
• Outcomes reflect not just what was done but how well it was done (which is hard to measure directly)
• Outcomes may be ascertainable using administrative data, if such data exist

16 Outcome measures: Problems
• Data systems depend on reporting by provider organizations
• Morbidity measures tend to be documented and reported inconsistently (poor physician documentation and/or coding)
• Mortality measures may be confounded by variation in use of observation units, inter-hospital transfers, and LOS
• Severity of illness varies widely across providers; most existing data systems capture little of this variation
• Many adverse outcomes are rare or delayed (e.g., little short-term responsiveness to change, lots of random noise)
• Are outcomes sufficiently under providers’ control?

17 Outcome measures: Promise and Potential
• Internal and external (e.g., vital statistics) data linkages may minimize confounding due to variation in transfer rates and LOS
• Many states now capture data from emergency departments (EDs) and/or ambulatory surgery centers; readmissions can be identified using Medicare data or linked state data
• Some data sets (NY, CA, soon FL) distinguish comorbidities from complications, or add “clinical” data elements (e.g., “key clinical findings” in PA; DNR in CA and NJ)
• Mail/telephone patient satisfaction surveys (CAHPS, H-CAHPS) have been developed and validated
• Some outcomes monitoring systems now have clear definitions, detailed guidance for data collectors, and external auditing

18 Outcome measures for QBP: Implications
• Outcome measures rely on risk-adjustment, so methods should be open (not “black-box”) and validated
• The utility of outcome measures depends on the existence of treatments that work, so focus on measures with a strong evidence base from prior intervention studies
• Outcome measures are relatively easy to game (by not reporting complications or over-reporting comorbidities), so focus on “harder” outcomes and establish systems for auditing data or ensuring data accuracy
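One common open (non-black-box) risk-adjustment approach is indirect standardization: compare observed events to the number expected given each patient's model-predicted risk. A sketch under that assumption — the predicted risks and counts below are made up for illustration, and a real application would take risks from a published, validated model:

```python
# Indirect standardization sketch: O/E ratio and risk-adjusted rate.
# Predicted risks would come from a validated model; numbers are illustrative.

def risk_adjusted_rate(observed_deaths, predicted_risks, reference_rate):
    """Return (O/E ratio, O/E times the reference population rate)."""
    expected = sum(predicted_risks)       # E = sum of patient-level risks
    oe = observed_deaths / expected       # O/E > 1 means worse than expected
    return oe, oe * reference_rate

predicted = [0.02, 0.10, 0.05, 0.03]      # model-based risks for 4 patients
oe, adj = risk_adjusted_rate(observed_deaths=1,
                             predicted_risks=predicted,
                             reference_rate=0.04)
# E = 0.20, O/E = 5.0, adjusted rate = 0.20
```

Because every step (the risk model, E, and the O/E arithmetic) is inspectable, providers can audit exactly why their adjusted rate came out as it did.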

19 Consider processes and outcomes together
• Integrating outcome and process measures provides a more complete assessment of quality and avoids perverse incentives
• Agreement among process and outcome measures confirms the validity of each
• Disagreement suggests bad data (information bias), unmeasured severity of illness (confounding bias), selection factors, or an incorrect conceptual model linking processes and outcomes

20 Selecting quality measures for QBP: Evaluation criteria (NQF and others)
• Importance or relevance
• Scientific acceptability or soundness
• Usability
• Feasibility
Note that all of these criteria may depend on local circumstances and priorities…

21 Evaluation criteria compared across NQF, IOM/NHQR, JCAHO, and NCQA: Importance/Relevance (roughly parallel phrasings from the different bodies grouped per line)
• Leverage point for improving quality / Impact on health / Targets improvement in the health of populations / Strategically important / Clinically important
• Meaningfulness to policymakers, consumers / Meaningful to consumers, purchasers, plans, providers
• Performance in the area is suboptimal / Potential for improvement
• Aspect of quality is under provider control* / Susceptibility to being influenced by health care / Under provider control / Controllable
• Considerable variation in quality of care exists / Variance among plans/providers
• Financially important

22 Estimating the impact of implementing Leapfrog hospital volume standards (NIS)
Birkmeyer et al., Surgery 2001;130:415-22
Volume indicator | RR mortality, LVH vs HVH | Patients at LVHs in MSAs | Potential lives saved by volume standards
CABG | 1.38 | 164,261 | 1,486
Coronary angioplasty/PCI | 1.33 | 121,292 | 345
AAA repair | 1.60 | 18,534 | 464
Carotid endarterectomy | 1.28 | 82,544 | 118
Esophagectomy | 3.01 | 1,696 | 168
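The "potential lives saved" column follows from simple arithmetic: if low-volume-hospital (LVH) mortality is RR times the high-volume-hospital (HVH) rate, the excess deaths among LVH patients are N × m_HVH × (RR − 1). The HVH mortality rate below (~2.4% for CABG) is an illustrative back-calculation from the table, not a figure stated on the slide:

```python
# Sketch of the volume-standard arithmetic. The assumed HVH mortality
# rate is illustrative, back-derived from the table's CABG row.

def potential_lives_saved(n_lvh_patients, rr, mort_hvh):
    """Excess deaths avoided if LVH patients had HVH mortality.

    LVH mortality = rr * mort_hvh, so excess = n * mort_hvh * (rr - 1).
    """
    return n_lvh_patients * mort_hvh * (rr - 1.0)

# CABG row: 164,261 LVH patients, RR 1.38, assumed HVH mortality 2.38%
saved = potential_lives_saved(164_261, 1.38, 0.0238)
# roughly 1,486 lives, consistent with the table's CABG estimate
```

The same arithmetic explains why esophagectomy, with few patients but a large RR, still yields a meaningful estimate.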

23 Leapfrog Hospital Rewards Program: Focused clinical areas chosen to maximize commercial employer impact
• 5 of the 10 CFGs have NQF-approved measures collected by JCAHO
• Benchmarked against Medstat’s MarketScan, the 5 CFGs represent 33% of admissions and 20% of a commercial payer’s inpatient spending
• Potential savings from reduced complication and readmission rates

24 Estimating the impact of preventing each PSI event on mortality, LOS, charges (ROI)
NIS 2000 analysis by Zhan & Miller, JAMA 2003;290:1868-74
Indicator | Δ Mort (%) | Δ LOS (d) | Δ Charge ($)
Postoperative septicemia | 21.9 | 10.9 | 57,700
Postoperative thromboembolism | 6.6 | 5.4 | 21,700
Postoperative respiratory failure | 21.8 | 9.1 | 53,500
Postoperative physiologic or metabolic derangement | 19.8 | 8.9 | 54,800
Decubitus ulcer | 7.2 | 4.0 | 10,800
Selected infections due to medical care | 4.3 | 9.6 | 38,700
Postoperative hip fracture | 4.5 | 5.2 | 13,400
Accidental puncture or laceration | 2.2 | 1.3 | 8,300
Iatrogenic pneumothorax | 7.0 | 4.4 | 17,300
Postoperative hemorrhage/hematoma | 3.0 | 3.9 | 21,400

25 Estimating the impact of preventing each PSI event on mortality, LOS, charges (ROI), continued
NIS 2000 analysis by Zhan & Miller, JAMA 2003;290:1868-74
Indicator | Δ Mort (%) | Δ LOS (d) | Δ Charge ($)
Birth trauma | -0.1 (NS) | | 300 (NS)
Obstetric trauma, cesarean | -0.0 (NS) | 0.4 | 2,700
Obstetric trauma, vaginal w/out instrumentation | 0.0 (NS) | 0.05 | -100 (NS)
Obstetric trauma, vaginal w/ instrumentation | 0.0 (NS) | 0.07 | 220
Postoperative abdominopelvic wound dehiscence | 9.6 | 9.4 | 40,300
Transfusion reaction* | -1.0 (NS) | 3.4 (NS) | 18,900 (NS)
Complications of anesthesia* | 0.2 (NS) | | 1,600
Foreign body left during procedure† | 2.1 | 2.1 | 13,300
* All differences NS for transfusion reaction and complications of anesthesia in VA/PTF.
† Mortality difference NS for foreign body in VA/PTF.

26 RAND QA Tools: A comprehensive assessment of quality
N Engl J Med 2003;348(26):2635-45
• Selected 30 clinical areas representing about half of reasons people seek care
• Developed specific standards or indicators within each clinical area based on literature reviews
• Convened 45 experts nominated by specialty societies to evaluate proposed standards
• Sampled households from 12 metro areas around the nation
• Conducted telephone interviews (demographics, health history, some process measures)
• Obtained and abstracted medical records from all providers for the two years preceding the date of the telephone interview
• 79+45 measures translated to CPT/ICD-9-CM codes for use with billing data (Care Focused Purchasing initiative led by Mercer)

27 Potential for improvement may vary across diseases and treatments

28 Jha AK et al. N Engl J Med 2005;353:265-274. Potential for improvement may vary across regions and communities

29 Jha AK et al. N Engl J Med 2005;353:265-274. Which JCAHO/CMS Core Measures had the greatest variation across hospitals (for Medicare patients admitted with AMI) in January–June 2004?

30 Jha AK et al. N Engl J Med 2005;353:265-274. Which JCAHO/CMS Core Measures had the greatest variation across hospitals (for Medicare patients with CHF or pneumonia) in January–June 2004?

31 Williams SC et al. N Engl J Med 2005;353:255-264. What is the potential “value-added” from using an existing indicator for QBP? Trends for AMI and pneumonia at US hospitals, 7/02–6/04

32 Evaluation criteria compared across NQF, AHRQ/NHQR, JCAHO, and NCQA: Scientific Acceptability/Soundness (roughly parallel phrasings grouped per line)
• Well-defined and precisely specified / Precisely defined and specified / Precisely specified (under “Feasibility”)
• Reliable / Reliability (“stable results”) / Reliable (“identify consistently”) / Reproducible
• Valid (“accurately representing the concept”) / Validity (“measure what it is intended to measure”) / Valid (“capture what it was intended to measure”) / Valid (face, construct, content)
• Precise, adequate discrimination / Accurate (“reasonable level of precision”)
• Adaptable to patient preferences and variety of settings
• Comparability of data sources
• Adequate, specified risk-adjustment / Risk-adjusted or stratified (if needed) / Risk-adjustable
• Evidence linking process measures to outcomes / Explicitness of the evidence base / Degree of professional agreement

33 Reliability of PSIs: hospital-level signal ratio
Source: 2002 State Inpatient Data. Average signal ratio across hospitals after risk-adjustment (N=4,428).
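The "signal ratio" is a reliability statistic: the share of between-hospital variance (the "signal") in a hospital's observed rate, where sampling noise for a rate p at volume n is roughly p(1−p)/n. A method-of-moments sketch of the idea — this is an illustration with made-up rates, not the AHRQ software's exact smoothing algorithm:

```python
# Signal-ratio sketch: reliability_i = tau2 / (tau2 + noise_i),
# with tau2 (between-hospital variance) estimated by method of moments.
from statistics import mean, pvariance

def signal_ratios(rates, volumes):
    p_bar = mean(rates)
    noise = [p_bar * (1 - p_bar) / n for n in volumes]   # sampling variance
    tau2 = max(0.0, pvariance(rates) - mean(noise))      # "signal" variance
    return [tau2 / (tau2 + v) for v in noise]

rates   = [0.02, 0.05, 0.03, 0.08, 0.01]   # observed PSI rates (illustrative)
volumes = [200, 1000, 500, 3000, 100]      # discharges per hospital
r = signal_ratios(rates, volumes)
# Larger hospitals get higher ratios: less of their rate is sampling noise
```

This is why signal ratios on the slide differ across PSIs: rare events and small denominators push the noise term up and the ratio down.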

34 Year-to-year correlation of hospital effects for PSIs
Source: 2001-2002 State Inpatient Data, hospitals with at least 1,000 discharges (N=4,428). Risk-adjusted unsmoothed rates.

35 Face (consensual) validity of PSIs: Clinical panel review
• Modified RAND/UCLA Appropriateness Method
• Physicians of various specialties/subspecialties, nurses, other specialized professionals (e.g., midwife, pharmacist)
• Potential indicators were rated by 8 multispecialty panels; surgical indicators were also rated by 3 surgical panels
• Pre-conference ratings, focused discussion, post-conference ratings
• All panelists rated all assigned indicators (1-9) on:
– Overall usefulness
– Likelihood of identifying the occurrence of an adverse event or complication (i.e., not present at admission)
– Likelihood of being preventable (i.e., not an expected result of underlying conditions)
– Likelihood of being due to medical error or negligence (i.e., not just lack of ideal or perfect care)
– Likelihood of being clearly charted
– Extent to which indicator is subject to case mix bias

36 Expert panel ratings of PSI “preventability”
a Panel ratings were based on definitions different from the final definitions. For “Iatrogenic pneumothorax,” the rated denominator was restricted to patients receiving thoracentesis or central lines; the final definition expands the denominator to all patients (with the same exclusions). For “In-hospital fracture,” panelists rated the broader Experimental indicator, which was replaced in the Accepted set by “Postoperative hip fracture” due to operational concerns.
b Vascular complications were rated as Unclear (-) by the surgical panel; the multispecialty panel rating is shown here.

37 International expert panel ratings of PSIs (Organization for Economic Cooperation and Development)

38 Can Solucient’s Expected Complication Rate Index be used for QBP? Criterion validity of Iezzoni’s Complications Screening Program
Med Care 2000;38:785-806, 868-76; Int J Qual Health Care 1999;11:107-18
CSP indicator | Coder: complication present (%) | RN: process problem identified (%)¹ | MD: complication present (%) | MD: quality problem confirmed (%)¹
Postprocedural hemorrhage/hematoma | 83 (surg), 49 (med) | 66 vs 46; 13 vs 5 | 57 (surg), 55 (med) | 37 vs 2; 31 vs 2
Postop pulmonary compromise | 72 | 52 vs 46 | 75 | 20 vs 2
DVT/PE | 59 (surg), 32 (med) | 72 vs 46; 69 vs 5 | 70 (surg), 28 (med) | 50 vs 2; 20 vs 2
In-hospital hip fracture and falls | 57 (surg), 11 (med) | 76 vs 46; 54 vs 5 | 71 (surg), 11 (med) | 24 vs 2; 5 vs 2
¹ Contrast between cases flagged with this CSP indicator and cases unflagged by any CSP indicator.

39 Criterion validity in CA hospital discharge data varies with different definitions of obstetric complications
Romano PS, et al. Obstet Gynecol 2005;106(4):717-725
Indicator | Sensitivity (unweighted) | Sensitivity (weighted) | PPV (unweighted) | PPV (weighted)
Former AHRQ PSI: Obstetric trauma, cesarean delivery | 11% | 5% | 67% | 94%
HealthGrades: major complications, vaginal delivery | 67% | 58% | 91% | 91%
HealthGrades: major complications, cesarean delivery | 55% | 47% | 64% | 79%
AHRQ/JCAHO: 3rd or 4th degree laceration | 90% | 93% | 90% | 73%
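The sensitivity and PPV figures in this table are simple functions of validation counts: sensitivity = TP/(TP+FN) against the gold standard (e.g., chart review), and PPV = TP/(TP+FP) among flagged cases. A sketch with illustrative counts, not the study's data:

```python
# Criterion-validity sketch: compare coded indicator flags to a
# chart-review gold standard (counts below are illustrative).

def sensitivity_ppv(tp, fp, fn):
    """Sensitivity = TP/(TP+FN); PPV = TP/(TP+FP)."""
    return tp / (tp + fn), tp / (tp + fp)

# e.g., 90 true events: 81 correctly flagged, 9 missed, plus 9 false flags
sens, ppv = sensitivity_ppv(tp=81, fp=9, fn=9)
# sensitivity = 0.9, PPV = 0.9
```

Note how the table's first row (sensitivity 11%, PPV 67%) describes a definition that misses most true events even though most of its flags are real, which is why definition changes move the two metrics in different directions.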

40 Construct validity based on literature review (MEDLINE/EMBASE)
Approaches to assessing construct validity:
– Is the outcome indicator associated with explicit processes of care (e.g., appropriate use of medications)?
– Is the outcome indicator associated with implicit processes of care (e.g., global ratings of quality)?
– Is the process indicator associated with a clinically meaningful outcome?
– Is the outcome (process) indicator associated with nurse staffing or skill mix, physician skill mix, or other aspects of hospital structure?

41 Summary of published construct validity evidence for PSIs (ratings shown as given, across explicit process, implicit process, and staffing evidence, where available)
• Death in low mortality DRGs: +
• Decubitus ulcer: ±
• Failure to rescue: +, +
• Postop hip fracture: +, +
• Postop hemorrhage or hematoma: ±, +
• Postop physiologic/metabolic derangements: –
• Postop respiratory failure: ±, +, ±
• Postop thromboembolism: +, +, ±
• Postop sepsis: –
• No ratings shown: Complications of anesthesia; Foreign body left during procedure; Iatrogenic pneumothorax; Selected infections due to medical care; Accidental puncture or laceration; Transfusion reaction; Postop abdominopelvic wound dehiscence; Birth trauma; Obstetric trauma (vaginal w/ instrumentation, vaginal w/out instrumentation, cesarean)

42 Developing data on accuracy and relevance: AHRQ PSIs in Children’s Hospitals
Sedman A, et al. Pediatrics 2005;115(1):135-145
PSI | No. reviewed (total events) | Preventable (PPV %) | Nonpreventable | Unclear
Complications of anesthesia | 74 (503) | 11 (15%) | 37 | 25
Death in low-mortality DRG | 121 (1282) | 16 (13%) | 89 | 16
Decubitus ulcer | 130 (2300) | 71 (55%) | 47 | 10
Failure to rescue | 187 (5271) | 15 (8%) | 148 | 11
Foreign body left in | 49 (235) | 25 (51%) | 14 | 10
Postop hemorrhage or hematoma | 114 (1571) | 40 (35%) | 51 | 23
Iatrogenic pneumothorax | 114 (1113) | 51 (45%) | 42 | 21
Selected infection 2° to med care | 152 (7291) | 63 (41%) | 45 | 39
Postop DVT/PE | 126 (1956) | 36 (29%) | 61 | 29
Postop wound dehiscence | 41 (232) | 19 (46%) | 16 | 6
Accidental puncture or laceration | 133 (4020) | 86 (65%) | 19 | 26

43 Do mortality measures have adequate discrimination? Minimum hospital volume to detect a mortality doubling (α=0.05, β=0.2)
Dimick, et al. JAMA 2004;292:847-851.
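The minimum-volume question is a standard power calculation: how many cases are needed to detect p₁ = 2p₀ against a baseline mortality p₀ at two-sided α=0.05 and β=0.2? A normal-approximation, one-sample sketch (Dimick et al.'s exact method may differ):

```python
# Sample size to detect a doubling of mortality, normal approximation:
#   n = (z_a*sqrt(p0*q0) + z_b*sqrt(p1*q1))^2 / (p1 - p0)^2
from math import ceil, sqrt
from statistics import NormalDist

def min_volume(p0, alpha=0.05, beta=0.2):
    p1 = 2 * p0                                  # "doubled" mortality
    z_a = NormalDist().inv_cdf(1 - alpha / 2)    # ~1.96 for two-sided 0.05
    z_b = NormalDist().inv_cdf(1 - beta)         # ~0.84 for 80% power
    num = (z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))) ** 2
    return ceil(num / (p1 - p0) ** 2)

# Rarer baseline deaths need far more cases to reveal a doubling:
# min_volume(0.04) -> 235 cases; min_volume(0.01) -> 979 cases
```

This is the slide's core point: for low-mortality procedures, few hospitals see enough cases for the mortality measure to discriminate at all.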

44 Evaluation criteria compared across NQF, IOM/NHQR, JCAHO, and NCQA: Usability (roughly parallel phrasings grouped per line)
• Can be used by at least one stakeholder audience for decision-making / Useful to supplement or enhance the accreditation process
• Performance differences are statistically meaningful / Performance differences are clinically meaningful
• Risk stratification or adjustment can be applied / Capacity to support subgroup analyses (under “Feasibility”)
• Effective presentation and dissemination strategies exist / Can be interpreted by data users / Information about appropriate conditions is given
• Methods for aggregating measure are defined

45 Evaluation criteria compared across NQF, IOM/NHQR, JCAHO, and NCQA: Feasibility (roughly parallel phrasings grouped per line)
• Point of data collection tied to care delivery, when feasible / Logistically feasible
• Timing and frequency of measure collection are specified
• Benefit of measurement is evaluated against financial and administrative burden / Cost and burden of measurement / Data collection effort is assessed (availability, accessibility, effort, cost) / Reasonable cost
• Auditing strategy is designed and can be implemented / Auditable
• Confidentiality concerns can be addressed / Confidential
• Public availability (access to measure construct and calculation algorithm)
• Existence of prototypes (in use)

46

47 Examples from the field
• For more information, go to the National Quality Measures Clearinghouse at http://www.qualitymeasures.ahrq.gov/
• What indicators have P4P programs used so far?
– Medicaid managed care in Wisconsin
– Premier Hospital Quality Incentive Demonstration
– NCQA’s Bridges to Excellence
– Integrated HealthCare Association (CA)
– Care-focused Purchasing coalition

48 Med-Vantage survey of P4P programs in 2003 and 2004

49 AHRQ Prevention Quality Indicators
Highlighted measures recommended for state Medicaid programs by FACCT
• Ambulatory care sensitive conditions (hospitalizations)
– Dehydration
– Bacterial pneumonia
– Urinary tract infection
– Angina
– Adult asthma/pediatric asthma
– Chronic obstructive pulmonary disease
– Congestive heart failure
– Diabetes (short-term and long-term complications, uncontrolled)
– Lower extremity amputation with diabetes
– Hypertension
– Pediatric gastroenteritis
• Other avoidable conditions
– Perforated appendix
– Low birth weight

50 Evaluating Medicaid managed care programs in Wisconsin

51 Centers for Medicare and Medicaid Services: Premier Hospital Quality Incentive Demonstration
• 33 measures for pay-for-performance within Premier, Inc.
• Started with 27 NQF-endorsed measures, 4 PSI-based measures
• Added CABG inpatient mortality and ASA at discharge, THA/TKA 30-day readmits; dropped use of IMA for CABG
• 266 hospitals accepted invitation to participate
• Hospitals performing in top two deciles received modest bonus payments (2%/1%) in Year 2
• Hospitals performing in bottom decile penalized in Year 3

52

53 NCQA’s Bridges to Excellence

54 CA’s Integrated Healthcare Association

55 CA’s Integrated Healthcare Association

56 Rosenthal MB et al. JAMA 2005;294:1788-1793.

57 Care-focused Purchasing (Mercer Human Resources Consulting)
• Clinical Quality: structure-, process-, and outcome-based measures of safety, effectiveness, timeliness, and equity
• Service Quality (patient experience): survey-based measures of patient experience and equity, i.e., timeliness, courtesy, respect, education, treatment options and risks, follow-up
• Efficiency: risk-adjusted, longitudinal average and best-practice total costs to achieve target levels of quality; comparisons among providers AND to other treatment options

58 Care-focused Purchasing Version 1.0
• Hospital Efficiency:
– Risk-adjusted “proxy cost” per admission for acute APR-DRGs
• Hospital Quality:
– CMS “voluntary” measurements
– Available JCAHO core measures
– Leapfrog Group measures
– State-specific hospital performance reporting programs
– AHRQ QIs where “warning label” is removed
– Medpar complication rates for 53 hospital service lines (CACR, CareScience)
• Physician Efficiency:
– Severity and risk-adjusted episode-based resource consumption (Symmetry ETG)
• Physician Quality:
– Compliance with evidence-based guidelines (ActiveHealth Management, RAND, or Resolution Health Inc.)
– NCQA’s Physician Recognition Programs (PRP) in cardiovascular disease, diabetes, and office systemness

59 Consider unintended consequences
• Quality-based purchasing is a potentially powerful tool to stimulate behavior change among providers
• You will get what you pay for – make sure that’s what you want!
• Perverse incentives to improve “measures” without actually improving quality of care (e.g., survival with poor quality of life, survival to discharge with death a week later, selection of low-risk patients, avoidance of high-risk patients, switching high-risk cases to uncovered settings)
• Perverse incentives to improve measured variables without improving unmeasured variables
• “Free ride” versus “Sisyphus syndrome” – keep “raising the bar,” but not too high too quickly

60 Early results of NHS reforms – Scotland
[Figure: distribution of practices (%) by total quality points scored, shown as a percentage of the maximum available points]

61 Measurement for quality-based purchasing: Conclusions and recommendations
• QBP aligns incentives so providers are motivated to do what’s right: improve quality, reduce disparities, improve IT and teamwork
• Select measures based on local priorities and available/obtainable surveillance data
• Consider your key audiences and objectives: how important is provider buy-in? How important is purchaser buy-in?
• Consider private feedback before public reporting and QBP
• Define and collect measures in a manner that earns the confidence of key stakeholders (definitions manual, auditing, monitoring undesirable consequences, maximizing transparency)
• Outcome measures: consider stratification/risk-adjustment
• Process measures: consider eligibility criteria

62 Measurement for quality-based purchasing: Designing a measure set
• Select enough measures to represent multiple domains of care, but not so many that providers are overwhelmed
• Weight measures according to importance – but think about how much effort will be required of providers
• Think incrementally – start small (where you can get the “most bang for the buck”), build up, and improve data quality as you go
• Don’t reinvent the wheel – use existing measures if possible, but be a pioneer if you need to
• Involve multiple stakeholders; listen to everyone
• Use more measures, cross-cutting measures, and pooled data for evaluation at the physician/practice level (vs. group/plan level)

63 “I think that I should warn you that the flip side of our generous bonus incentive scheme is capital punishment”

