Measuring and Reporting Patients’ Experiences with Their Doctors
Process, Politics and Public Reports in Massachusetts
Melinda Karp, MHQP Director of Programs
June 26, 2006
Today’s Objectives
– Provide brief background on MHQP as important context for measurement and reporting efforts
– Describe the evolution of the MHQP agenda for measuring and reporting patient experiences, including key methods questions in moving from research to large-scale implementation
– Describe stakeholder perspectives and decision points around key reporting issues
Stakeholders at the MHQP “Table”
Provider Organizations
– MA Hospital Association
– MA Medical Society
– 2 MHQP Physician Council representatives
Government Agencies
– MA EOHHS
– CMS Region 1
Employers
– Analog Devices
Health Plans
– Blue Cross Blue Shield of Massachusetts
– Fallon Community Health Plan
– Harvard Pilgrim Health Care
– Health New England
– Tufts Health Plan
Consumers
– Exec. Director, Health Care for All
– Exec. Director, NE Serve
Academic
– Harris Berman, MD, Board Chair
The Evolution of MHQP’s Patient Experience Measurement Agenda
– Demonstration project in partnership with The Health Institute (funded by the Commonwealth and RWJF)
– Development of a viable business model for implementing a statewide patient experience survey
– Fielding and reporting of the statewide survey
“1st Generation” Questions: Moving MD-Level Measurement into Practice
– Is there enough performance variability to justify measurement?
– What sample size is needed for a highly reliable estimate of patients’ experiences with a physician?
– How much of the measurement variance is accounted for by physicians, as opposed to other elements of the system (practice site, network organization, plan)?
– What is the risk of misclassification under varying reporting frameworks?
Sample Size Requirements for Varying Physician-Level Reliability Thresholds
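The sample-size requirements on this slide follow from the Spearman-Brown relationship between the single-patient intraclass correlation (ICC) and the reliability of an n-patient mean. A minimal sketch of that calculation; the ICC values are illustrative assumptions, not MHQP's estimates:

```python
# Sketch: patients per physician needed to reach a target reliability,
# via the Spearman-Brown formula. ICC values are illustrative only.
import math

def n_required(icc: float, target_reliability: float) -> int:
    """Surveys per physician so the physician-level mean hits the target.

    Spearman-Brown: R(n) = n*icc / (1 + (n-1)*icc). Solving for n gives
    n = R(1-icc) / (icc(1-R)).
    """
    n = target_reliability * (1 - icc) / (icc * (1 - target_reliability))
    return math.ceil(n)

for icc in (0.05, 0.10, 0.15):          # hypothetical physician-level ICCs
    for target in (0.70, 0.80, 0.90):   # reliability thresholds
        print(f"ICC={icc:.2f}, target R={target:.2f}: "
              f"{n_required(icc, target)} surveys/physician")
```

The output shows the driver behind the chart: required per-physician sample size grows quickly as the ICC shrinks or the reliability threshold rises.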
Allocation of Explainable Variance: Doctor-Patient Interactions
[Chart: measures shown are Communication, Whole-person orientation, Health promotion, Interpersonal treatment, and Patient trust]
Source: Safran et al. JGIM 2006.
Allocation of Explainable Variance: Organizational/Structural Features of Care
Source: Safran et al. JGIM 2006.
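The two variance-allocation charts decompose score variance into nested system levels (physician, site, network, plan). A toy sketch of that bookkeeping, with made-up variance components; the real estimates are in Safran et al. JGIM 2006:

```python
# Sketch: turning nested variance components into the "allocation of
# explainable variance" shown on the preceding slides. The component
# values below are invented for illustration.
components = {
    "physician": 0.040,   # between-physician variance (within site)
    "site":      0.015,   # between-site variance (within network)
    "network":   0.008,   # between-network variance (within plan)
    "plan":      0.002,   # between-plan variance
}
explainable = sum(components.values())  # excludes patient-level residual

for level, var in components.items():
    share = 100 * var / explainable
    print(f"{level:>9}: {share:5.1f}% of explainable variance")
```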
Risk of Misclassification
Source: Safran et al. JGIM 2006; 21.
MHQP 2005 Statewide Survey
– Physician-level survey format
– Site-level sampling to support site-level reporting
– Estimated samples required to achieve > 0.70 site-level reliability
Site Reliability Chart: Integration
[Chart: sample sizes needed to achieve site-level reliability for the integration domain]
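A sketch of why the number of MDs per site drives site-level sample-size requirements: the between-physician component of the error shrinks only as more physicians are sampled, not as more surveys per physician are collected. All variance components below are illustrative assumptions, not MHQP estimates:

```python
# Sketch: site-level reliability as a function of MDs per site and
# surveys per MD. Variance components are invented for illustration.
var_site = 0.015   # between-site variance (the "signal" for site reports)
var_md   = 0.020   # between-physician variance within site
var_pt   = 0.965   # patient-level (residual) variance

def site_reliability(m_mds: int, k_per_md: int) -> float:
    """Reliability of a site mean built from m MDs with k surveys each.

    Error variance of the site mean: var_md/m + var_pt/(m*k); the
    between-physician term shrinks only as more MDs are sampled.
    """
    error = var_md / m_mds + var_pt / (m_mds * k_per_md)
    return var_site / (var_site + error)

for m in (2, 5, 10):
    # smallest per-MD sample reaching 0.70, scanning surveys per MD
    for k in range(1, 500):
        if site_reliability(m, k) >= 0.70:
            print(f"{m:2d} MDs/site: {k} surveys/MD -> {m*k} total")
            break
    else:
        print(f"{m:2d} MDs/site: 0.70 not reachable at any k < 500")
```

In this toy setup, two MDs per site can never reach 0.70 no matter how many surveys are collected per MD, which is the kind of constraint a site-level sampling design has to manage.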
Setting the Stage for Public Reporting: Key Issues for Physicians
– What measures get reported
– How measures get reported
Percent of Sites with A-Level Reliability, by Measure and Survey Type

Measure                                      Adult PCP %   Pediatric %
MD–Patient Interactions
  Communication                                   98            97
  Knowledge of patient                            91            86
  Health Promotion                                46            97
  Integration of care                             79            61
Organizational/Structural Features of Care
  Access                                          99           100
  Visit-based continuity                         100            –
  Office Staff                                    95            99
  Clinical Team                                   37            86
Willingness To Recommend                          62            59
Framework for Public Reporting: Integration of Care
[Chart: score distribution with reporting cutpoints at the 15th, 50th, and 85th percentiles and a ½-interval buffer around each cutpoint]
Summary and Implications
– With sufficient sample sizes, the C/G CAHPS survey approach yields data with MD- and site-level reliability > 0.70
– For site-level reliability, the number of MDs per site influences required sample sizes
– Risk of misclassification can be held to < 5% by:
  – Limiting the number of performance categories
  – Creating a buffer (“zone of uncertainty”) around performance cutpoints
– Trade-offs are likely between data quality standards (e.g., acceptable “risk”) and data completeness
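A Monte Carlo sketch of the buffer idea from this summary: scores falling near a cutpoint are withheld from reporting (the "zone of uncertainty"), trading completeness for a lower misclassification rate. This is not the Safran et al. analysis; the reliability, cutpoints, and buffer widths are all assumptions:

```python
# Sketch: how a buffer around percentile cutpoints reduces the risk of
# misclassifying physicians. Purely illustrative; every parameter is
# an assumption, not a value from the MHQP work.
import random

random.seed(0)
N_PHYSICIANS = 2000
RELIABILITY = 0.70                      # reliability of each reported score
ERR_SD = (1 / RELIABILITY - 1) ** 0.5   # error SD when true-score SD = 1

def classify(score, cuts, buffer=0.0):
    """Assign a performance category; None if inside a buffer zone."""
    for c in cuts:
        if abs(score - c) < buffer:
            return None                  # too close to a cutpoint to call
    return sum(score > c for c in cuts)  # category index 0..len(cuts)

true = [random.gauss(0, 1) for _ in range(N_PHYSICIANS)]
observed = [t + random.gauss(0, ERR_SD) for t in true]
cuts = [-1.04, 0.0, 1.04]               # ~15th, 50th, 85th percentiles

for buffer in (0.0, 0.25, 0.5):
    wrong = unreported = 0
    for t, o in zip(true, observed):
        cat = classify(o, cuts, buffer)
        if cat is None:
            unreported += 1
        elif cat != classify(t, cuts):
            wrong += 1
    reported = N_PHYSICIANS - unreported
    print(f"buffer={buffer:.2f}: {100*wrong/reported:.1f}% of reported "
          f"misclassified; {100*unreported/N_PHYSICIANS:.1f}% withheld")
```

The share of physicians withheld as "too close to call" is the data-completeness cost referenced in the final bullet; wider buffers lower misclassification but report on fewer physicians.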
The Continuing Evolution…
– Engagement around QI activities
  – Participation in a Commonwealth Fund grant to study the highest-performing practices
  – Grant proposal to the Physician Foundation to develop and pilot an integrated clinical-patient experience QI curriculum
– Determining achievable benchmarks
– Fielding of the Specialist Care Survey in 2006/2007
– Repeat of the Primary Care Survey in 2007
For more information…
Melinda Karp, Director of Programs