
D&I Session: Measurement CPCRN Spring Meeting May 24-25, 2016 Chicago, IL This presentation was supported by Cooperative Agreement Number INSERT YOUR CENTER'S NUMBER HERE from the Centers for Disease Control and Prevention. The findings and conclusions in this presentation are those of the author(s) and do not necessarily represent the official position of the Centers for Disease Control and Prevention.

What constructs (and measures) are relevant for D&I research?

Consolidated Framework for Implementation Research (Damschroder et al., 2009)

CFIR Constructs
- Characteristics of Individuals: knowledge and beliefs about the intervention, individual stage of change, individual identification with the organization, self-efficacy, other personal attributes
- Intervention Characteristics: adaptability, complexity, cost, design quality and packaging, evidence strength and quality, intervention source, relative advantage, trialability
- Inner Setting: culture, implementation climate, networks and communications, readiness for implementation, structural characteristics
- Outer Setting: cosmopolitanism, external policy and incentives, patient needs and resources, peer pressure
- Process: planning, engaging, executing, reflecting and evaluating
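
As an aside not on the slide: the domain-to-construct structure above maps naturally onto a simple lookup table. A minimal Python sketch (construct lists abbreviated, names illustrative) of how the framework might be encoded, for example when tagging survey items by construct:

```python
# Sketch: CFIR's domain -> construct taxonomy as a dict, e.g. for
# tagging survey items by construct. Construct lists are abbreviated.
CFIR = {
    "Characteristics of Individuals": ["knowledge and beliefs", "self-efficacy"],
    "Intervention Characteristics": ["adaptability", "complexity", "relative advantage"],
    "Inner Setting": ["culture", "implementation climate", "readiness for implementation"],
    "Outer Setting": ["cosmopolitanism", "peer pressure"],
    "Process": ["planning", "engaging", "executing", "reflecting and evaluating"],
}

# Invert the mapping to look up a construct's domain.
domain_of = {c: d for d, cs in CFIR.items() for c in cs}
print(domain_of["implementation climate"])  # Inner Setting
```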

Implementation Outcomes Framework (adapted from Proctor et al., 2011)

Service Outcomes
- Efficiency (# served / resources)
- Safety (# adverse outcomes / population served)
- Effectiveness (# positive outcomes / population served)
- Equity (variation in access or quality / population served)
- Patient-centeredness (patient-reported outcomes)
- Timeliness (time to service or follow-up)
Adapted from Chambers, TIDIRH 2015
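
A minimal sketch, not from the slides, of how the ratio-style service outcomes above might be computed; all counts and field names are hypothetical:

```python
# Hypothetical illustration of the ratio-style service outcomes above.
def service_outcomes(n_served, resources_used, n_adverse, n_positive, pop_served):
    """Return the ratio-based service outcomes as a dict."""
    return {
        "efficiency": n_served / resources_used,   # served per unit of resources
        "safety": n_adverse / pop_served,          # adverse outcomes per person served
        "effectiveness": n_positive / pop_served,  # positive outcomes per person served
    }

print(service_outcomes(n_served=480, resources_used=12.0,
                       n_adverse=3, n_positive=310, pop_served=480))
```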

Implementation Outcomes
- Acceptability: perception that a given treatment, service, practice, or innovation is agreeable, palatable, or satisfactory.
- Adoption: intention, initial decision, or action to try or employ an innovation or evidence-based practice.
- Appropriateness: perceived fit, relevance, or compatibility of an EBP for a given setting, provider, consumer, issue, or problem.
- Cost: cost impact of an implementation effort.
- Feasibility: extent to which an EBP could be successfully used or carried out within a given setting.
- Fidelity: degree to which an EBP was implemented as prescribed in the original protocol or as intended by its developers.
- Penetration: degree of integration of an EBP within a service setting.
- Sustainability: extent to which an implemented EBP is maintained within a service setting's ongoing, stable operations.
Adapted from Proctor et al., 2011

RE-AIM, a brief mention
A great framework for evaluating the public health impact of an EBP:
- Reach
- Effectiveness
- Adoption
- Implementation
- Maintenance

Looking for a Few Good Measures

Instrumentation Issues
- Non-use of theories and frameworks
- Minimal psychometric testing and reporting
- Frequent use of home-grown, use-once measures
- Over-reliance on self-report and common methods
- Lack of attention to practicality
- Need for a searchable instrument repository

Advancing IS through Measure Development & Evaluation
- Aim 1: Establish a stakeholder-driven operationalization of pragmatic measures and develop reliable, valid rating criteria for assessing pragmatic strength
- Aim 2: Develop reliable, valid, pragmatic measures of three critical implementation outcomes (acceptability, appropriateness, feasibility)
- Aim 3: Identify CFIR- and IOF-linked measures that demonstrate psychometric and pragmatic strength
CFIR: Consolidated Framework for Implementation Research (Damschroder et al., 2009)
IOF: Implementation Outcomes Framework (Proctor et al., 2011)
PI: Cara Lewis, Ph.D.

Systematic Measure Development & Testing Process
Development Process:
- Domain delineation
- Item generation
- Substantive validity
- Structural validity
- Reliability (incl. test-retest)
- Predictive validity
Additional Assessment:
- Discriminant validity
- Known-groups validity
- Structural invariance
- Sensitivity to change
- Other pragmatic features
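
To make the reliability step concrete, here is a sketch with simulated data of two common checks from the list above: Cronbach's alpha for internal consistency and a Pearson correlation for test-retest reliability.

```python
# Simulated-data sketch of two reliability checks: Cronbach's alpha
# (internal consistency) and Pearson r (test-retest).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of item scores."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=100)  # true trait per simulated respondent
scale = np.column_stack([latent + rng.normal(scale=0.5, size=100) for _ in range(4)])
print(f"alpha = {cronbach_alpha(scale):.2f}")

# Test-retest: same respondents measured twice, correlate total scores.
t1 = scale.sum(axis=1)
t2 = t1 + rng.normal(scale=0.8, size=100)
print(f"test-retest r = {np.corrcoef(t1, t2)[0, 1]:.2f}")
```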

Measurement Work in FQHC-3
- Selected and adapted measures for 16 of 39 CFIR constructs:
  – Relative advantage, complexity, compatibility
  – External policies and incentives, patient needs and resources
  – Culture, implementation climate, learning climate, readiness for implementation, and available resources
  – Knowledge and beliefs, openness to innovation
  – Engaging champions, executing, reflecting & evaluating, goals & feedback
- Tried to keep the measures brief (practical)
- Developed 2 web-based surveys, one clinic-level and one individual-level, to address the level-of-measurement issue
- For clinics using multiple evidence-based CRC screening interventions, populated intervention-specific items that prioritized provider prompts first, followed by client reminders
- Received surveys from 277 clinic staff at 59 clinics
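
A sketch of the level-of-measurement point above: individual staff responses rolled up to clinic-level scores. Column names are illustrative, not the actual FQHC-3 survey fields:

```python
# Illustrative roll-up of individual responses to clinic-level scores.
import pandas as pd

responses = pd.DataFrame({
    "clinic_id": ["A", "A", "A", "B", "B"],
    "implementation_climate": [4, 5, 3, 2, 3],  # 1-5 Likert item (hypothetical)
    "readiness": [3, 4, 4, 2, 2],
})

# Clinic-level score = mean of individual responses within each clinic.
clinic_scores = responses.groupby("clinic_id").mean()
print(clinic_scores)
```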

Measurement Issues in FQHC-3
- No measures available
- Available measures have unknown psychometric qualities
- New measures created or adapted for CPCRN use
- General versus specific item referents
- Personal versus collective item referents
- Level-of-measurement issues
- Response scaling incompatibility
- Targeting of appropriate survey respondent(s)
- Balancing psychometric and pragmatic qualities (e.g., length)

Discussion What measurement issues are coming up in your Signature Project or Workgroup?

Resources

Grid-Enabled Measures (GEM) (GEM Homepage) (IS Team Website)

GEM-D&I Adapted from Chambers, TIDIRH 2015

SIRC Instrument Review Project (IRP)
- Step 1: Systematic Review
- Step 2: Evidence-Based Assessment Criteria
- Step 3: Consensus Battery
Output: Online Instrument Repository for SIRC Members

SIRC IRP: Evidence-Based Assessment Criteria
Rating scale: 0 = None, 1 = Minimal, 2 = Adequate, 3 = Good, 4 = Excellent
Criteria rated:
- Reliability: internal consistency
- Validity: structural, predictive
- Sensitivity: responsiveness
- Practicality: usability, norms
Based on Hunsley & Mash (2008) and Terwee et al. (2012). Vetted by 58 SIRC SPG members, reviewed by 2 test developers, and piloted by Bryan Weiner's team at UNC. Preliminary evidence demonstrates strong reliability. Total possible score is 24.
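
A small sketch of how an instrument's total score under this rubric can be tallied: six criteria, each rated 0 (None) through 4 (Excellent), giving the maximum of 24 noted above. The ratings shown are invented:

```python
# Invented ratings tallied under the SIRC IRP rubric (six criteria,
# each rated 0-4, maximum total 24).
CRITERIA = ["internal_consistency", "structural_validity", "predictive_validity",
            "responsiveness", "usability", "norms"]

ratings = {"internal_consistency": 3, "structural_validity": 2,
           "predictive_validity": 1, "responsiveness": 0,
           "usability": 4, "norms": 2}

assert set(ratings) == set(CRITERIA)
assert all(0 <= r <= 4 for r in ratings.values())
print(f"total = {sum(ratings.values())} / {4 * len(CRITERIA)}")  # total = 12 / 24
```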

Literature Cited