D&I Session: Measurement
CPCRN Spring Meeting, May 24-25, 2016, Chicago, IL
This presentation was supported by Cooperative Agreement Number INSERT YOUR CENTER'S NUMBER HERE from the Centers for Disease Control and Prevention. The findings and conclusions in this presentation are those of the author(s) and do not necessarily represent the official position of the Centers for Disease Control and Prevention.
What constructs (and measures) are relevant for D&I research?
Consolidated Framework for Implementation Research Damschroder et al., 2009
CFIR Constructs
– Characteristics of Individuals: knowledge and beliefs about the intervention, individual stage of change, individual identification with the organization, self-efficacy, other personal attributes
– Intervention Characteristics: adaptability, complexity, cost, design quality and packaging, evidence strength and quality, intervention source, relative advantage, trialability
– Inner Setting: culture, implementation climate, networks and communications, readiness for implementation, structural characteristics
– Outer Setting: cosmopolitanism, external policy and incentives, patient needs and resources, peer pressure
– Process: planning, engaging, executing, reflecting and evaluating
Implementation Outcomes Framework Adapted from Proctor et al., 2011
Service Outcomes
– Efficiency (# served / resources)
– Safety (# adverse outcomes / population served)
– Effectiveness (# positive outcomes / population served)
– Equity (variation in access or quality / population served)
– Patient-centeredness (patient-reported outcomes)
– Timeliness (time to service or follow-up)
Adapted from Chambers, TIDIRH 2015
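The service outcomes above are simple ratios. A minimal sketch of how they could be computed for a clinic's annual data (the function names and the numbers below are hypothetical, for illustration only; they are not from the slides):

```python
def efficiency(n_served, resource_units):
    """Efficiency: number served per unit of resources expended."""
    return n_served / resource_units

def safety(n_adverse, pop_served):
    """Safety: adverse outcomes per population served."""
    return n_adverse / pop_served

def effectiveness(n_positive, pop_served):
    """Effectiveness: positive outcomes per population served."""
    return n_positive / pop_served

# Hypothetical clinic year: 450 patients served, 30 staff FTEs,
# 3 adverse events, 380 positive screening outcomes.
print(efficiency(450, 30))    # 15.0 patients served per FTE
print(safety(3, 450))         # ~0.0067 adverse events per patient
print(effectiveness(380, 450))  # ~0.84
```

Equity and timeliness follow the same pattern but require a distribution (variation across subgroups) or a duration (time to service) rather than a single count.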
Implementation Outcomes
– Acceptability: perception that a given treatment, service, practice, or innovation is agreeable, palatable, or satisfactory.
– Adoption: intention, initial decision, or action to try or employ an innovation or evidence-based practice.
– Appropriateness: perceived fit, relevance, or compatibility of an EBP for a given setting, provider, consumer, issue, or problem.
– Cost: cost impact of an implementation effort.
– Feasibility: extent to which an EBP could be successfully used or carried out within a given setting.
– Fidelity: degree to which an EBP was implemented as prescribed in the original protocol or as intended by its developers.
– Penetration: degree of integration of an EBP within a setting.
– Sustainability: extent to which an implemented EBP is maintained within a service setting's ongoing, stable operations.
Adapted from Proctor et al., 2011
RE-AIM, a brief mention
A great framework for evaluating the public health impact of an EBP:
– Reach
– Effectiveness
– Adoption
– Implementation
– Maintenance
Looking for a Few Good Measures
Instrumentation Issues
– Non-use of theories and frameworks
– Minimal psychometric testing and reporting
– Frequent use of home-grown, use-once measures
– Over-reliance on self-report and common methods
– Lack of attention to practicality
– Need for a searchable instrument repository
Advancing IS through Measure Development & Evaluation
Aim 1: Establish a stakeholder-driven operationalization of pragmatic measures and develop reliable, valid rating criteria for assessing pragmatic strength
Aim 2: Develop reliable, valid, pragmatic measures of three critical implementation outcomes (acceptability, appropriateness, feasibility)
Aim 3: Identify CFIR- and IOF-linked measures that demonstrate psychometric and pragmatic strength
CFIR: Consolidated Framework for Implementation Research (Damschroder et al., 2009)
IOF: Implementation Outcomes Framework (Proctor et al., 2011)
PI: Cara Lewis, Ph.D.
Systematic Measure Development & Testing Process
Development Process:
– Domain delineation
– Item generation
– Substantive validity
– Structural validity
– Reliability (incl. test-retest)
– Predictive validity
Additional Assessment:
– Discriminant validity
– Known-groups validity
– Structural invariance
– Sensitivity to change
– Other pragmatic features
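To make one of these psychometric steps concrete, here is a minimal sketch of an internal-consistency check (Cronbach's alpha), one common reliability statistic for multi-item measures. The formula and code are standard, but the item data below are invented for illustration; they are not from any CPCRN survey:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: (k / (k-1)) * (1 - sum of item variances / variance of totals).

    item_scores: list of lists, where item_scores[i][r] is respondent r's
    score on item i.
    """
    k = len(item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # per-respondent totals
    item_var = sum(pvariance(scores) for scores in item_scores)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Hypothetical 3-item scale answered by 5 respondents (1-5 Likert)
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
alpha = cronbach_alpha(items)  # ~0.886 for these made-up data
```

Values above roughly 0.7-0.8 are conventionally read as adequate internal consistency, which is one of the criteria scored in the assessment work described later in this deck.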
Measurement Work in FQHC-3
Selected and adapted measures for 16 of 39 CFIR constructs:
– Relative advantage, complexity, compatibility
– External policies and incentives, patient needs and resources
– Culture, implementation climate, learning climate, readiness for implementation, and available resources
– Knowledge and beliefs, openness to innovation
– Engaging champions, executing, reflecting & evaluating, goals & feedback
Tried to keep the measures brief (practical)
Developed two web-based surveys (clinic and individual) to address the level-of-measurement issue
For clinics using multiple evidence-based CRC screening interventions, populated intervention-specific items that prioritized provider prompts first, followed by client reminders
Received surveys from 277 clinic staff at 59 clinics
Measurement Issues in FQHC-3
– No measures available
– Available measures have unknown psychometric qualities
– New measures created or adapted for CPCRN use
– General versus specific item referents
– Personal versus collective item referents
– Level-of-measurement issues
– Response scaling incompatibility
– Targeting of appropriate survey respondent(s)
– Balancing psychometric and pragmatic qualities (e.g., length)
Discussion What measurement issues are coming up in your Signature Project or Workgroup?
Resources
Grid-Enabled Measures (GEM) (GEM Homepage) (IS Team Website)
GEM-D&I Adapted from Chambers, TIDIRH 2015
SIRC Instrument Review Project (IRP)
Step 1: Systematic Review
Step 2: Evidence Based Assessment Criteria
Step 3: Consensus Battery
Online Instrument Repository for SIRC Members
SIRC IRP: Evidence Based Assessment Criteria
Rating scale: 0 = None, 1 = Minimal, 2 = Adequate, 3 = Good, 4 = Excellent
– Reliability: internal consistency
– Validity: structural, predictive
– Sensitivity: responsiveness
– Practicality: usability, norms
Based on Hunsley & Mash (2008) and Terwee et al. (2012)
Vetted by 58 SIRC SPG members, reviewed by 2 test developers, and piloted by Bryan Weiner's team at UNC
Preliminary evidence demonstrates strong reliability
Total possible score is 24
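The rubric above rates six criteria on a 0-4 scale, so the maximum total is 6 × 4 = 24. A minimal sketch of that tally (the criterion keys and dictionary layout here are my own illustration, not SIRC's actual data format):

```python
# Rating anchors from the SIRC IRP criteria
RATING_SCALE = {0: "None", 1: "Minimal", 2: "Adequate", 3: "Good", 4: "Excellent"}

# Six rated criteria: reliability (1), validity (2), sensitivity (1), practicality (2)
CRITERIA = [
    "internal_consistency",
    "structural_validity",
    "predictive_validity",
    "responsiveness",
    "usability",
    "norms",
]

def total_score(ratings):
    """Sum per-criterion ratings (each 0-4); maximum is 4 * 6 = 24."""
    assert set(ratings) == set(CRITERIA), "rate every criterion exactly once"
    assert all(r in RATING_SCALE for r in ratings.values()), "ratings must be 0-4"
    return sum(ratings.values())

# A hypothetical instrument rated Excellent on everything scores the maximum:
total_score({c: 4 for c in CRITERIA})  # 24
```

Keeping the rubric in code like this makes it easy to check that every instrument was rated on the same criteria before comparing totals.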
SIRC IRP
Literature Cited