Evaluability Assessments: Achieving Better Evaluations, Building Stronger Programs Nicola Dawkins, PhD, MPH ICF Macro
Project Team
The findings and conclusions presented are those of the authors and do not necessarily represent the official position of the agencies.
Robert Wood Johnson Foundation: Laura Leviton, PhD
Centers for Disease Control and Prevention: DNPAO – Laura Kettel Khan, PhD; DASH – Leah Robin, PhD, and Seraphine Pitt Barnes, PhD, MPH, CHES; DACH/PRC – Jo Anne Grunbaum, EdD
Centers for Disease Control and Prevention Foundation: Danielle Jackson, MPH; John Moore, PhD, RN; and Holly Wethington, PhD
Macro International Inc.: David Cotton, PhD, MPH; Nicola Dawkins, PhD, MPH; Karen Cheung, MPH; Mary Ann Hall, MPH; Thearis Osuji, MPH; and Starr Rice, BA
Will Discuss Today
– Introduction to EA, compared with full evaluation
– Purpose of the Early Assessment project
– One unique process for using multiple EA methods
– Steps in the project
– Results
– Insights and conclusions
Evaluability Assessment
Assesses:
1. Underlying program logic
2. Current state of program implementation
3. Feasibility of conducting rigorous outcomes-focused evaluation or other sorts of evaluation
Assists in improving program design, implementation, and evaluation characteristics.
Key questions:
– Is the intervention promising?
– Does the intervention have program design integrity and realistic, achievable goals?
– Is the intervention implemented as intended and at an appropriate developmental level?
To answer these questions: (1) Is there a feasible design? (2) Are data available or feasible to collect?
If yes to both, the intervention is evaluable; if not, it is not yet evaluable.
CDC Framework for Program Evaluation
Steps:
1. Engage stakeholders
2. Describe the program
3. Focus the evaluation design
4. Gather credible evidence
5. Justify conclusions
6. Ensure use and share lessons learned
Evaluability Steps Compared to CDC’s Evaluation Framework
CDC Framework → Evaluability Steps:
– Engage stakeholders → Involve stakeholders and intended users
– Describe the program → Clarify program intent; determine program implementation
– Focus the evaluation design → Work with stakeholders to prioritize key evaluation questions
– Gather credible evidence → Explore designs and measurements
– Justify conclusions → Agree on intended uses
– Ensure use and share lessons learned
Multiple EA Example
– Convene a panel of experts to identify and review potential environmental programs and policies
– Assess environmental programs’ and policies’ readiness for evaluation
– Synthesize findings and share promising practices with the field
– Develop a network of public health and evaluation professionals with the skills to conduct evaluability assessments
Unique Systematic Screening and Assessment (SSA) Method
Inputs: nominations, existing inventories, descriptions; expert review panel; distributed network of practitioners/researchers
Steps and products:
1. CHOOSE priorities → focus
2. SCAN environmental interventions → brief descriptions
3. REVIEW AND IDENTIFY INTERVENTIONS that warrant evaluability assessment → list of interventions
4. EVALUABILITY ASSESSMENTS of priority interventions → report on each intervention
5. REVIEW AND RATE interventions for promise/readiness for evaluation → ratings and reports
6. USE information → constructive feedback; plan for rigorous evaluation
7. SYNTHESIZE what is known → report of intervention and evaluation issues
Guidance throughout: expert review panel; communicate with all stakeholders
Systematic Process

Program Area                               Nominations Received   Met Inclusion Criteria
After School/Daycare                       81                     34
Food Access                                55                     23
School District Local Wellness Policies    146                    58
Systematic Process, Cont’d
Expert panel selected 26 using these criteria:
– Potential impact
– Innovativeness
– Reach
– Acceptability to stakeholders
– Feasibility of implementation
– Feasibility of adoption
– Sustainability
– Generalizability/transportability
– Staff/organization capacity for evaluation
Selected Programs and Policies (Year 1)
7 After School / 3 Daycare Programs:
– 5 programs: PA time, nutritious snacks
– 4 programs: PA time, nutrition education
– 1 policy: PA, nutrition, TV screen time
10 Food Access Programs:
– 5 farmers’ markets
– 3 supermarket or corner store programs
– 2 restaurant programs
6 School District Local Wellness Policies:
– All selected addressed PA and nutrition
Evaluability Assessment
– Review of documents
  – Draft logic model
– 2–3 day site visit
  – Interviews: program description, logic model, staffing, funding, sustainability, evaluation activities
  – Observations
  – TA/debriefing session
– Reports and recommendations
– Follow-up TA call with CDC experts
Readiness for Evaluation
Review of site visit reports identified four classifications:
1. Ready for stand-alone, outcome evaluation
2. Appropriate for cluster evaluation
3. Theoretically sound but in need of further development
4. Technical assistance needed in specific areas
Results for Year 1
Expert panel determined:
– 14 ready for stand-alone, outcome evaluation
– 2 best suited for cluster evaluation
– 3 theoretically sound but in need of further development
– 6 in need of TA in specific areas
Results for Year 1, Cont’d
– Dissemination of results from Year 1
– Full evaluation planned for New York City Daycare Policy
Discovering Practice-Based Evidence
The SSA Method builds the evidence base through practice-based evidence.
Year 1: 282 nominations → 26 EAs → 9 of high potential impact, ready for evaluation
Year 2
Year 2 completed EAs of 27 initiatives.

Program Area                  Nominations Received   Met Inclusion Criteria   Selected
After School/Daycare          —                      —                        —
Food Access                   29                     11                       8
Comprehensive School PA       39                     7                        2
Built Environment for PA      22                     14                       4
Discovering Practice-Based Evidence
Year 2: 176 nominations → 27 EAs → 11 of high potential impact, ready for evaluation
Key Lessons Learned
– Use an expert panel for diverse perspectives
– Solicit broadly to maximize return
– Include programs/policies beyond the start-up phase to ensure implementation
– Centralize oversight for methodological integrity
– Provide technical assistance as an incentive to sites
Recap: It’s a Process
1. Choose priorities for the scan
2. Scan environmental programs & policies
3. Review and identify those that warrant evaluability assessment
4. Evaluability assessment of programs & policies
5. Review and rate for promise and readiness for evaluation
6. Use information:
   – Position for rigorous evaluation
   – Feedback to innovators
   – Cross-site synthesis
Overview of General EA vs. SSA Method
What is different?
– EA is one component of a process of discovery
– The SSA Method explicitly provides feedback to innovators
– The SSA Method provides insights on clusters of projects
– The SSA Method helps identify policies and programs worthy of further attention
What is the same?
– Review documents
– Discuss with stakeholders
– Develop logic model
– Iterate the process
– Determine what can be evaluated
The Cost-Savings Factor
Of 458 innovations nominated in both years:
– 174 met criteria for inclusion; 53 were selected for evaluability assessments
– 20 were of high potential impact and ready for stand-alone evaluation
– Yet all of the nominations were viewed as important by stakeholders
– If all 458 had been evaluated, only about 4% (20 of 458) would have had a strong likelihood of demonstrating success
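The screening-funnel figures above can be double-checked with a few lines of arithmetic. This is a minimal sketch using only the counts reported in the slides; the variable names are illustrative, not part of the SSA Method itself.

```python
# Screening-funnel arithmetic for the combined Year 1 + Year 2 results.
# All counts are taken directly from the slides.
nominated = 458      # innovations nominated across both years
met_criteria = 174   # met the inclusion criteria
assessed = 53        # selected for evaluability assessment
eval_ready = 20      # high potential impact, ready for stand-alone evaluation

# Share of all nominations that proved ready for rigorous evaluation:
# this is the slide's "about 4%" figure.
yield_rate = eval_ready / nominated
print(f"Evaluation-ready yield: {yield_rate:.1%}")
```

Running the sketch prints a yield of roughly 4%, which is the basis for the cost-savings argument: screening first avoids funding full evaluations of the other 96% of nominations.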
Conclusion 1
Without a systematic process, one would need to conduct at least 20 evaluations to discover one that might be successful. The process is cost-effective for funders and decision makers, and it reduces uncertainty about evaluation investments.
Conclusion 2 Innovators found the process very helpful. Evaluability assessment plays a program development role.
Conclusion 3
Themes and issues emerged for clusters of policies and programs. Evaluability assessments can be configured to cast new light on:
– developments in the field
– families or clusters of policies and programs
Impact on the Field of Prevention
“Translating practice into evidence”
– A new method of topic selection and program identification
– Researchers highly engaged by learning about practice
– Stimulated discussion of new research agendas
Nicola Dawkins