Designing Evaluation for Complex Multiple-Component Programs

Presentation transcript:

Designing Evaluation for Complex Multiple-Component Programs
Genevieve deAlmeida-Morris
National Institute on Drug Abuse, National Institutes of Health, Dept. of Health and Human Services

Designing Evaluation for Complex Multiple-Component Research Programs
Research programs are complex:
- Biomedical research programs impact health and lives for generations
- Counting and valuing benefits from the programs is difficult
- Programs have time-related effects: benefits from averted incidence of disease are not actual, realized benefits
- Biomedical research is an industry of increasing costs, with economic efficiency only from translation to treatment and prevention

Contextual Dimensions of NIH Programs
Multiple components in a program:
- Components working together, providing a service/product for use by other components conducting research
- Awards as self-contained components with a set of functions: interdisciplinary research, clinical and translational research, community engagement
- Development of disciplines from these; research training
- Components at different stages in function, with different progress rates

Contextual Dimensions
Context from administration of the program:
- Scientist-managers administer the programs for research conduct and progress, and for compliance with regulations
- They carry the legal function and authority that evaluation does not have

Contextual Dimensions
Contexts from the NIH Roadmap Initiative:
- Funded from the Common Fund
- Co-administration of a component: more than one scientist-manager lead, from different Institutes
- An over-arching workgroup participating in planning and decision-making, self-selected, from the Institutes
- A set of Institute Directors
- Decisions go through multiple levels of review

Contextual Dimensions
In addition:
- New projects are funded each year
- A research component can be added to individual projects funded by one or more Institutes, not by the Common Fund
- The Roadmap program must be integrated: integration of components and integration of functions

The Stakeholder Context Evaluation Faces
Evaluation faces a broader concept of 'stakeholder':
- A much larger group of program managers
- A changing and/or increasing set of PIs conducting the research
- Independence of PIs in the research

Constraints on the Role of Evaluation
- The evaluation must remain distinct from administration of the program
- It has no authority over the conduct of the science
- It cannot constrain the conduct of the science
- It must have concern for respondent burden
What's left? A role of documenting and reporting, rather than authoritative program change.

Identifying the Challenges for Evaluation
- Establishing its credibility before a scientific community
- No equivocation with the legal functions and responsibility of the NIH administration
- Determining what will meet the approval of diverse overseers and stakeholders
- Meeting methodological requirements, with information collection that avoids respondent burden
- Levels of assessment tailored to each component

The Challenges for Evaluation
- The need for everyone to understand evaluation, and evaluation terms, in the same way
- The need for consensus among evaluation leads and among science program administrators
- The need for comparability of reported information from one funded project to another

What We Did in Planning the Evaluation for All Contexts
We worked outside the box of contextual dimensions, but still among them, without challenging them: putting Wholey, Rutman, and Rossi to the test.

Evaluation for All Contexts: The Requirements
Evaluation discussions, 'white papers', and 'lessons' in evaluation to help understanding:
- of the total design of an evaluation
- to explain each next step
- at-a-glance evaluation plans
- iteratively among these
- or Q&As on rationale/explanation
A big effort for understanding, and correct conceptualizing, from all.

Evaluation for All Contexts
The approach in evaluation: Evaluation by Objectives
- objectives in the Requests for Applications
- objectives in the awarded projects

Evaluation for All Contexts
Conducted Evaluability Assessment (from the literature):
- User surveys: What would be best to show program achievement? What in a program can be subjected to realistic and meaningful evaluation?
- Tailored the surveys to the content of components
- Or conducted analysis of research applications to categorize them

Evaluation for All Contexts
To accompany the user survey, we developed simple logic models of the program components to operationalize the program and its components:
- goals, objectives, and the activities to achieve them
- no time specified for outputs or outcomes
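As a purely illustrative aside (not part of the original presentation), a simple component logic model of this kind can be sketched as a small data structure; the component name, goals, objectives, and activities below are hypothetical.

```python
# Illustrative sketch only: a minimal logic-model record with hypothetical
# field names; the actual NIH models were developed with science program staff.
from dataclasses import dataclass, field
from typing import List


@dataclass
class LogicModel:
    component: str                                        # program component being modeled
    goals: List[str] = field(default_factory=list)
    objectives: List[str] = field(default_factory=list)
    activities: List[str] = field(default_factory=list)   # activities to achieve the objectives
    # Per the slide, no time frames are attached to outputs or outcomes at this stage.


# Hypothetical example of a drafted model circulated for review with the user survey.
example = LogicModel(
    component="Interdisciplinary Research (hypothetical)",
    goals=["Develop cross-disciplinary research capacity"],
    objectives=["Establish interdisciplinary research teams"],
    activities=["Fund consortium awards", "Support shared core facilities"],
)
```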

Evaluation for All Contexts
We asked, for example:
- Is this model correct? Does it portray the program concept?
- What would be best to show program achievement? By the required reporting times, what will be ready in the program?
We emphasized the need for accountability, but with fairness to the program.
This generated enthusiasm for a correct program model, an enthusiasm to perfect the model.
We achieved a crucial step towards ownership of the evaluation.
A case for ex ante Evaluability Assessment.

Evaluation for All Contexts
- Persisted with 'white papers' or paragraphs, or Q&As on rationale/explanation
- Were able to guide their participation
- Were able to develop evaluable models of the program to share with the program-manager stakeholders
The models are described in the following slides.

Evaluation for All Contexts
Specified definitively the phases of the evaluation:
- Process evaluation: if a reporting time is required before program/funding-period completion
- Outcome or goal-oriented evaluation: at program completion/funding-period completion
- Impact evaluation
- Utilization assessment
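Purely as an illustration (not from the presentation's own tooling), the timing rule in this slide can be written as a small helper; the dates used in the example are made up.

```python
# Illustrative sketch only: pick the evaluation phase implied by the slide's rule,
# based on where the reporting date falls relative to the funding period's end.
from datetime import date


def evaluation_phase(reporting_date: date, funding_period_end: date) -> str:
    """Return the evaluation phase implied by the reporting date."""
    if reporting_date < funding_period_end:
        return "process evaluation"
    if reporting_date == funding_period_end:
        return "outcome (goal-oriented) evaluation"
    return "impact evaluation / utilization assessment"


# Example with hypothetical dates.
print(evaluation_phase(date(2010, 6, 1), date(2011, 9, 30)))  # -> process evaluation
```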

Evaluation for All Contexts
The evaluable models had:
- objectives, activities, anticipated outcomes, time to achieve, and indicators of achievement
- the contexts were not compromised
Developed with participation from science program staff.

Evaluation for All Contexts
- We developed evaluation questions for the evaluable objectives specified
- We specified information sources
- We convinced program staff of the need for the primary-source information
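As one more illustrative sketch (the objective, indicators, and sources shown are hypothetical), each evaluation question can be recorded together with the evaluable objective it addresses, its indicators of achievement, and the agreed primary information sources.

```python
# Illustrative sketch only: tie an evaluation question to an evaluable objective,
# its indicators of achievement, and the agreed primary information sources.
from dataclasses import dataclass
from typing import List


@dataclass
class EvaluationQuestion:
    objective: str            # evaluable objective from the model
    question: str
    indicators: List[str]     # indicators of achievement
    sources: List[str]        # primary information sources


question = EvaluationQuestion(
    objective="Provide interdisciplinary research training (hypothetical)",
    question="How many trainees completed cross-disciplinary rotations by the reporting date?",
    indicators=["Number of trainees", "Number of disciplines represented"],
    sources=["Annual progress reports", "PI survey (primary source)"],
)
```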

So Do We Have Evaluation Design and Planning for Multiple Contexts?
We used the approach and Evaluability Assessment for simpler programs, with much success. We introduced these for four complex programs:
- National Centers for Biomedical Computing
- The Roadmap Interdisciplinary Research and Research Training
- The Roadmap Clinical and Translational Science Awards
- The Roadmap Epigenomics Program
We progressed farthest with the last program.
Success = the approach in evaluation was accepted; programs were evaluated; evaluation information was informative and useful.

Evaluation Design and Planning for Multiple Contexts?
- We have consensus for the evaluation approach
- Buy-in from the Workgroup
- Participatory evaluation and ownership from science program managers
- No backward steps, and feasible evaluation questions
- Evaluation that works with the NIH Roadmap concept: it can accommodate the different study periods and objectives of new projects from re-issue, and of stimulus-funded projects
The contexts are under control.

Anticipating Some Problems
If an 'integrated program' is what makes the difference, how is it to be measured?
- By opinion or by objective indicators?
- Ex post or ex ante?
- The earliest projects will be conducted before integration: can a 'begin' time be specified?
- Can we show integration as a program objective, or as value added?
- Integration of components with different functions will be more challenging than integration of components performing the same functions
A tailored evaluation with external validity? We may have to 'pull back' from some evaluation questions.

Designing Evaluation for Complex Multiple-Component Research Programs
From our experience:
- Ex ante Evaluability Assessment
- Evaluation by Objectives
- Tailoring the evaluation
will get:
- Ownership and participatory evaluation from the science program managers
- Accountability with fairness to the program
- Accommodation of the multiple contexts, without evaluation intervening in the contexts

Selected References
Rossi, P.H., & Freeman, H.E. (1993). Evaluation: A Systematic Approach (5th ed.). Newbury Park, CA: Sage Publications.
Rutman, L. (1980). Planning Useful Evaluations: Evaluability Assessment. Beverly Hills, CA: Sage Publications.
Roessner, D. (2000). Choice of measures: Quantitative and qualitative methods and measures in the evaluation of research. Research Evaluation, 8(2), 125-132.
Thank you