Designing Evaluation for Complex Multiple-Component Programs

1 Designing Evaluation for Complex Multiple-Component Programs
Genevieve deAlmeida-Morris, National Institute on Drug Abuse, National Institutes of Health, Department of Health and Human Services

2 Designing Evaluation for Complex Multiple-Component Research Programs
Research programs are complex:
- Biomedical research programs impact health and lives for generations
- Counting and valuing benefits from the programs is difficult
- Programs have time-related effects: benefits from averted incidence of disease are not actual
- Biomedical research is an increasing-cost industry, with economic efficiency coming only from translation to treatment and prevention

3 Contextual Dimensions of NIH Programs
Multiple components in a program:
- components working together, providing a service or product for use by other components conducting research
- awards as a self-contained component with a set of functions: interdisciplinary research, clinical and translational research, community engagement
- development of disciplines from these, and research training
- components at different stages in function, with different progress rates

4 Contextual Dimensions
Context from administration of the program:
- scientist-managers administering the programs for research conduct and progress, and for compliance with regulations
- with the legal function and authority that evaluation does not have

5 Contextual Dimensions
Contexts from the NIH Roadmap Initiative:
- funded from the Common Fund
- co-administration of a component: more than one scientist-manager lead, from different Institutes
- an over-arching workgroup participating in planning and decision-making, self-selected, from the Institutes
- a set of Institute Directors
- decisions go through multiple levels of review

6 Contextual Dimensions
In addition:
- new projects are funded each year
- a research component can be added to individual projects funded by an Institute(s), not by the Common Fund
- the Roadmap program must be integrated: integration of components, integration of functions

7 The Stakeholder Context Evaluation Faces
Evaluation faces a broader concept of 'stakeholder':
- a much larger group of program managers
- a changing and/or increasing set of PIs conducting the research
- independence of the PIs in the research

8 Constraints on the Role of Evaluation
- The evaluation must remain distinct from administration of the program
- It has no authority over the conduct of the science
- It cannot constrain the conduct of the science
- It must have concern for respondent burden
What's left? A role of documenting and reporting rather than authoritative program change

9 Identifying the Challenges for Evaluation
- Establishing its credibility before a scientific community
- No equivocation with the legal functions and responsibility of NIH administration
- Determining what will meet the approval of diverse overseers and stakeholders
- Methodological requirements, and information collection without respondent burden
- Levels of assessment tailored to each component

10 The Challenges for Evaluation
- The need for everyone to understand evaluation, and evaluation terms, in the same way
- The need for consensus among evaluation leads and among science program administrators
- The need for comparability of reported information from one funded project to another

11 What We Did in Planning the Evaluation for all Contexts
Working outside the box of contextual dimensions but still among them, without challenging them: putting Wholey, Rutman, and Rossi to the test

12 Evaluation for all Contexts: the requirements
Evaluation discussions: 'white papers' and 'lessons' in evaluation to help understanding
- of the total design of an evaluation
- to explain each next step
- at-a-glance evaluation plans
- iteratively among these
- or Q&As on rationale/explanation
A big effort for understanding, and correct conceptualizing, from all

13 Evaluation for all Contexts
The approach in evaluation: Evaluation by Objectives
- objectives in the Requests for Applications
- objectives in the awarded projects

14 Evaluation for all Contexts….?
Conducted Evaluability Assessment (from the literature)
- user surveys: What would be best to show program achievement? What in a program can be subjected to realistic and meaningful evaluation?
- tailored the surveys to the content of components
- or conducted analysis of research applications to categorize them

15 Evaluation for all Contexts……?
To accompany the user survey we developed simple logic models of the program components to operationalize the program and its components:
- goals, objectives, and activities to achieve them (no time specified for outputs or outcomes)

16 Evaluation for all Contexts....?
We asked (e.g.): Is this model correct? Does it portray the program concept? What would be best to show program achievement? By the required reporting times, what will be ready in the program?
We emphasized the need for accountability, but with fairness to the program.
This generated enthusiasm for a correct program model, an enthusiasm to perfect the model.
We achieved a crucial step towards ownership of the evaluation: a case for ex ante Evaluability Assessment.

17 Evaluation for all Contexts…..?
- Persisted with 'white papers' or paragraphs, or Q&As on rationale/explanation
- Were able to guide their participation
- Were able to develop evaluable models of the Program to share with the program manager stakeholders
The models:

18 Evaluation for all Contexts….?
Specified definitively the phases of the evaluation:
- Process Evaluation: if a reporting time is required before program/funding period completion
- Outcome or Goal-oriented Evaluation: at program completion/funding period completion
- Impact Evaluation
- Utilization Assessment

19 Evaluation for all Contexts….?
The evaluable models had:
- objectives, activities, anticipated outcomes, time to achieve, and indicators of achievement
- the contexts were not compromised
Developed with participation from science program staff

20 Evaluation for all Contexts….?
- We developed evaluation questions for the evaluable objectives specified
- We specified information sources
- We convinced program staff of the need for the primary-source information

21 So do we have Evaluation Design and Planning for Multiple Contexts?
We used the approach and Evaluability Assessment for simpler programs, with much success.
We introduced these for four complex programs:
- National Centers for Biomedical Computing
- The Roadmap Interdisciplinary Research and Research Training
- The Roadmap Clinical and Translational Science Awards
- The Roadmap Epigenomics Program
We progressed farthest with the last program.
Success = the approach in evaluation was accepted; programs were evaluated; evaluation information was informative and useful.

22 Evaluation Design and Planning for Multiple Contexts?
- We have consensus for the evaluation approach
- Buy-in from the Workgroup
- Participatory evaluation and ownership from science program managers
- No backward steps, and feasible evaluation questions
- Evaluation that works with the NIH Roadmap concept: it can accommodate the different study periods and objectives of new projects from re-issue, and stimulus-funded projects
The contexts are under control.

23 Anticipating Some Problems
If an 'integrated program' is what makes the difference, how is it to be measured?
- by opinion or objective indicators?
- ex post / ex ante?
- the earliest projects will be conducted before integration: can a 'begin' time be specified?
- can we show integration as a program objective or as value added?
- integration of components with different functions will be more challenging than for components performing the same functions
A tailored evaluation with external validity? We may have to 'pull back' from some evaluation questions.

24 Designing Evaluation for Complex Multiple-Component Research Programs
From our experience,
- ex ante Evaluability Assessment
- Evaluation by Objectives
- tailoring the evaluation
will get:
- ownership and participatory evaluation from the science program managers
- accountability with fairness to the program
- accommodation of the multiple contexts, without evaluation intervening in the contexts

25 Selected References
Rossi, P.H., & Freeman, H.E. (1993). Evaluation: A Systematic Approach (5th ed.). Newbury Park, CA: Sage Publications.
Rutman, L. (1980). Planning Useful Evaluations: Evaluability Assessment. Beverly Hills, CA: Sage Publications.
Roessner, D. (2000). Choice of measures: Quantitative and qualitative methods and measures in the evaluation of research. Research Evaluation, 8(2).
Thank you

