1
Summary Slide PROGRAM EVALUATION PA430 – Week 7 – 2/29-3/1/00
2
PROGRAM EVALUATION
A systematic assessment of the operation and/or outcomes of a program or policy
Comparison against a set of explicit or implicit standards
A means of contributing to the improvement of the program or policy
3
Key Elements
Systematic assessment
Focus is the operations and outcomes of the program
Standards for comparison
Purpose is to contribute to improvement of the program/policy
4
Definitions
Program – a specific government activity (examples: Head Start, toxic waste cleanup, immunization of children)
Policy – a broad, officially recognized statement of objectives tied to a variety of government activities (example: health and safety of medical products)
5
Main questions of evaluation research
Outcome evaluation – Is the program reaching the goals it was meant to accomplish? Includes both intended and unintended results.
Impact evaluation – What happens to participants as a result of the program?
6
Main questions of evaluation research
Process evaluation – What is the program actually doing? What kinds of services are being provided? Are clients satisfied with the program?
Helps in understanding outcome data.
7
Reasons for Evaluation Research
Formative – to help improve the design and delivery of the program/activity
Summative – to provide information at the end of a program (or at least one cycle of it) about whether it should be continued or modified
8
How is evaluation research different from other types of research?
Utility – it is intended for use by program administrators
Program-derived questions – questions are derived from the concerns of program communities
Judgmental quality – tends to compare “what is” with “what should be”
9
Action setting – must be field-based research
Goals of evaluation sometimes conflict with the goals of the program or its administrators
Role conflicts – it is difficult for program administrators to remove themselves from commitment to and positive view of their program
10
Publication – basic research is usually published in academic journals
Most evaluation research is not published – it remains “in-house”
Allegiance – the research has a dual role: both program improvement and contribution to understanding of a particular policy area or of evaluation research
11
Who performs evaluation research?
Three basic ways:
Assign the task to a staff member of the agency
Hire an outside, independent evaluation or consulting firm (sometimes a university researcher)
Open bidding to all potential evaluators (often through an RFP – request for proposal)
Periodic evaluation of programs is often required as a condition of a grant (by either a public or private granting agency)
12
Inside vs. Outside Evaluation
Is the evaluator a member of the agency staff or not?
Concerns over the evaluator’s perspective:
Do they have a stake in the study results?
Are they too removed from the program (too “ivory tower”)?
Is the evaluator competent?
13
Objectivity – what are the possible biases of the evaluator?
Program knowledge – how well does the evaluator understand the program (process, goals, etc.)?
Potential for utilization – evaluators often must take an active role in moving the evaluation from report to action (implementation)
14
Step 1: Understand the Program
Begin with a general knowledge of the field but quickly develop an in-depth understanding of the specific program. How?
Research by others in the general and specific area
Written materials of the program
Field research, including interviews
15
Why this is important:
To develop a sense of the issues – separate the wheat from the chaff
To formulate relevant, incisive questions
To interpret the evidence/findings
To make sound recommendations for program change or continuation
To write a thorough, usable report
16
Step 1: Understand the Program
Develop a characterization of the program (reality vs. the illusion):
read previous evaluations
talk to program directors
observation
data-based inquiry
What is the program trying to achieve?
Begin with official goals (if available)
get other, more contemporary information from program managers
communicate with clients
17
Step 1: Understand the Program
How does the program expect to achieve its goals?
Not just did the program work, but what made it work
Examine the program’s theories of change – the set of beliefs that underlie action, an explanation of the causal links between program and outcomes
18
Step 2: Plan the Evaluation
Identify key questions for study
Decide on the analysis method: quantitative, qualitative, or both
Develop measures to answer the questions
Plan data collection to operationalize key measures
Plan an appropriate research design
Collect and analyze data
Write and disseminate the report
Promote appropriate use of the results
19
Step 2: Plan the Evaluation
Additional considerations:
long-term vs. short-term study
questions should examine both intended and unintended impacts of the program
practicalities (clout of stakeholders, uncertainties, decision timetable)
advisory committee
ethical issues
20
Step 3: Develop measures
Desired outcomes:
effects on persons served
effects on agencies
effects on larger systems (networks)
effects on the public
Unintended outcomes – both positive and negative
Interim markers of progress toward outcomes – the real changes desired may lie far in the future
21
Step 3: Develop measures
Components of program implementation (program processes):
how the program is carried out
how the program operates and for whom
program quality
Resources, inputs, and environment:
budgets, staff, location
management, years of operation
client eligibility standards
22
Step 4: Collect data
Data sources:
existing data
informal interviews
observations
formal interviews, written questionnaires
program records
data from outside institutions
23
Step 5: Select a research design
Identify:
people/units to be studied
how study units will be selected
kinds of comparisons to be drawn
timing of the investigation
Outcome studies – the underlying logic is:
compare program participants before and after receiving the program
compare participants with non-participants
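To make that underlying logic concrete, here is a minimal sketch in Python of the two comparisons listed above: the same participants before and after the program, and participants versus non-participants. The outcome scores are invented for illustration and scipy is assumed to be available; this is not part of the original course material.

```python
# Minimal sketch (hypothetical data) of the two comparisons behind outcome studies.
from scipy import stats

# Invented outcome scores for illustration only.
participants_before = [52, 48, 61, 55, 50, 58, 47, 53]
participants_after = [60, 55, 66, 62, 54, 65, 52, 59]
non_participants = [51, 49, 58, 54, 50, 56, 48, 52]

# Comparison 1: the same participants before vs. after the program (paired test).
t_change, p_change = stats.ttest_rel(participants_after, participants_before)
print(f"before/after change: t={t_change:.2f}, p={p_change:.3f}")

# Comparison 2: participants vs. non-participants after the program
# (independent-samples test across the two groups).
t_group, p_group = stats.ttest_ind(participants_after, non_participants)
print(f"participants vs. non-participants: t={t_group:.2f}, p={p_group:.3f}")
```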
24
Step 5: Select a research design
Informal designs:
self-evaluation by administrators, staff, and clients
expert judgment (outsider knowledge)
Formal designs:
post-test only
pre-test, post-test
comparison group
time series designs
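As an illustration of the pre-test/post-test comparison-group design named above, the sketch below computes a simple difference-in-differences from hypothetical group means. The numbers and the two-group setup are assumptions for illustration, not results from any actual program.

```python
# Minimal sketch of a pre-test/post-test comparison-group design, computed as a
# simple difference-in-differences on invented group means (illustration only).
group_means = {
    "program": {"pre": 50.0, "post": 61.0},
    "comparison": {"pre": 51.0, "post": 54.0},
}

program_change = group_means["program"]["post"] - group_means["program"]["pre"]
comparison_change = group_means["comparison"]["post"] - group_means["comparison"]["pre"]

# The comparison group's change approximates what would have happened without the
# program; the gap between the two changes is the estimated program effect.
estimated_effect = program_change - comparison_change
print(f"program change:    {program_change:+.1f}")
print(f"comparison change: {comparison_change:+.1f}")
print(f"estimated effect:  {estimated_effect:+.1f}")
```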
25
Step 6: Analyze and interpret data
Whether the data are quantitative or qualitative, the goal is to convert a mass of raw data into a coherent, organized report.
Types of analytical strategy:
describe, count
factor, cluster (divide into parts)
compare, find commonalities
covariation
tell the story
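The first several strategies above can be illustrated with a small, hypothetical example using pandas; the column names and survey values are invented for illustration and are not drawn from the course material.

```python
# Minimal sketch of the analytical strategies above (describe/count, divide into
# parts, compare, covariation) on invented client-survey records.
import pandas as pd

records = pd.DataFrame({
    "site": ["A", "A", "B", "B", "B", "C", "C", "A"],
    "satisfaction": [4, 5, 3, 4, 2, 5, 4, 3],  # 1-5 client rating (hypothetical)
    "visits": [2, 6, 1, 4, 1, 7, 5, 3],        # program contacts (hypothetical)
})

# Describe and count: overall distribution of ratings and caseload by site.
print(records["satisfaction"].describe())
print(records["site"].value_counts())

# Divide into parts and compare: mean satisfaction at each site.
print(records.groupby("site")["satisfaction"].mean())

# Covariation: do more program contacts go together with higher satisfaction?
print(records["visits"].corr(records["satisfaction"]))
```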
26
Step 7: Write the report
What should the report look like? It depends!!
May require more than one report (different audiences)
A comprehensive report may be required by the sponsor of the agency/evaluation – it may be too much for most audiences
Executive summary
27
Step 7: Write the report
Possible topics (full report):
summary of study results
findings, implications, recommendations
the problem with which the program deals
nature of the program
context (history, sponsorship, setting)
beneficiaries
staff
how the study was conducted
suggestions for further evaluation
28
Step 7: Write the report
Other report types:
summary report for clients and the public – short, focused on highlights
executive summary – broader, useful for a variety of audiences
Ultimate goal: a report that is clear, timely, generalizable to other similar programs, inclusive of the organization's views, and of high utility