SOCI 4466 PROGRAM & POLICY EVALUATION
LECTURE #8
1. Evaluation projects
2. Take-home final
3. Questions?
2. Strategies for Impact Assessment

impact: the net effects of a program - the effects that can be uniquely attributed to the program intervention, controlling for the confounding effects of other variables/sources of change

impact assessments can be carried out at virtually any stage of the program - piloting, program design, implementation, monitoring, outcome evaluation

all impact assessments are comparative - they compare the net effect on those who received the program with some other group: the participants themselves at an earlier point, a control group, those in an alternative program, etc.
the strongest approach to assessing impact is the randomized experimental model (R = random assignment, O = observation/measurement, X = intervention):

Experimental:  R   O   X   O
Control:       R   O       O
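A minimal sketch of how this design yields a net-effect estimate, assuming hypothetical outcome data and a made-up true program effect (none of the numbers come from the lecture):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical targets: baseline outcome scores (invented values).
n = 200
baseline = rng.normal(50, 10, size=n)

# Random assignment -- the "R" in the design notation.
treated = rng.permutation(np.repeat([True, False], n // 2))

# Everyone changes a little over time (endogenous change / secular drift);
# the program adds a further 5 points, but only for the treated group.
natural_change = rng.normal(2, 3, size=n)
program_effect = 5.0
followup = baseline + natural_change + program_effect * treated

# Net effect estimate: difference in mean gain (O2 - O1) across groups.
gain = followup - baseline
net_effect = gain[treated].mean() - gain[~treated].mean()
print(f"estimated net effect: {net_effect:.2f} (true value: {program_effect})")
```

Because assignment is random, extraneous change processes affect both groups equally and wash out in the comparison, which is the core rationale for the design.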
prerequisites for assessing impacts:
1. clearly defined goals and objectives that can be operationalized
2. proper implementation of the intervention

note here the considerable difficulties evaluators face in ensuring these two criteria are met
the three criteria of causality:
1. correlation
2. temporal asymmetry (the cause precedes the effect)
3. non-spuriousness

note the difficulty in demonstrating that a program intervention is the “cause” of a specific outcome:
- the issue of causation versus correlation
- bias in the selection of targets
- “history”
- intervention (Hawthorne) effects
- poor measurement
Campbell versus Cronbach: perfect versus “good enough” impact assessments
- lack of experimental control
- inability to randomize
- “history”
- time/money constraints
- balancing the importance and impact of the program against practicality

gross versus net outcomes:

Gross outcome = Effects of intervention (net effect) + Effects of other processes (extraneous factors) + Design effects
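A hypothetical worked example (numbers invented for illustration): if participants’ outcomes rise by 12 points overall (gross outcome), but maturation and secular drift would have produced 5 of those points anyway, and measurement and sampling artifacts account for another 3, the net intervention effect is 12 − 5 − 3 = 4 points.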
extraneous confounding factors:
- uncontrolled selection (selection bias) - both agency and self-selection
- “deselection” processes - the drop-out problem
- endogenous change (naturally occurring change processes, like healing, learning)
- secular drift
- interfering effects (history)
- maturational and developmental effects
design effects:
- stochastic effects: chance fluctuations - the difference between real change and random change
- the importance of sampling here, allowing the use of inferential statistics
- statistical significance and statistical power:
  alpha: Type I error (false positive)
  beta: Type II error (false negative)
- the significance here of cell sizes and overall sample size
- note the differential concern with Type I or Type II error depending on program type (see the simulation sketch below)
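A minimal Monte Carlo sketch of the alpha/beta trade-off, assuming hypothetical effect and sample sizes (all numbers invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def rejection_rate(true_effect, n_per_group, alpha=0.05, trials=2000):
    """Share of simulated trials where a two-sample t-test rejects the null."""
    rejections = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, size=n_per_group)
        treated = rng.normal(true_effect, 1.0, size=n_per_group)
        _, p = stats.ttest_ind(treated, control)
        rejections += p < alpha
    return rejections / trials

# With no true effect, the rejection rate is the Type I error rate (~alpha).
print("Type I error rate:", rejection_rate(true_effect=0.0, n_per_group=50))

# With a real effect, the rejection rate is the power (1 - beta);
# note how power -- and hence beta -- depends on sample size.
for n in (20, 50, 200):
    print(f"power at n={n}:", rejection_rate(true_effect=0.4, n_per_group=n))
```

For a program where a false positive is costly, one would tighten alpha; where missing a real effect is the greater concern, one would reduce beta by enlarging the sample.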
design effects (continued):
- measurement reliability (qualitative/quantitative)
- measurement validity (domain, internal consistency, predictive, concurrent)
- experimenter/evaluator effects
- missing data
- sampling biases
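Internal consistency is commonly checked with Cronbach’s alpha; a minimal numpy sketch with made-up item scores (the lecture does not prescribe this particular statistic):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item scale answered by 6 respondents (invented data).
scores = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 5, 5, 4, 5],
    [3, 3, 2, 3, 3],
    [1, 2, 1, 2, 1],
    [4, 4, 5, 4, 4],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # high value -> consistent items
```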
choice of outcome measures:
- back to the measurement model, and reliability and validity
- must be feasible to employ, responsive, exhaustive, mutually exclusive, and, ideally, quantitative
- multiple measures are best
- direct versus indirect measures
isolating the effects of extraneous factors:
- randomized controls
- regression-discontinuity controls (pre-determined selection variables)
- matched constructed controls
- statistically-equated controls (see the sketch below)
- reflexive controls (pre-post)
- repeated-measures reflexive controls (e.g. panel)
- time-series reflexive controls
- generic controls (established norms, standards)
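A minimal sketch of statistically-equated controls via regression adjustment, using invented covariates and outcomes (one of several ways to equate groups statistically):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical non-randomized data: older targets are more likely to enrol,
# and age also affects the outcome -- a classic confounding setup.
n = 500
age = rng.normal(40, 10, size=n)
enrolled = (age + rng.normal(0, 10, size=n)) > 40   # self-selection on age
outcome = 20 + 0.5 * age + 3.0 * enrolled + rng.normal(0, 5, size=n)

# Naive comparison confounds the program effect with the age difference.
naive = outcome[enrolled].mean() - outcome[~enrolled].mean()

# Statistically-equated estimate: regress outcome on enrolment AND age,
# so the enrolment coefficient is adjusted for the age difference.
X = np.column_stack([np.ones(n), enrolled.astype(float), age])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive difference:  {naive:.2f}")
print(f"adjusted estimate: {coef[1]:.2f} (true effect: 3.0)")
```

The adjustment only removes bias from the covariates actually measured and modelled, which is why randomized controls remain the stronger design.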
Full versus partial-coverage programs:
- if a program is delivered to virtually all targets (full coverage), it is more difficult to find a design to assess impact, because no untreated comparison group exists (e.g. government-funded pension plans; OHIP)
- partial-coverage programs are not delivered to all targets, so there is an opportunity to identify reasonable control/comparison groups
[Exhibit 7-F here]
judgmental impact assessments:
- expert or “connoisseurial” assessments
- administrator assessments
- participants’ judgments

the use of qualitative versus quantitative data
inference validity issues:
- reproducibility of the evaluation design and results
- generalizability
- pooling evaluations: meta-analysis (see the sketch below)
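A minimal sketch of pooling evaluations with inverse-variance (fixed-effect) weighting, using invented effect sizes and standard errors from hypothetical studies:

```python
import numpy as np

# Hypothetical effect estimates and standard errors from five evaluations
# of the same program model (all numbers invented for illustration).
effects = np.array([0.30, 0.45, 0.10, 0.38, 0.25])
std_errs = np.array([0.15, 0.20, 0.12, 0.25, 0.10])

# Fixed-effect meta-analysis: weight each study by the inverse of its
# sampling variance, so more precise studies count for more.
weights = 1.0 / std_errs**2
pooled = (weights * effects).sum() / weights.sum()
pooled_se = np.sqrt(1.0 / weights.sum())

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect: {pooled:.2f}  (95% CI: {low:.2f} to {high:.2f})")
```

The pooled estimate is more precise than any single evaluation, which is the core appeal of meta-analysis for generalizing program effects.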