1 An introduction to Impact Evaluation (IE) for HIV/AIDS Programs. March 12, 2009, Cape Town. Léandre Bassolé, ACTafrica, The World Bank
2 Background
Initial response to the AIDS epidemic: responding to a crisis.
– Build political commitment
– Establish institutions
– Scale up coverage of HIV prevention, treatment and mitigation services
Now we need to know what really works.
3 Traditional M&E and Impact Evaluation
M&E (monitoring and process evaluation): descriptive analysis
▫ Is the program being implemented efficiently?
▫ Is the targeted population being reached?
▫ Are outcomes moving in the right direction?
Impact Evaluation: causal analysis
▫ What was the effect of the program on outcomes?
▫ How would outcomes change under alternative program designs?
▫ Is the program cost-effective?
4 Why does impact evaluation matter?
To know whether the program had an impact and the average size of that impact:
– Assess whether policies work
– Assess the net benefits/costs of the program
– Assess the distribution of gains and losses
5 What do we mean by “impact evaluation”?
The IE problem:
Impact = the difference between the relevant outcome indicator with the program and the same indicator without it.
However, we can never observe someone in two different states of nature at the same time. While the post-intervention indicator is observed, its value in the absence of the program is not: it is a counterfactual.
So all IE deals with overcoming this missing-data problem, and it requires counterfactual analysis.
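One common way to make the missing-data problem concrete, using standard potential-outcomes notation (my addition, not from the slide): let Y_i(1) be individual i's outcome with the program and Y_i(0) the outcome without it. Then

\[
\text{Impact}_i = Y_i(1) - Y_i(0), \qquad \text{ATE} = E\big[\,Y_i(1) - Y_i(0)\,\big],
\]

but for each individual only one of Y_i(1) and Y_i(0) is ever observed; the other is the counterfactual that the evaluation design must supply.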
6 What we need
Given the problem of missing data (each individual has only one existence), we compare two groups.
We need a counterfactual: a control/comparison group that shows what would have happened without the program, allowing us to attribute any change in the participant group to the intervention (causality).
7 Common IE practices
1. Before and after
2. Participants vs. non-participants
BUT with these comparisons it is difficult to assess the TRUE AVERAGE CAUSAL EFFECT.
How do we solve the FUNDAMENTAL PROBLEM OF EVALUATION?
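A brief note on why the naive participants vs. non-participants comparison can mislead (my addition; this is the standard selection-bias decomposition, not from the slides). Writing D = 1 for participants and D = 0 for non-participants:

\[
\underbrace{E[Y \mid D=1] - E[Y \mid D=0]}_{\text{naive comparison}}
 = \underbrace{E[\,Y(1) - Y(0) \mid D=1\,]}_{\text{effect on participants (ATT)}}
 + \underbrace{E[\,Y(0) \mid D=1\,] - E[\,Y(0) \mid D=0\,]}_{\text{selection bias}}
\]

The naive comparison equals the true effect only if the selection-bias term is zero, i.e., if participants and non-participants would have had the same outcomes without the program. Targeting and voluntary participation (next slide) generally break this.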
8 Comparison group issues
Two central problems:
– Programs are targeted: program areas will differ in observable and unobservable ways precisely because the program intended this.
– Individual participation is (usually) voluntary: participants will differ from non-participants in observable and unobservable ways.
9 Tools to identify a counterfactual
Randomized designs
Quasi-experimental designs:
– Matching
– Instrumental variables
– Regression discontinuity
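A minimal sketch (my addition, not from the slides) of why randomization is the benchmark tool: when assignment is random, treatment status is independent of potential outcomes, so a simple difference in means recovers the average treatment effect. All data below are simulated for illustration; the true effect is set to 5 by construction.

# Simulated illustration of a randomized design (assumption: fabricated data).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Potential outcomes: y0 = outcome without the program, y1 = outcome with it.
y0 = rng.normal(loc=50, scale=10, size=n)
y1 = y0 + 5                              # true individual effect of 5 units

# Randomized assignment: treatment is independent of potential outcomes.
d = rng.integers(0, 2, size=n)
y_observed = np.where(d == 1, y1, y0)    # only one potential outcome is ever seen

# Difference in means between treated and control estimates the ATE.
ate_hat = y_observed[d == 1].mean() - y_observed[d == 0].mean()
print(f"True ATE = 5.0, estimated ATE = {ate_hat:.2f}")

If participation were instead targeted or voluntary, d would be correlated with y0, and the same difference in means would mix the true effect with selection bias; that is the gap that matching, instrumental variables, and regression discontinuity designs try to close.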
10 Some general principles to consider when planning an IE
– Government ownership: what matters is institutional buy-in
– Relevance and applicability: asking the right questions
– Flexibility and adaptability
– Horizon matters
11 Summing up: Methods/Practicalities
– Randomization is the “gold standard”
– Be flexible, be creative: use the context
– IE requires good monitoring, and monitoring will help you understand the effect size
12 Summing up: Methods/Practicalities
Making IE work for you may require a change in the culture of project design and implementation: the aim is to maximize the evidence base upon which policy decisions can be made and so improve the chances of success.
Impact evaluation is more than a tool: it is an analytical framework for policy development.
13 THANK YOU