AADAPT Workshop Latin America Brasilia, November 16-20, 2009 Non-Experimental Methods Florence Kondylis

Aim: We want to isolate the causal effect of our interventions on our outcomes of interest
- Use rigorous evaluation methods to answer our operational questions
- Randomizing the assignment to treatment is the "gold standard" methodology (simple, precise, cheap)
- What if we really, really (really??) cannot use it?!
>> Where it makes sense, resort to non-experimental methods

- Find a plausible counterfactual
- Every non-experimental method is associated with a set of assumptions
- The stronger the assumption, the more doubtful our measure of the causal effect
  ▪ Question our assumptions
  ▪ Reality check, resort to common sense!

- Principal objective
  ▪ Increase maize production
- Intervention
  ▪ Distribution of fertilizer vouchers
  ▪ Non-random assignment
- Target group
  ▪ Maize producers with land over 1 ha and under 5 ha
- Main result indicator
  ▪ Maize yield

(Figure: before-after comparison of maize yield, splitting the observed change into (+) impact of the program and (+) impact of external factors)

(Figure: the same before-after comparison, labelling the full observed change as a (+) BIASED measure of the program impact)
"Before-after" doesn't deliver results we can believe in!

(Figure: "before" and "after" differences in maize yield between participants and non-participants)
>> What's the impact of our intervention?

Counterfactual: 2 options
1. Non-participant maize yield after the intervention, accounting for the "before" difference between participants and non-participants (the initial gap between groups)
2. Participant maize yield before the intervention, accounting for the before/after difference for non-participants (the influence of external factors)
>> Options 1 and 2 are equivalent (see the identity below)
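To see why the two options coincide, write Y for mean maize yield, with superscripts P/NP for participants and non-participants and subscripts for the year (this notation is mine, not the slides'):

$$
\underbrace{\big(Y^{P}_{2008}-Y^{NP}_{2008}\big)-\big(Y^{P}_{2007}-Y^{NP}_{2007}\big)}_{\text{option 1: "after" gap net of the initial gap}}
\;=\;
\underbrace{\big(Y^{P}_{2008}-Y^{P}_{2007}\big)-\big(Y^{NP}_{2008}-Y^{NP}_{2007}\big)}_{\text{option 2: participants' change net of external factors}}
$$

Both sides expand to the same four terms: this common quantity is the difference-in-differences estimate.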

Underlying assumption: Without the intervention, maize yields for participants and non-participants would have followed the same trend.
>> Graphic intuition coming…


NP2008 - NP2007 = 0.8
P2008 - P2007 = 0.6
Impact = (P2008 - P2007) - (NP2008 - NP2007) = 0.6 - 0.8 = -0.2

(P - NP)2008 = 0.5
(P - NP)2007 = 0.7
Impact = (P - NP)2008 - (P - NP)2007 = 0.5 - 0.7 = -0.2

Impact = -0.2
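A minimal sketch of the same arithmetic in Python. The yield levels are illustrative (they are not on the slides); they are chosen only so that the changes (+0.6 for participants, +0.8 for non-participants) and the gaps (0.7 in 2007, 0.5 in 2008) match the numbers above, and it shows that the two counterfactual options give the same estimate.

```python
# Difference-in-differences on the worked example; levels are hypothetical,
# only the before/after differences and group gaps match the slides.
p_2007, p_2008 = 1.0, 1.6    # participants' mean maize yield, before / after
np_2007, np_2008 = 0.3, 1.1  # non-participants' mean maize yield, before / after

# Option 2: participants' change net of the change for non-participants
impact_changes = (p_2008 - p_2007) - (np_2008 - np_2007)

# Option 1: "after" gap between groups net of the "before" gap
impact_gaps = (p_2008 - np_2008) - (p_2007 - np_2007)

print(round(impact_changes, 2), round(impact_gaps, 2))  # -0.2 -0.2
```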

- Negative impact: very counter-intuitive
  ▪ Increased input use should increase yield once external factors are accounted for!
- Assumption of same trend is very strong
  ▪ The 2 groups were, in 2007, producing at very different levels
➤ Question the underlying assumption of same trend!
➤ When possible, test the assumption of same trend with data from previous years (see the sketch below)
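One way to act on the last point, sketched in Python with hypothetical pre-program group means (the years, numbers, and variable names are illustrative, not from the slides): fit a linear trend to each group's pre-2008 yields and compare the slopes.

```python
# Hedged sketch of a pre-trend check using hypothetical pre-program yields.
import numpy as np

years = np.array([2005, 2006, 2007])
yield_p = np.array([0.85, 0.92, 1.00])   # participants' mean yield (t/ha), illustrative
yield_np = np.array([0.20, 0.26, 0.30])  # non-participants' mean yield, illustrative

# Slope of a linear fit = average yearly change before the program
slope_p = np.polyfit(years, yield_p, 1)[0]
slope_np = np.polyfit(years, yield_np, 1)[0]

print(f"pre-program trend, participants:     {slope_p:.3f} t/ha per year")
print(f"pre-program trend, non-participants: {slope_np:.3f} t/ha per year")
# A large gap between the two slopes casts doubt on the same-trend assumption
# underlying the difference-in-differences estimate.
```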

>> Reject the counterfactual assumption of same trends!


NP2008 - NP2007 = 0.2
Impact = (P2008 - P2007) - (NP2008 - NP2007) = 0.6 - 0.2 = +0.4

Impact = +0.4

- Positive impact: more intuitive
- Is the assumption of same trend reasonable?
➤ Still need to question the counterfactual assumption of same trends!
➤ Use data from previous years

>> Seems reasonable to accept the counterfactual assumption of same trend?!

- Assuming the same trend is often problematic
- No data to test the assumption
- Even if trends are similar in the past…
  ▪ Were they always similar (or are we lucky)?
  ▪ More importantly, will they always be similar?
  ▪ Example: another project intervenes in our non-participant villages…

- What to do? >> Be descriptive!
- Check similarity in observable characteristics (see the balance-check sketch below)
  ▪ If not similar along observables, chances are trends will differ in unpredictable ways
>> Still, we cannot check what we cannot see… and unobservable characteristics (ability, motivation, etc.) might matter more than observable ones
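A minimal sketch of such a descriptive check, assuming a small dataset with hypothetical covariates (land_ha, age); it compares group means through normalized differences.

```python
# Hedged balance-check sketch; the data and variable names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "participant": [1, 1, 1, 0, 0, 0],
    "land_ha":     [2.1, 3.4, 1.8, 4.5, 1.2, 2.9],
    "age":         [35, 42, 29, 51, 38, 45],
})

covariates = ["land_ha", "age"]
group_means = df.groupby("participant")[covariates].mean()

# Normalized difference: gap in group means scaled by the overall std. deviation.
norm_diff = (group_means.loc[1] - group_means.loc[0]) / df[covariates].std()
print(norm_diff)
# A commonly used rule of thumb treats absolute values above roughly 0.25
# as a warning sign of imbalance on that observable.
```

Large imbalances on observables are a warning that the two groups probably differ on unobservables too.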

Match participants with non-participants on the basis of observable characteristics
Counterfactual:
- Matched comparison group
- Each program participant is paired with one or more similar non-participant(s) based on observable characteristics
  >> On average, participants and non-participants share the same observable characteristics (by construction)
- Estimate the effect of our intervention by using difference-in-differences

Underlying counterfactual assumptions:
- After matching, there are no differences between participants and non-participants in terms of unobservable characteristics
AND/OR
- Unobservable characteristics do not affect the assignment to the treatment, nor the outcomes of interest

- Design a control group by establishing close matches in terms of observable characteristics
- Carefully select the variables along which to match participants to their control group
- So that we only retain:
  ▪ Treatment group: participants that could find a match
  ▪ Control group: non-participants similar enough to the participants
>> We trim out a portion of our treatment group! (see the matching sketch below)
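To make the trimming concrete, here is a minimal nearest-neighbour matching sketch in Python. The baseline scores and the caliper are hypothetical; matching is on a single score and done with replacement. Participants with no non-participant within the caliper are dropped, which is exactly the trimmed-out portion described above.

```python
# Hedged sketch: one-to-one nearest-neighbour matching with a caliper.
import numpy as np

score_p = np.array([0.30, 0.45, 0.60, 0.95])   # participants' baseline scores (hypothetical)
score_np = np.array([0.28, 0.47, 0.55, 0.70])  # non-participants' baseline scores (hypothetical)
caliper = 0.10                                 # largest acceptable score gap

matched_pairs = []
for i, s in enumerate(score_p):
    gaps = np.abs(score_np - s)
    j = int(np.argmin(gaps))          # closest non-participant (with replacement)
    if gaps[j] <= caliper:
        matched_pairs.append((i, j))  # keep the pair
    # otherwise participant i is trimmed out: no comparable non-participant

print(matched_pairs)  # [(0, 0), (1, 1), (2, 2)] -- the participant at 0.95 finds no match
```

The outcomes of the retained pairs would then be compared, for example with the difference-in-differences estimator from the earlier slides.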

- In most cases, we cannot match everyone
- Need to understand who is left out
- Example:
(Figure: distributions of participants and non-participants by wealth and matching score, marking the matched individuals and the portion of the treatment group trimmed out)

Advantage of the matching method:
- Does not require randomization

Disadvantages:
- The underlying counterfactual assumption is not plausible in all contexts, and it is hard to test
  ▪ Use common sense, be descriptive
- Requires very high quality data:
  ▪ Need to control for all factors that influence program placement and the outcome of interest
- Requires a significantly large sample size to generate the comparison group
- Cannot always match everyone…

- Randomized controlled trials require minimal assumptions and produce intuitive estimates (sample means!)
- Non-experimental methods require assumptions that must be carefully tested
  ▪ More data-intensive
  ▪ Not always testable
- Get creative: mix and match types of methods!
