Global Workshop on Development Impact Evaluation in Finance and Private Sector
Rio de Janeiro, June 6-10, 2011
Mattea Stein

Quasi-Experimental Methods I

What we know so far
Aim: We want to isolate the causal effect of our interventions on our outcomes of interest
- Use rigorous evaluation methods to answer our operational questions
- Randomizing the assignment to treatment is the "gold standard" methodology (simple, precise, cheap)
- What if we really, really (really??) cannot use it?!
>> Where it makes sense, resort to non-experimental methods

Non-experimental methods
- Can we find a plausible counterfactual? A natural experiment?
- Every non-experimental method is associated with a set of assumptions
- The stronger the assumptions, the more doubtful our measure of the causal effect
- Question our assumptions
  ▪ Reality check, resort to common sense!

Example: Matching Grants Program
- Principal objective
  ▪ Increase firm productivity and sales
- Intervention
  ▪ Matching grants distribution
  ▪ Non-random assignment
- Target group
  ▪ SMEs with 1-10 employees
- Main result indicator
  ▪ Sales

Illustration: Matching Grants – Randomization
(Figure annotations: (+) impact of the program; (+) impact of external factors.)

Illustration: Matching Grants – Difference-in-Differences
(Figure annotations: "before" difference between participants and non-participants; "after" difference between participants and non-participants.)
>> What's the impact of our intervention?

Difference-in-Differences Identification Strategy (1)
Counterfactual: two formulations that say the same thing
1. Non-participants' sales after the intervention, accounting for the "before" difference between participants and non-participants (the initial gap between the groups)
2. Participants' sales before the intervention, accounting for the before/after difference for non-participants (the influence of external factors)
- 1 and 2 are equivalent
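Both formulations collapse into the same double difference. Written out (notation added here for clarity, not from the original slides), with Ȳ denoting average sales for participants (P) and non-participants (NP):

$$
\widehat{\text{Impact}}_{DiD}
= \big(\bar{Y}^{P}_{after} - \bar{Y}^{NP}_{after}\big) - \big(\bar{Y}^{P}_{before} - \bar{Y}^{NP}_{before}\big)
= \big(\bar{Y}^{P}_{after} - \bar{Y}^{P}_{before}\big) - \big(\bar{Y}^{NP}_{after} - \bar{Y}^{NP}_{before}\big)
$$

The first grouping is formulation 1 (adjusting the after-comparison for the initial gap); the second is formulation 2 (adjusting the before/after change for external factors).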

Data – Example
(Table: average sales of participants (P) and non-participants (NP) in 2007 and 2008; summarized as differences on the next slide.)

"After" difference: P2008 - NP2008 = 1.4
"Before" difference: P2007 - NP2007 = 1.0
Impact = 1.4 - 1.0 = 0.4

Difference-in-Differences Identification Strategy (2)
Underlying assumption: without the intervention, sales for participants and non-participants would have followed the same trend.
>> Graphic intuition coming…
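In practice the same double difference is typically estimated as the coefficient on a group-by-period interaction in a regression. A minimal sketch, assuming hypothetical firm-level panel data with columns sales, participant, year, and firm_id (file and variable names are illustrative, not from the presentation):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per firm and year.
df = pd.read_csv("matching_grants_panel.csv")   # assumed file name
df["post"] = (df["year"] >= 2008).astype(int)   # intervention assumed to start in 2008

# The coefficient on participant:post is the difference-in-differences estimate.
did = smf.ols("sales ~ participant * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm_id"]}  # cluster by firm
)
print(did.params["participant:post"])
```

With the numbers from the example, this coefficient would come out to 0.4.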

(Figure: graphic intuition for the same-trend assumption. The "before" difference of 1.0 (P2007 - NP2007) is carried forward as the counterfactual; the "after" difference of 1.4 (P2008 - NP2008) exceeds it by the estimated impact of 0.4.)

(Figure: if the true counterfactual trend differs across groups, an estimated impact of 0.4 can hide a true impact of -0.3.)
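Put differently (notation added here, not from the slides), the DiD estimate equals the true impact plus the gap between the two groups' counterfactual (no-program) trends:

$$
\widehat{\text{Impact}}_{DiD} = \text{true impact} + \big(\Delta \bar{Y}^{P,0} - \Delta \bar{Y}^{NP,0}\big)
$$

In this illustration the trend gap is 0.4 - (-0.3) = 0.7, which is why the estimate looks positive even though the true impact is negative.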

Summary
- The assumption of same trend is very strong
- The 2 groups were, in 2007, producing at very different levels
➤ Question the underlying assumption of same trend!
➤ When possible, test the assumption of same trend with data from previous years

Questioning the assumption of same trend: use pre-program data
>> Reject the counterfactual assumption of same trends!

Questioning the assumption of same trend: use pre-program data
>> Seems reasonable to accept the counterfactual assumption of same trend.
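A simple way to run this check in code is a placebo regression on the pre-program years: if the same-trend assumption is plausible, the participant-by-year interactions should be close to zero before the program starts. A minimal sketch, reusing the hypothetical panel and column names from above:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("matching_grants_panel.csv")  # assumed file name
pre = df[df["year"] < 2008]                    # pre-program years only

# Participant-by-year interactions capture any divergence in pre-program trends.
placebo = smf.ols("sales ~ participant * C(year)", data=pre).fit(
    cov_type="cluster", cov_kwds={"groups": pre["firm_id"]}
)
print(placebo.summary())
```

Large or statistically significant interaction terms in the pre-program years are a warning sign against the same-trend assumption.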

Caveats (1)
- Assuming the same trend is often problematic
- No data to test the assumption
- Even if trends are similar the previous year…
  ▪ Were they always similar (or are we lucky)?
  ▪ More importantly, will they always be similar?
  ▪ Example: another project intervenes in our non-participant firms…

Caveats (2)
- What to do? >> Be descriptive!
- Check similarity in observable characteristics
  ▪ If the groups are not similar along observables, chances are trends will differ in unpredictable ways
>> Still, we cannot check what we cannot see… and unobservable characteristics (ability, motivation, patience, etc.) might matter more than observable ones
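One concrete way to "be descriptive" is a balance table of baseline observables. A minimal sketch with hypothetical covariate names (employees, firm_age, sales_2007) and an assumed baseline file:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("matching_grants_baseline.csv")      # assumed file name
covariates = ["employees", "firm_age", "sales_2007"]  # hypothetical observables

# Compare baseline means of participants and non-participants.
for var in covariates:
    p = df.loc[df["participant"] == 1, var].dropna()
    c = df.loc[df["participant"] == 0, var].dropna()
    t, pval = stats.ttest_ind(p, c, equal_var=False)
    print(f"{var}: participants={p.mean():.2f}  non-participants={c.mean():.2f}  p={pval:.3f}")
```

Large baseline gaps do not prove that trends will differ, but they make the same-trend assumption harder to defend.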

Matching Method + Difference-in-Differences (1)
Match participants with non-participants on the basis of observable characteristics.
Counterfactual:
- Matched comparison group
- Each program participant is paired with one or more similar non-participant(s) based on observable characteristics
>> On average, matched participants and non-participants share the same observable characteristics (by construction)
- Estimate the effect of our intervention using difference-in-differences

Matching Method (2)
Underlying counterfactual assumptions:
- After matching, there are no differences between participants and non-participants in terms of unobservable characteristics
AND/OR
- Unobservable characteristics affect neither the assignment to treatment nor the outcomes of interest

How do we do it?
- Design a control group by establishing close matches in terms of observable characteristics
- Carefully select the variables along which to match participants to their control group
- So that we only retain
  ▪ Treatment group: participants that could find a match
  ▪ Comparison group: non-participants similar enough to the participants
>> We trim out a portion of our treatment group!
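A common way to implement this is propensity-score matching: estimate each firm's probability of participating from observables, then pair each participant with its nearest non-participant and keep only reasonably close matches. A minimal sketch under the same hypothetical data and column names as above (the caliper value is arbitrary):

```python
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("matching_grants_baseline.csv")  # assumed file name

# 1. Estimate the propensity score from observable characteristics.
ps = smf.logit("participant ~ employees + firm_age + sales_2007", data=df).fit()
df["pscore"] = ps.predict(df)

treated = df[df["participant"] == 1].copy()
controls = df[df["participant"] == 0].copy()

# 2. Match each participant to the nearest non-participant on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(controls[["pscore"]])
dist, idx = nn.kneighbors(treated[["pscore"]])
treated["match_id"] = controls.index.values[idx[:, 0]]

# 3. Trim participants without a sufficiently close match (common support).
caliper = 0.05  # illustrative threshold
matched = treated[dist[:, 0] <= caliper]
print(f"Matched {len(matched)} of {len(treated)} participants")
```

The matched pairs can then feed the same difference-in-differences comparison as before.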

Implications
- In most cases, we cannot match everyone
- Need to understand who is left out
(Figure: example of matching scores against wealth for participants and non-participants; only the overlapping "matched individuals" region is kept, and the portion of the treatment group outside it is trimmed out.)

Conclusion (1)
- Advantage of the matching method:
  ▪ Does not require randomization

Conclusion (2)
Disadvantages:
- The underlying counterfactual assumption is not plausible in all contexts, and is hard to test
  ▪ Use common sense, be descriptive
- Requires very high-quality data:
  ▪ Need to control for all factors that influence program placement and the outcome of choice
- Requires a significantly large sample size to generate the comparison group
- Cannot always match everyone…

Summary
- Randomized controlled trials require minimal assumptions and produce intuitive estimates (sample means!)
- Non-experimental methods require assumptions that must be carefully tested
  ▪ More data-intensive
  ▪ Not always testable
- Get creative:
  ▪ Mix and match types of methods!
  ▪ Address relevant questions with relevant techniques

Thank you
Financial support from the Bank Netherlands Partnership Program (BNPP), Bovespa, CVM, Gender Action Plan (GAP), Belgium & Luxemburg Poverty Reduction Partnerships (BPRP/LPRP), Knowledge for Change Program (KCP), Russia Financial Literacy and Education Trust Fund (RTF), and the Trust Fund for Environmentally & Socially Sustainable Development (TFESSD) is gratefully acknowledged.