CAUSAL INFERENCE
Shwetlena Sabarwal
Africa Program for Education Impact Evaluation
Accra, Ghana, May 2010



Motivation

- The goal of any evaluation is to estimate the causal effect of intervention X on outcome Y.
- Example: does an education intervention improve test scores (learning)?
  - Reducing class size
  - Teacher training
  - In-school nutrition

Causation is not correlation!

- Any two variables (X and Y) can move together:
  1. Male teachers and students' academic performance.
  2. Health and income.
- But they may have nothing to do with each other.
- Other explanations?
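A frequent "other explanation" is a third variable that drives both. A minimal simulation (all numbers hypothetical) in which health and income are strongly correlated only because both depend on an unobserved common driver, wealth:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Unobserved common driver, e.g. household wealth.
    wealth = rng.normal(size=n)

    # Health and income each depend on wealth, but not on each other.
    health = 0.7 * wealth + rng.normal(size=n)
    income = 0.7 * wealth + rng.normal(size=n)

    # The two variables move together even though neither causes the other.
    print(np.corrcoef(health, income)[0, 1])  # roughly 0.33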

Evaluation problem: Potential Outcomes Approach

- The ideal way to evaluate the impact of an intervention: observe the same agent both in and out of the program, at the same point in time.
- But in reality, the only way we can evaluate the impact of an intervention: observe each agent either in or out of the program, at any point in time.

How to assess causality?

- Let Y = outcome of interest (test score) and P = participation in the program, with P = 1 if in and P = 0 if out.
- Formally, the program impact is:

      α = (Y | P=1) - (Y | P=0)

- Program impact: the difference between the outcome with the program, (Y | P=1), and the outcome without the program, (Y | P=0), for the same individuals.
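A minimal sketch of this definition (all numbers hypothetical): in a simulation we can create both potential outcomes for every student and compute α directly, something real data never allows:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000

    # Potential outcomes for each student: test score without the program
    # (y0) and with it (y1); assume the program adds 5 points for everyone.
    y0 = 60 + 10 * rng.normal(size=n)
    y1 = y0 + 5

    # alpha = (Y | P=1) - (Y | P=0), computable here only because the
    # simulation shows us both potential outcomes at once.
    alpha = (y1 - y0).mean()
    print(alpha)  # 5.0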

Another Way to Think of the Evaluation Problem

- The problem we face is that:
  - (Y | P=0) is not observed for program participants.
  - (Y | P=1) is not observed for non-participants.
- This is a missing data problem: the counterfactual is not observed.
  - What would have happened to the agent without the intervention?
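Continuing the hypothetical simulation above, the missing data problem appears the moment we record only what a real survey would see:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000

    y0 = 60 + 10 * rng.normal(size=n)  # outcome without the program
    y1 = y0 + 5                        # outcome with the program

    p = rng.integers(0, 2, size=n)     # 1 = participant, 0 = non-participant
    y_obs = np.where(p == 1, y1, y0)   # all a real survey ever records

    # For participants y0 is the missing counterfactual; for non-participants
    # y1 is. The individual impact y1 - y0 cannot be computed from (y_obs, p).
    print(y_obs[:3], p[:3])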

Solving the evaluation problem

- Generate the counterfactual: find a control or comparison observation for the agents facing the intervention.
- Criteria for selecting a comparison group:
  1. Observationally similar to the treatment group at baseline (and it would remain so after the intervention period, absent the program).
  2. Faces the same contemporaneous "shocks" as the treatment group.

“Counterfeit” Counterfactuals

1. Before and after: the same individual, before the treatment.
2. Non-participants:
   - Those who choose not to enroll in the program.
   - Those who were not offered the program.

“Counterfeit” Counterfactual Number 1: Before and After

- Consider how you might evaluate an agricultural assistance program.
- Suppose the program offers free or subsidized fertilizer.
- Compare rice yields before and after the program.
- Q: If you find no change in rice yield, can you conclude the program failed?
- What else changed? A drought? Unusually heavy rainfall?

Scholarship Program and School Enrollment, Before and After

[Diagram: enrollment Y plotted against time, with the before observation O at t-1 and points A and B at t.]

- The ultimate goal is to estimate α = (Y_i,t | P=1) - (Y_i,t | P=0).
- First, estimate the change for treated individuals: A - O = (Y_i,t | P=1) - (Y_i,t-1 | P=1).
- Second, estimate the counterfactual change: B - O = (Y_i,t | P=0) - (Y_i,t-1 | P=0).
- The before-after estimate of impact is α' = A - B.

Scholarship Program and School Enrollment, Before and After (continued)

[Diagram: the same plot, with an additional point C at t marking the correct counterfactual.]

- But the impact estimate A - B may misrepresent the true counterfactual.
- Suppose C is the correct counterfactual.
- Then the true impact of the intervention is α'' = A - C.
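To put numbers on this bias, a minimal sketch (all parameters hypothetical) in which enrollment would have risen even without the scholarship, so the before-after estimate A - B overstates the true impact A - C:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 1_000

    before = 0.50 + 0.02 * rng.normal(size=n)  # enrollment rates at t-1 (point O)

    # Enrollment would have risen by 5 points even without the scholarship
    # (the move from O to C); the program adds 3 points on top (C to A).
    trend, true_impact = 0.05, 0.03
    after = before + trend + true_impact

    # Before-after compares A with B (= O), so it picks up the trend too.
    print(after.mean() - before.mean())  # ~0.08 instead of the true 0.03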

“Counterfeit” Counterfactual Number 2: Non-Participants

- Compare non-participants to participants.
- Counterfactual: non-participant outcomes.
- Impact estimate: α_i = (Y_i,t | P=1) - (Y_j,t | P=0).
- Assumption: (Y_j,t | P=0) = (Y_i,t | P=0).
- Issue: why did the j's not participate?

Non-Participants Example 1: Job Training and Employment

- Compare the employment and earnings of individuals who sign up for training to those who do not.
- Who signs up? Those who are most likely to benefit, i.e. those with more ability.
- They would have higher earnings than non-participants even without job training.
- So non-participants are a poor estimate of the counterfactual.

Non-Participants Example 2: Health Insurance and Demand for Medical Care

- Compare the health care utilization (number of doctor visits) of those who got insurance to those who did not.
- But who buys insurance? Those who expect large medical expenditures (the unhealthy).
- Those who do not buy insurance have less need for medical care.
- Again, a poor estimate of the counterfactual.

The problem is selection bias

- Selection bias: people choose to participate in a program for specific reasons.
- The problem occurs when the reasons for participation are related to the outcome of interest:
  - Job training: ability and earnings.
  - Health insurance: health status and medical-care utilization.
- We cannot separately identify the impact of the program from these other factors.
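The job-training case in one short simulation (all parameters hypothetical): participation depends on ability, ability raises earnings, and the naive participant/non-participant comparison mixes the two:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000

    ability = rng.normal(size=n)

    # Higher-ability people are more likely to sign up for training.
    signed_up = rng.random(n) < 1 / (1 + np.exp(-2 * ability))

    true_effect = 2.0  # training raises earnings by 2 units
    earnings = 10 + 3 * ability + true_effect * signed_up + rng.normal(size=n)

    # Naive comparison = true effect + pre-existing ability gap in earnings.
    print(earnings[signed_up].mean() - earnings[~signed_up].mean())  # well above 2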

Need to know…

- We would need to know all the reasons why some people get the program and others do not, i.e. the reasons why individuals end up in the treatment versus the comparison group.
- If those reasons are correlated with the outcome, we cannot identify or separate the program's impact from other explanations for the differences in outcomes.

Possible Solutions…

- We need to understand the data generation process: how beneficiaries are selected and how benefits are assigned.
- Guarantee comparability of the treatment and control groups, so that the ONLY unaccounted-for difference between them is the intervention.
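One standard way to guarantee such comparability is random assignment. A minimal sketch (all numbers hypothetical) reusing the earnings setup above: when treatment is assigned at random, the simple difference in means recovers the true effect:

    import numpy as np

    rng = np.random.default_rng(4)
    n = 100_000

    ability = rng.normal(size=n)
    true_effect = 2.0

    # Random assignment: treatment is unrelated to ability or anything else,
    # so treatment and control groups are comparable by construction.
    treated = rng.random(n) < 0.5
    earnings = 10 + 3 * ability + true_effect * treated + rng.normal(size=n)

    print(earnings[treated].mean() - earnings[~treated].mean())  # ~2.0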