Measuring causal impact 2.1. What is impact?

Presentation transcript:

Measuring causal impact 2.1

What is impact?
The impact of a program is the difference in outcomes caused by the program. It is the difference between what happened and what would have happened without the program. But we never observe both conditions:
– What happened with the program
– What would have happened without the program
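A compact way to state this is the potential-outcomes notation commonly used in impact evaluation (the notation is an addition here, not taken from the slides):

\[
\text{Impact}_i = Y_i(1) - Y_i(0)
\]

where \(Y_i(1)\) is person \(i\)'s outcome with the program and \(Y_i(0)\) the outcome without it. For any one person only one of the two terms is ever observed; the other is the counterfactual, which the next slide introduces.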

Counterfactual
The counterfactual is what would have happened if the program or policy had not been implemented. We never directly observe the counterfactual:
– We can never see the same person with and without the program at the same time
We have to mimic the counterfactual.

[Figure: Measuring impact. The primary outcome is plotted over time; after the intervention the observed outcome diverges from the counterfactual, and the gap between the two is the impact.]

Mimicking the counterfactual
Because there is no exact replica of a single person, we look for a group of people who, on average, are the same as participants would have been without the program.
– Do participants prior to the program make a good counterfactual?
– Do those who choose not to participate make a good counterfactual?
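In the same notation as above (again an illustrative sketch rather than slide content), a comparison group mimics the counterfactual when its average untreated outcome equals what participants would have experienced without the program:

\[
E[\,Y(0) \mid \text{comparison group}\,] = E[\,Y(0) \mid \text{participants}\,]
\]

The two questions on the slide ask whether this condition plausibly holds for participants before the program and for those who chose not to participate; the selection issues discussed next explain why it often does not.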

Selection
Programs are started in specific places and at specific times for a reason: they are selected. People choose (select) whether or not to participate. This selection process means that participants and non-participants are, on average, different, and not just because of the program.
Non-participants are therefore not always a good counterfactual. And because people select when to take up a program, participants prior to the program may not be a good counterfactual either.

Selection bias
If we compare outcomes for those with and without the program, the difference will have two parts:
i) the part caused by the program
ii) the part caused by underlying differences between participants and non-participants
Our estimate of impact will not, on average, equal the true impact of the program unless ii) is zero, that is, unless there is no selection bias.
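The two parts can be written out explicitly. This is a standard decomposition in potential-outcomes notation; the symbols are an addition here, not taken from the slides:

\[
\underbrace{E[Y \mid \text{participants}] - E[Y \mid \text{non-participants}]}_{\text{observed difference}}
= \underbrace{E[Y(1) - Y(0) \mid \text{participants}]}_{\text{i) impact on participants}}
\;+\; \underbrace{E[Y(0) \mid \text{participants}] - E[Y(0) \mid \text{non-participants}]}_{\text{ii) selection bias}}
\]

The naive with/without comparison recovers the true impact only when the selection-bias term is zero, i.e. when non-participants would, on average, have had the same outcomes as participants would have had without the program.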

Example: Testing the impact of training
[Figure: wages plotted over time; earnings dip just before workers enter the skills program, the "Ashenfelter dip", and recover afterwards.]
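A small simulation can make the problem concrete. This is an illustrative sketch, not an analysis from the presentation: it assumes workers enroll in training after a temporary negative wage shock (the dip), that the program's true effect is exactly zero, and then applies the two naive comparisons discussed above.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Each worker has a stable baseline wage plus period-specific noise.
baseline = rng.normal(500, 50, n)
pre_shock = rng.normal(0, 40, n)            # temporary shock in the pre-program period
wage_pre = baseline + pre_shock + rng.normal(0, 10, n)

# Selection: workers whose wages dipped are the ones who enroll (Ashenfelter dip).
enrolled = wage_pre < 480

# True program effect is zero; the shock fades, so post-program wages revert to baseline.
wage_post = baseline + rng.normal(0, 10, n)

# Naive before/after comparison for participants.
before_after = wage_post[enrolled].mean() - wage_pre[enrolled].mean()

# Naive participant vs non-participant comparison after the program.
with_without = wage_post[enrolled].mean() - wage_post[~enrolled].mean()

print("True effect:                      0.0")
print(f"Before/after estimate:          {before_after:6.1f}")
print(f"Participant/non-participant:    {with_without:6.1f}")

Even with a true effect of zero, both naive estimates are biased: the before/after comparison is pushed upward because the dip recovers on its own, and the with/without comparison is pushed downward because those who enrolled had lower baseline wages to begin with.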