Development Impact Evaluation Initiative
Innovations in Investment Climate Reforms
Paris, Nov 13, 2012
In collaboration with the Investment Climate Global Practice
Causality and Evaluation Methods: Randomization
Bilal Zia (DECFP)

Why Worry about Causality?
- Imagine you are a naïve policymaker working in a country that suffers from persistent drought
- On a lazy Sunday, you come across an article in the NY Times about the latest trend in fancy umbrellas
- Funny, you note, how people tend to carry umbrellas in cities where it rains a lot

Why Worry about Causality? (2)
- Empirically, this means that "carrying umbrellas" and "rain" are definitely correlated.
- But…
- Does carrying an umbrella cause it to rain?

Correlations Can Lead to Bad Policy Advice
- Suppose you conclude that umbrellas cause rain.
- Policy advice: distribute free umbrellas in your drought-ridden country!
- Not only does this make you look silly (and funny!), but it wastes valuable resources

A More Plausible Example
Two stories fit the same correlation:
1) Business registration leads to higher profits, OR
2) Only highly profitable firms choose to register
Rather than rubberstamping the program, questions should shift:
- Are costs too high?
- Are benefits too low?
- Insufficient advertising?

Main Takeaway
- Identifying causality matters for policy
- This is not a joke academics play on others for fun
- Funding, time, and other resources are at stake if we go down the wrong pathway, no matter how well intended it may be

How to Get to Causality?
- Identify control/counterfactual group(s)!
- Choose an evaluation methodology:
  - Randomization (prospective)
  - Observational studies (backward looking)
- Gather data

Impact Assessment Example
[Chart] Is this the impact of the program?

The Value of a Control Group
[Chart decomposing the observed change into (+) impact of the program and (+) impact of other (external) factors]

Impact Evaluation and Causality
- Impact evaluation seeks to understand the causal effect of a program
  - Separate the impact of the program from other factors
- Need to find out what would have happened without the program, or with an alternative strategy

What is Impact Evaluation?
Counterfactual analysis:
- Compare the same firm with & without treatment, information, etc. at the same point in time to measure the difference
- This is impossible!
The solution: use a control group
- Need to identify firms that represent what the treatment group would have been like if there was no project

Control Group Quality
- But the control group has to be good!
- Projects started at specific times and places for particular reasons
- What is a good control group?
  - By design, treatment and comparison have the same characteristics (observed and unobserved), on average
  - Only difference is treatment
  - Control group represents what would have happened to the treatment population if the project had not occurred

Selection Bias
- Can we just compare people who received the project to anyone who didn't receive the project?
- Danger of selection bias
  - What was the reason that some people received it and others didn't?
- Selection bias is a major issue for impact evaluation
  - Projects started at specific times and places for particular reasons
  - Participants may select into programs (eligibility criteria)
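The selection story above can be sketched in a minimal simulation (all numbers hypothetical): firms that are already more profitable select into the program, so a naive comparison of participants to non-participants badly overstates the program's true effect.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: each firm has a latent baseline profitability draw.
# The program's true effect on profits is +2, but only firms with high
# baseline profitability choose to participate (self-selection).
TRUE_EFFECT = 2.0
firms = [random.gauss(10, 3) for _ in range(10_000)]

participants = [p + TRUE_EFFECT for p in firms if p > 12]   # select in, get effect
non_participants = [p for p in firms if p <= 12]

naive_estimate = statistics.mean(participants) - statistics.mean(non_participants)
print(f"true effect:     {TRUE_EFFECT}")
print(f"naive estimate:  {naive_estimate:.1f}")  # well above 2: selection bias, not impact
```

The gap between the naive estimate and the true effect is exactly what a well-chosen control group is meant to eliminate.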

How to Create a Control Group?
- Need to find a group of non-treated people who can proxy for people who received treatment
- This is hard: there is normally some reason why some people received treatment and others did not, meaning any differences might not be due to the project

Randomized Experimental Design
- Randomly assign potential beneficiaries to be in the treatment or comparison group
- By design, treatment and comparison have the same characteristics (observed and unobserved), on average
  - Only difference is treatment
- With a large sample, all characteristics average out
- Unbiased impact estimates
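The balancing logic can be illustrated with a small simulation (hypothetical numbers): after random assignment the two groups match on the baseline characteristic, and the simple difference in mean outcomes recovers the true effect.

```python
import random
import statistics

random.seed(1)

# Hypothetical sketch: 10,000 firms with a baseline characteristic,
# randomly split into treatment and control of 5,000 each.
TRUE_EFFECT = 2.0
baseline = [random.gauss(10, 3) for _ in range(10_000)]
random.shuffle(baseline)  # random assignment
treatment, control = baseline[:5_000], baseline[5_000:]

# Balance check: the groups look alike before treatment, on average.
balance_gap = statistics.mean(treatment) - statistics.mean(control)

# Outcomes: treated firms get the true effect added on; the difference
# in mean outcomes is an unbiased estimate of that effect.
effect_estimate = statistics.mean(t + TRUE_EFFECT for t in treatment) - statistics.mean(control)
print(f"balance gap: {balance_gap:.2f}, estimated effect: {effect_estimate:.2f}")
```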

Can We Randomize?
- Randomization does not mean denying people the benefits of the project
- Usually there are constraints within project implementation that allow randomization
- Randomization is the fairest way to allocate treatment

Randomization Is Very Flexible
- Some units must get the program whatever happens
- Some units are not suitable for the program
- Among the rest, randomize who gets the program

Unit of Randomization
- For statistical power, randomizing at the individual level is best
- Randomizing at a higher level is sometimes necessary:
  - Political constraints on differential treatment within a community
  - Practical constraints: confusing for one person to implement different versions
  - Spillover effects may require higher-level randomization
- Randomizing at the group level requires many groups because of within-community correlation
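The within-community correlation point is commonly quantified with the design effect (a standard power-calculation formula, not from the slides): with m individuals per cluster and intracluster correlation ICC, the required sample inflates by 1 + (m − 1) × ICC.

```python
# Sketch of the design effect under cluster randomization.
# Hypothetical parameter values chosen for illustration.

def design_effect(cluster_size: int, icc: float) -> float:
    """Sample-size inflation factor relative to individual randomization."""
    return 1 + (cluster_size - 1) * icc

# With 50 people per community and a modest ICC of 0.05, you need roughly
# 3.45 times the sample of an individual-level design for the same power.
print(design_effect(50, 0.05))  # ≈ 3.45
```

This is why group-level designs need many clusters: even a small ICC erodes effective sample size quickly as clusters grow.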

Group or Individual Randomization?
- Sample size and unit of randomization [diagram: randomizing 16 individuals gives N = 16; randomizing the same people in 4 groups gives N = 4]

Encouragement Design
- A good choice when there is incomplete take-up of the product or service
- Those who get/receive promotion or marketing are more likely to enroll
- But who got promotion or marketing was determined randomly, so it is not correlated with other observables or non-observables
- Compare average outcomes of two groups: promoted/not promoted
  - Effect of offering the encouragement (Intent-To-Treat)
  - Effect of the intervention on the complier population (Local Average Treatment Effect)
  - LATE = effect of offering program (ITT) / proportion of those who took it up
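The formula above with hypothetical numbers: if the promotion raises average outcomes by 5 points and a quarter of the promoted group actually takes up the program, the effect on compliers is four times the ITT.

```python
# Hypothetical figures illustrating LATE = ITT / take-up proportion.
mean_outcome_promoted = 55.0      # average outcome in the promoted group
mean_outcome_not_promoted = 50.0  # average outcome in the non-promoted group
take_up_rate = 0.25               # share of the promoted group that enrolled

itt = mean_outcome_promoted - mean_outcome_not_promoted  # effect of the offer
late = itt / take_up_rate                                # effect on compliers
print(itt, late)  # 5.0 20.0
```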

Opportunities to Randomize? Business Registration Example
- Motivation: businesses do not enter or formalize because of the associated time and financial costs (costs outweigh benefits)
- Intervention: reduce days to register from 10 to 5; reduce cost from $200 to $100
- Expected result: increased firm entry and formalization, leading to increased competition and growth
- But: what is the optimal registration fee to maximize participation and social benefits?

Test… a question… through variation in treatment
- Variation in intervention: What is the price elasticity of demand for registration fees? → Offer subsidies of $100, $50, and $10 to different groups
- Complementary interventions: Can financial training on banking products increase the benefits from formalization? → Offer training to a subset of businesses
- Information: What is the most effective way to communicate the new reform? → Test phone calls, flyers, public events, and radio adverts on different groups
- Firm behavior: Does framing influence firm perceptions? → Highlight the benefits of registering to one group and the threats/risks of not registering to another
- Supply-side incentives: Can staff recognition improve performance? → Offer non-financial vs. financial incentives to processing staff

Matching Grants Example
- Motivation: firms invest sub-optimally from a social perspective when public returns are greater than private returns; matching grants can incentivize investment and increase social returns
- Intervention: matching grant to incentivize investment in business development services in the biotech industry
- Expected result: increased investment → improved competition and knowledge spillovers
- But: what is the optimal matching amount?

Test… a question… through variation in treatment
- Variation in intervention: What is the optimal matching proportion to maximize private investment? → Vary the matching amount: 25%, 50%, 75%
- Complementary interventions: Can credit guarantees on the firm's payment increase the claim rate? → Offer credit guarantees to a subset of firms
- Information: Do firms know best what support they need, or can they benefit from guidance? → Give a group of potential applicants information on how various services may help their business and which service providers could offer help
- Firm behavior: How do firms respond to deadlines? → XXX
- Supply-side incentives: Can staff be incentivized to broaden the geographical reach of the program to support marginalized firms? → Link performance payments to geographical targets for a subset of staff

What if Randomization Is Not Possible?
- Randomization is the gold standard, though not the be-all and end-all of evaluation
- Many times, randomization is simply not possible or feasible:
  - Evaluating the effect of an exchange rate shift or inflation policy
  - A life-saving vaccination (who wants to be in the control group?)
  - Interventions that have already happened
- Other compelling scientific methods:
  - Difference-in-differences / fixed effects
  - Regression discontinuity design
  - Instrumental variables
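Of these, difference-in-differences is the simplest to illustrate with hypothetical before/after means: the control group's change proxies for the common time trend, and the remainder is attributed to the program.

```python
# Difference-in-differences with made-up group means (illustration only).
treated_before, treated_after = 10.0, 16.0
control_before, control_after = 10.0, 13.0

# Net out the common trend (control change) from the treated change.
did = (treated_after - treated_before) - (control_after - control_before)
print(did)  # 3.0
```

The key identifying assumption is parallel trends: absent the program, the treated group would have changed by the same amount as the control group.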

Main Takeaway