A. CAUSAL EFFECTS Eva Hromádková, 7.10.2010, Applied Econometrics JEM007, IES, Lecture 2A

Problem of causal inference
- We want to test whether a treatment (d) affects an outcome (y) => TREATMENT EFFECT
- !!! Correlation does not imply causation !!!
- There may be unobserved factors that drive the correlation between d and y
- The causal question: what would have happened to the same individual with (or without) a particular treatment?

Treatment effect I. Potential vs. observed outcome
- Imagine each individual i has two potential outcomes:
  y_{1i} ... outcome if he is treated (d_i = 1)
  y_{0i} ... outcome if he is not treated (d_i = 0)
- Obviously, only one scenario is realized; the observed outcome is
  y_i = d_i y_{1i} + (1 - d_i) y_{0i}    (1)
- Write the treated outcome as y_{1i} = y_{0i} + α_i    (2)
- Plugging (2) into (1): y_i = y_{0i} + α_i d_i
- Note: α_i = y_{1i} - y_{0i} is the individual return to treatment
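
A minimal Python sketch of this setup (hypothetical numbers, not from the lecture): both potential outcomes exist inside the simulation, but each individual reveals only one of them, which is exactly why the individual return α_i is never directly observable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

y0 = rng.normal(10, 2, n)        # potential outcome without treatment
alpha = rng.normal(1.5, 1, n)    # individual return to treatment, alpha_i
y1 = y0 + alpha                  # potential outcome with treatment, eq. (2)

d = rng.integers(0, 2, n)        # 0/1 treatment status
y = d * y1 + (1 - d) * y0        # observed outcome, eq. (1): one scenario per person

print("true ATE (needs both y0 and y1):", alpha.mean())
print("feasible naive difference      :", y[d == 1].mean() - y[d == 0].mean())
```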

Treatment effect II. Why are some treated and some not?
- A hidden selection mechanism based on observed factors (Z) and unobserved factors (v), e.g. a latent index d*_i = Z_i γ + v_i
- It then translates into the 0/1 treatment status: d_i = 1 if d*_i > 0, and d_i = 0 otherwise

Treatment effect III. From individual to population
- Average treatment effect (ATE): ATE = E[α_i], the effect for a randomly chosen individual
- Average treatment effect on the treated (ATT): ATT = E[α_i | d_i = 1], the effect for participants in the treatment
- Average treatment effect on the non-treated (ATNT): ATNT = E[α_i | d_i = 0], the hypothetical effect for non-participants
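
Continuing the same kind of simulation, a sketch of how the three parameters come apart once treatment status depends on the individual gain; the selection rule here is an illustrative assumption, not part of the lecture.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

y0 = rng.normal(10, 2, n)
alpha = rng.normal(1.5, 1, n)                      # heterogeneous returns

# assumed selection on expected gains: high-alpha people opt in more often
d = (alpha + rng.normal(0, 1, n) > 1.5).astype(int)

print(f"ATE  = {alpha.mean():.2f}")                # randomly chosen individual
print(f"ATT  = {alpha[d == 1].mean():.2f}")        # participants
print(f"ATNT = {alpha[d == 0].mean():.2f}")        # non-participants
```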

Treatment effect IV. Heterogeneous effects in the population
- Local average treatment effect (LATE, used mainly in IV settings)
- We observe variation in a variable Z that induces a change in the treatment status of SOME individuals
- Ex.: a subsidy for dormitories -> positive effect on enrollment into higher education
- BUT the effect we estimate is local: it applies only to the people who switched their decision because of the subsidy (the compliers)
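
A sketch of the LATE logic with a hypothetical binary instrument z: the Wald ratio (difference in mean outcomes divided by difference in take-up) recovers the mean return among compliers only, not the population ATE.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

y0 = rng.normal(10, 2, n)
alpha = rng.normal(1.5, 1, n)

z = rng.integers(0, 2, n)                      # e.g. subsidy offered at random
always = rng.random(n) < 0.2                   # take treatment regardless of z
complier = ~always & (rng.random(n) < 0.4)     # take treatment only when z = 1
d = (always | (complier & (z == 1))).astype(int)
y = y0 + alpha * d

wald = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
print("mean alpha among compliers:", alpha[complier].mean())
print("Wald (IV) estimate        :", wald)
```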

Identification problem Non-random assignment
- Selection into the treatment group =>
- People who are treated are a priori different from people who are not treated
- Terminology: treatment vs. control group
- Q: how is this different from LATE?

Identification problem Selection mechanisms
- Selection on observables (correlation of e and Z)
- Selection on unobservables (correlation of e and v)
- Selection on the untreated outcome (correlation of d and u)
- Selection on expected gains (correlation of d and α)

Identification problem Homogeneous vs. heterogeneous treatment effects
- Homogeneous case: y_i = β d_i + u_i with a constant β, so
  E[y | d = 1] - E[y | d = 0] = β + E[u | d = 1] - E[u | d = 0]
  => selection bias whenever u and d are correlated
- Heterogeneous case: y_i = y_{0i} + α_i d_i, so
  E[y | d = 1] - E[y | d = 0] = ATT + E[y_0 | d = 1] - E[y_0 | d = 0]
  => ATT plus a selection bias term coming from the correlation of u and d
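
The decomposition can be checked numerically; in this illustrative simulation the selection rule (low-y0 individuals get treated, as in many training programs) is an assumption chosen to make the bias visible.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

y0 = rng.normal(10, 2, n)
alpha = rng.normal(1.5, 1, n)

# assumed selection on the untreated outcome: low-y0 people get treated
d = (y0 + rng.normal(0, 2, n) < 10).astype(int)
y = y0 + alpha * d

naive = y[d == 1].mean() - y[d == 0].mean()
att = alpha[d == 1].mean()
bias = y0[d == 1].mean() - y0[d == 0].mean()   # E[y0|d=1] - E[y0|d=0] < 0 here
print(f"naive diff = {naive:.2f} = ATT ({att:.2f}) + selection bias ({bias:.2f})")
```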

Overview of identification strategies
- Controlled (social) experiment: direct randomization into treated and untreated groups
- Natural experiment: finding naturally occurring treated and untreated groups that are as similar as possible
- Instrumental variable: finding a variable that affects the probability of treatment but does not directly affect the outcome
- Discontinuity design: the probability of treatment changes discontinuously with some characteristic (eligibility)
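
As a minimal illustration of the last strategy in the list, a sketch of a sharp discontinuity design with made-up data: treatment switches on at an eligibility cutoff, and the effect is read off the jump in mean outcomes just above vs. just below the cutoff.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

score = rng.uniform(0, 100, n)            # eligibility characteristic
d = (score >= 50).astype(int)             # treatment jumps at the cutoff
y = 5 + 0.05 * score + 2.0 * d + rng.normal(0, 1, n)   # true effect = 2.0

h = 2.0                                   # bandwidth around the cutoff
above = y[(score >= 50) & (score < 50 + h)]
below = y[(50 - h <= score) & (score < 50)]
# near 2.0, up to a small slope bias of order 0.05 * h within the bandwidth
print("RD estimate:", above.mean() - below.mean())
```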

B. CONTROLLED EXPERIMENTS Eva Hromádková, Applied Econometrics JEM007, IES Lecture 2B

Randomization
- The experimenter can randomly choose which individuals are administered the treatment and which are not
- Ass. 1: treated and controls are the same in unobserved characteristics, E[u | d = 1] = E[u | d = 0]
- Ass. 2: treated and controls are the same in gains from the treatment, E[α | d = 1] = E[α | d = 0] = E[α], so ATT = ATE
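
Under random assignment the naive difference in means is an unbiased estimate of the ATE; a minimal sketch (illustrative parameters) with the usual two-sample standard error:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

y0 = rng.normal(10, 2, n)
alpha = rng.normal(1.5, 1, n)

d = rng.permutation(np.repeat([0, 1], n // 2))   # pure random assignment
y = y0 + alpha * d

diff = y[d == 1].mean() - y[d == 0].mean()
se = np.sqrt(y[d == 1].var(ddof=1) / (d == 1).sum()
             + y[d == 0].var(ddof=1) / (d == 0).sum())
print(f"difference in means = {diff:.2f} (true ATE = 1.5), SE = {se:.3f}")
```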

Use of experiments and randomization
- Labor economics: active labor market policies
  Ex.: National Supported Work (NSW)
- Health economics:
  Ex.: RAND Health Insurance Experiment: people were randomly assigned to different health insurance plans; moral hazard, effect of co-payments
- Development economics:
  Educational systems (Duflo, Dupas and Kremer, 2009), microfinance (Karlan and Zinman, 2008)
- Behavioral economics:
  Intrinsic motivation, fairness, incentives

Development economics Improving immunization coverage in India
- Video: Esther Duflo, TED Talk, Feb 2010
- Banerjee, Duflo and Kothari (2010): Improving immunization coverage in rural India
- Udaipur district, Rajasthan: very low immunization rate (4%)
- Reasons: cost of travelling (the immunization itself is free), procrastination
- 134 villages were randomized into one of 3 groups:
  A: reliable immunization camp
  B: reliable immunization camp + incentives (lentils + plates)
  C: no intervention (control)
- Outcome = immunization rate in the villages

Development economics Results:
- Baseline (control): 6%
- A: 17%
- B: 38%
Issues:
- Design: testing multiple interventions
- Within-village correlation of individual outcomes => clustering of standard errors
- Spillovers to neighboring villages
- Intention to treat
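
A sketch of the clustering point, with made-up village data rather than the study's: because outcomes are correlated within villages, one conservative analysis is to collapse the data to village-level means, i.e. to analyze at the level at which randomization took place.

```python
import numpy as np

rng = np.random.default_rng(5)

# hypothetical village-randomized trial with within-village correlation
n_villages, kids_per_village = 134, 30
arm = rng.choice([0, 1], n_villages)              # 0 = control, 1 = camp
village_effect = rng.normal(0, 1, n_villages)     # shared village-level shock

# individual immunization (binary), more likely in treated villages
p = 1 / (1 + np.exp(-(-2.5 + 1.2 * arm + village_effect)))
outcomes = rng.random((n_villages, kids_per_village)) < p[:, None]

# cluster-aware analysis: collapse to village means, then compare the arms
village_rate = outcomes.mean(axis=1)
t, c = village_rate[arm == 1], village_rate[arm == 0]
diff = t.mean() - c.mean()
se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
print(f"effect = {diff:.3f}, cluster-level SE = {se:.3f}")
```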

Behavioral economics How to combat procrastination I
- Video: Dan Ariely on procrastination
- Ariely and Wertenbroch (2002): Procrastination, deadlines, and performance
- Procrastination = putting off duties/tasks
- Questions:
  1. Do people self-impose deadlines to increase performance when they have the possibility to do so?
  2. Do deadlines increase performance?
  3. Do people set deadlines optimally, i.e. so as to maximize performance?
- 2 studies

Behavioral economics How to combat procrastination II
Study 1: MBA course, 2 class sections, requirement of 3 essays
- No-choice section: fixed, evenly spaced deadlines
- Free-choice section: students choose the deadlines themselves
  Deadlines are binding
  Instructor will not read or give feedback before the end of the course
- Rational choice (absent self-control issues): set all 3 deadlines at the end
- Results:
  Actual choice of deadlines: only 32% set for the final week
  Performance: grades in the no-choice section (avg = 88.76) higher than in the free-choice section (avg = 85.67), t = 3
- Problem with the SE (and hence the t-statistic). Why?

Behavioral economics How to combat procrastination III
Study 2: proofreading task, participants randomly assigned to 3 treatments
- 1. Evenly spaced submissions (every 7 days)
- 2. End-deadline submission (everything at the end)
- 3. Self-imposed deadlines
- Conditions: paid per detected mistake; each day of delay costs $1
Results:
- Participants in (3) preferred spaced deadlines
- Performance is worst with no intermediate deadlines (end-deadline condition), better under self-imposed deadlines, and best under imposed, evenly spaced deadlines
- However, people sometimes set constraints that are not really constraining ("internal" deadlines, gym memberships, etc.)

Issues in controlled experiments I
Threats to internal validity:
- Non-compliance: some people assigned to treatment do not comply; what we then estimate is the effect of the "intention to treat" (ITT)
- Attrition: a problem if it is non-random
- Externalities: not taking them into consideration reduces the estimated impact of the treatment
- Correct SE => clustering (e.g. when randomization is at the village level)
Design questions: a few examples
- Framing
- Relevant incentives: own money vs. experimenter's money
- Testing multiple interventions
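
A sketch of the intention-to-treat point, under the simple assumption that only 60% of those assigned to treatment actually take it up (and nobody in the control group does): the ITT compares groups by assignment, and dividing by the take-up rate recovers the effect on those who complied.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

y0 = rng.normal(10, 2, n)
alpha = rng.normal(1.5, 1, n)

assigned = rng.integers(0, 2, n)               # random assignment
complies = rng.random(n) < 0.6                 # assumed 60% take-up
treated = (assigned == 1) & complies           # actual treatment status
y = y0 + alpha * treated

itt = y[assigned == 1].mean() - y[assigned == 0].mean()
take_up = treated[assigned == 1].mean()
print(f"ITT = {itt:.2f}, take-up = {take_up:.2f}, ITT/take-up = {itt / take_up:.2f}")
```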

Issues in controlled experiments II
Threats to external validity: is the result generalizable?
- Hawthorne effect: mere attention causes the treatment group to change its behavior
- John Henry effect: the control group engages in social competition to show it performs just as well
- Demand effects: subjects cooperate in ways they would not routinely consider
Generally:
- Population: is it too specific?
- Time span: do we also capture long-run effects?
- General equilibrium (GE) effects: what happens under implementation at a large scale?