The Evaluation Problem: The Fundamental Evaluation Problem and its Solution
Alexander Spermann, University of Freiburg, SS 2009

1. Evaluation Problem
2. Treatment Effects
3. Selection Bias
4. Solution Approaches

Reference: Hagen, Tobias and Bernd Fitzenberger (2004), Mikroökonometrische Methoden zur Ex-post-Evaluation, in: Hagen, Tobias and Alexander Spermann (eds.), Hartz-Gesetze – Methodische Ansätze zu einer Evaluierung, ZEW-Wirtschaftsanalysen, 74, pp. 45-72.

Example: evaluation of programs of active labour market policy (e.g. job creation measures) → investigation of a program's effect on a certain outcome variable (e.g. the employment probability).

Measuring a program's success by the share of individuals entering employment during a certain period is not sufficient. Problem: this does not capture the causal effect of the program, since employment take-up could also occur without program participation.

Causal effect: the employment probability of participants versus the employment probability of those same participants had they not participated. Problem: this is a "counterfactual situation", as participants cannot simultaneously be non-participants!

Solution: estimate the hypothetical employment probability of participants in case of non-participation by the employment probability of non-participants, i.e. use a "comparison or control group" to estimate the success of participation.

If participants and the control group differ with respect to observable or unobservable characteristics that influence the outcome variable → selection bias.

Potential outcome approach. Hypothetically, we could observe two outcomes for each individual:
$y_1$: outcome variable in case of participation
$y_0$: outcome variable in case of non-participation
$C$: dummy variable, equal to 1 in case of participation (and 0 otherwise)

The actually observed outcome variable for an individual $i$ results from

$$y_i = C_i\, y_{1i} + (1 - C_i)\, y_{0i}$$

The effect of the program is

$$\Delta_i = y_{1i} - y_{0i}$$

Question: What is the effect of the program on the outcome variable $y$?
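
A minimal simulation in Python (hypothetical data, added purely for illustration) makes the observation rule concrete: both potential outcomes exist for every individual, but only the one matching the participation status is ever recorded.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Hypothetical potential outcomes for five individuals (illustration only,
# not data from the lecture).
y0 = rng.binomial(1, 0.3, size=n)    # outcome in case of non-participation
y1 = rng.binomial(1, 0.5, size=n)    # outcome in case of participation
C = np.array([1, 0, 1, 0, 1])        # participation dummy

# Observation rule: y_i = C_i * y1_i + (1 - C_i) * y0_i
y = C * y1 + (1 - C) * y0

for i in range(n):
    label = "y1" if C[i] == 1 else "y0"
    print(f"individual {i}: C={C[i]}, observed {label}={y[i]}; "
          "the other potential outcome stays unobserved")
```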

Problem: it is not possible to calculate an individual causal effect, since no individual can be in two different states of participation at the same point in time.

However, it is possible to estimate:
- the mean effect of participation for the group of participants ("Average Effect of Treatment on the Treated", ATT);
- the mean effect of participation expected for an individual drawn randomly from a population of participants and non-participants ("Average Treatment Effect", ATE).
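
In the potential-outcome notation introduced above, these two parameters have the standard definitions:

$$\text{ATT} = E[y_1 - y_0 \mid C = 1], \qquad \text{ATE} = E[y_1 - y_0]$$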

$E[y_1]$ is observable only for participants and $E[y_0]$ only for non-participants. Intuition of ATT/ATE: the actual outcome of participants minus their potential outcome in case of non-participation.

Case differentiation:
1. Participants and non-participants ("control group") differ neither with respect to observed nor to unobserved characteristics → consistent estimates of the expected values of the outcome variable by the sample means:

$$\hat{E}[y_1] = \frac{1}{T}\sum_{i=1}^{T} y_{1i}, \qquad \hat{E}[y_0] = \frac{1}{NT}\sum_{j=1}^{NT} y_{0j}$$

where $T$: number of participants, $NT$: number of non-participants.
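
A sketch of case 1 in Python (hypothetical numbers; participation is assigned randomly, so the two groups do not differ systematically): the difference of the two sample means then recovers the treatment effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical employment outcomes with a true effect of 0.15.
y0 = rng.binomial(1, 0.30, size=n)   # potential outcome without participation
y1 = rng.binomial(1, 0.45, size=n)   # potential outcome with participation
C = rng.binomial(1, 0.50, size=n)    # random assignment: no selection bias
y = C * y1 + (1 - C) * y0            # observed outcome

# Sample means over the T participants and NT non-participants.
effect_hat = y[C == 1].mean() - y[C == 0].mean()
print(f"difference of sample means: {effect_hat:.3f}")  # close to 0.15
```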

2. Participants and non-participants differ with regard to observed or unobserved characteristics → selection bias → the difference of sample means does not lead to consistent estimators.
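
The problem can be made explicit with a standard decomposition (added here as a reasoning step): the naive difference of means equals the ATT plus a selection-bias term, which vanishes only if participants and non-participants would have the same average outcome without the program.

$$E[y \mid C=1] - E[y \mid C=0] = \underbrace{E[y_1 - y_0 \mid C=1]}_{\text{ATT}} + \underbrace{E[y_0 \mid C=1] - E[y_0 \mid C=0]}_{\text{selection bias}}$$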

X-heterogeneity: heterogeneity of the treatment effect that can be explained by differences in observed variables.
U-heterogeneity: heterogeneity of the treatment effect that can be explained by differences in unobserved variables.

Definition of a homogeneous treatment effect:
- the treatment has the same effect on individuals with different observed attributes, i.e. no X-heterogeneity;
- the treatment has the same effect on individuals with different unobserved attributes, i.e. no U-heterogeneity.
The treatment effect is then identical for all individuals, and ATT = ATE.
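
Formally, if $y_{1i} - y_{0i} = \Delta$ for every individual $i$, then by the definitions above

$$\text{ATT} = E[y_1 - y_0 \mid C=1] = \Delta = E[y_1 - y_0] = \text{ATE}$$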

X-heterogeneity / U-heterogeneity → heterogeneous treatment effect → selection bias:
1. selection bias due to observed variables
2. selection bias due to unobserved variables

Solution approaches:
"Selection on observables": regression methods, propensity score matching.
"Selection on unobservables": difference-in-differences estimators (DiD), selection models, instrumental variable approaches (IV).
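
To give a flavour of the "selection on unobservables" column, here is a minimal difference-in-differences sketch on hypothetical before/after data; the numbers are invented for illustration, not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Hypothetical panel: participants start from a lower (unobserved) baseline,
# so a simple post-program comparison of means would be biased downward.
treated = rng.binomial(1, 0.5, size=n).astype(bool)
baseline = np.where(treated, 0.25, 0.40)   # group-specific level, unobserved
trend, effect = 0.05, 0.10                 # common time trend, true effect

y_before = rng.binomial(1, baseline)
y_after = rng.binomial(1, baseline + trend + effect * treated)

# DiD: (after - before) among treated minus (after - before) among controls;
# the group-specific baselines and the common trend difference out.
did = ((y_after[treated].mean() - y_before[treated].mean())
       - (y_after[~treated].mean() - y_before[~treated].mean()))
print(f"DiD estimate: {did:.3f}")  # close to the true 0.10
```

Note that a naive post-program comparison of means would give roughly 0.40 - 0.45 = -0.05 here, the wrong sign; that gap is exactly the selection bias on unobservables that DiD differences out.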