Treatment effect: Part 2

Presentation transcript:

Treatment effect: Part 2
Date: 2017.08.29
Presenter: Ryan Chen

Treatment-effects estimators:
RA: Regression adjustment
IPW: Inverse probability weighting
IPWRA: Inverse probability weighting with regression adjustment
AIPW: Augmented inverse probability weighting
NNM: Nearest-neighbor matching
PSM: Propensity-score matching
REF: Introduction to treatment effects in Stata®: Part 1

Among these, NNM (nearest-neighbor matching) and PSM (propensity-score matching) are the estimators of the ATE that solve the missing-data problem by matching, and they are the focus of this part. REF: Introduction to treatment effects in Stata®: Part 1

"How would the outcomes have changed had the mothers who smoked chosen not to smoke?" These counterfactual outcomes are called unobserved potential outcomes. REF: Introduction to treatment effects in Stata®: Part 1

Prepare your dataset.

Variables:
Smoking during pregnancy (mbsmoke), the treatment
Infant birthweight (bweight), the outcome
Mother's age (mage)
Education level (medu)
Marital status (mmarried)
Whether the first prenatal exam occurred in the first trimester (prenatal1)
Whether this baby was the mother's first birth (fbaby)
The father's age (fage)
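
The variable names above match Stata's cattaneo2 example dataset used in the Stata treatment-effects documentation; assuming that is the dataset meant here, a minimal sketch for loading and inspecting it:

* Load the (assumed) example dataset and look at the treatment,
* the outcome, and the covariates listed above
webuse cattaneo2, clear
describe mbsmoke bweight mage medu mmarried prenatal1 fbaby fage
tabulate mbsmoke            // treatment: smoked during pregnancy or not
summarize bweight, detail   // outcome: infant birthweight (grams)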

Matching: Matching pairs the observed outcome of a person in one treatment group with the outcome of the "closest" person in the other treatment group. The outcome of the closest person is used as a prediction for the missing potential outcome.
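
As a rough illustration of the pairing idea only (the teffects nnmatch command shown later matches on all covariates with a variance-weighted distance), one could take a single smoking mother and use the birthweight of the nonsmoking mother closest to her in age as the imputed potential outcome. A sketch, assuming the cattaneo2 data loaded above:

* Illustration only: impute the untreated outcome for one smoking mother
* from her nearest nonsmoking "neighbor" in terms of age
sort mbsmoke mage
local age1 = mage[_N]                          // age of one smoking mother
generate double d = abs(mage - `age1') if mbsmoke == 0
sort d                                         // missing d (smokers) sorts last
display "imputed untreated birthweight: " bweight[1]
drop d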

Two points:
There is a cost to matching on continuous covariates: we must specify a measure of similarity. Distance measures are used and the nearest neighbor is selected.
An alternative is to match on an estimated probability of treatment, known as the propensity score.

Nearest-neighbor matching imputes the missing potential outcome for each individual by using an average of the outcomes of similar subjects that receive the other treatment level. Similarity between subjects is based on a weighted function of the covariates for each observation, with the weights based on the inverse of the covariates' variance–covariance matrix.
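
A minimal NNM sketch in Stata; the covariate choice is an assumption, not necessarily the presenter's specification:

* Nearest-neighbor matching of smokers to nonsmokers on mother's age,
* using the default distance built from the inverse of the sample
* variance-covariance matrix of the matching covariates
teffects nnmatch (bweight mage) (mbsmoke), nneighbor(1)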

Nearest-neighbor matching: The ATE is the average of the differences between the observed and imputed potential outcomes for each subject. NNM is nonparametric in that no explicit functional form is imposed; the price of this flexibility is that it needs more data to get to the true value than an estimator that imposes a functional form.

Exact matching: discrete covariates can be required to match exactly between a subject and its neighbors.
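
In teffects nnmatch this is requested with the ematch() option; a sketch with assumed variables:

* Match on age, but require exact agreement on the discrete covariates
teffects nnmatch (bweight mage) (mbsmoke), ematch(mmarried fbaby prenatal1)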

biasadj() uses a linear model to remove the large-sample bias that arises when matching on more than one continuous covariate.
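
A sketch with several continuous matching covariates and the bias correction turned on (the variable lists are assumptions):

* With two or more continuous covariates, add biasadj() so that a linear
* model removes the large-sample bias of the matching estimator
teffects nnmatch (bweight mage fage medu) (mbsmoke), ///
    ematch(mmarried) biasadj(mage fage medu)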

Propensity-score matching matches on an estimated probability of treatment, known as the propensity score, so it matches on only one continuous covariate. Pros: there is no need for bias adjustment, and we can check the fit of the binary regression model for the treatment prior to matching.
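
A PSM sketch with an assumed logit treatment model for the propensity score:

* Match on the estimated probability of smoking; the treatment model
* can be checked on its own (e.g., with logit and its usual diagnostics)
* before matching
teffects psmatch (bweight) (mbsmoke mage medu mmarried fbaby prenatal1, logit)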

How to choose among the six estimators? Estimated ATEs of smoking on birthweight (in grams) from the example:
RA (regression adjustment): -277.06
IPW (inverse probability weighting): -275.56
IPWRA (IPW with regression adjustment): -229.97
AIPW (augmented IPW): -230.99
NNM (nearest-neighbor matching): -210.06
PSM (propensity-score matching): -229.45
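
The matching commands were sketched above; the other four estimators follow the same two-equation pattern. The covariate lists below are assumptions, and the numbers on the slide come from the presenter's own specifications:

* Regression adjustment: model the outcome, then average predictions
teffects ra (bweight mage medu fbaby) (mbsmoke)
* IPW: model the treatment, weight by inverse estimated probabilities
teffects ipw (bweight) (mbsmoke mage medu fbaby, logit)
* Doubly robust combinations of an outcome model and a treatment model
teffects ipwra (bweight mage medu fbaby) (mbsmoke mage medu fbaby)
teffects aipw (bweight mage medu fbaby) (mbsmoke mage medu fbaby)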

Rules of thumb:
Under correct specification, all the estimators should produce similar results. (Similar estimates do not guarantee correct specification, because all the specifications could be wrong.)
When you know the determinants of treatment status, IPW is a natural base-case estimator.
When you instead know the determinants of the outcome, RA is a natural base-case estimator.
The doubly robust estimators, AIPW and IPWRA, give us an extra shot at correct specification.

Rules of thumb (continued):
When you have lots of continuous covariates, NNM will hinge crucially on the bias adjustment, and the computation becomes extremely difficult.
When you know the determinants of treatment status, PSM is another base-case estimator.
The IPW estimators are not reliable when the estimated treatment probabilities get too close to 0 or 1.
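
One way to check the last point in Stata is the teffects overlap postestimation command, which plots the estimated densities of the probability of each treatment level; a sketch after an assumed IPW specification:

* Probability mass piled up near 0 or 1 signals an overlap problem
* for the IPW-type estimators
teffects ipw (bweight) (mbsmoke mage medu fbaby mmarried, logit)
teffects overlap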