BLENDING PROPENSITY SCORE MATCHING AND LOGISTIC REGRESSION IN SUPPORT SERVICE EVALUATIONS
TERRENCE WILLETT, CRAIG HAYWARD, AND NATHAN PELLEGRIN
CAIR CONFERENCE, SAN DIEGO, NOVEMBER 2014
OUTCOMES
Describe purpose of regression and propensity score matching (PSM)
Explain data requirements and procedures for regression and PSM
Compare and contrast regression and PSM
Identify additional resources for further exploration
WHY CAUSAL INFERENCE
"If you need to use statistics, then you should design a better experiment." –attributed to Rutherford
Most education research is observational/correlational, not experimental.
COMMON SCENARIO
Students self-selected to participate and/or were recruited to participate.
Differences between participants and non-participants can reasonably be attributed to differences in background variables or motivation.
Can we determine whether participation caused a change in outcomes? No, but…
Research question: Did participation in "intervention X" result in better outcomes for students than would have occurred had they not participated?
Classic correlational technique
Covariates used in model to attempt to control for differences in background variables or motivation
Background variables can include measures of or proxies for skill level, social capital, or socio-economic status
Measures of self-motivation often unavailable
Models are imperfect and generally must be combined with other evidence to more completely describe the possible influence of an intervention, program, or strategy
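A minimal sketch of this covariate-adjustment approach in R, assuming a hypothetical data frame students with a 0/1 outcome completed, a 0/1 participation indicator participated, and illustrative background covariates (all names are placeholders, not from the original slides):

# Logistic regression of the outcome on participation plus background covariates;
# the coefficient on 'participated' is the covariate-adjusted association.
out_model <- glm(completed ~ participated + female + hispanic + low_income + placement_score,
                 data = students, family = binomial)
summary(out_model)
exp(coef(out_model)["participated"])  # adjusted odds ratio for participation (assumes 0/1 coding)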
PROPENSITY SCORE MATCHING
One of several ways to create a matched comparison group of non-participants intended to be similar to the participant group for a valid comparison
Logistic regression or other techniques used to create a score indicating the likelihood that a particular non-participant would have been a participant, based on similarity to one or more participants
Resolves the issue of matching on many dimensions
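A minimal sketch of the score-creation step in R (variable names are hypothetical: treat is a 0/1 participation indicator and the covariates stand in for whatever background variables are available):

# Propensity score: predicted probability of participation given background covariates
ps_model <- glm(treat ~ female + hispanic + low_income + placement_score,
                data = students, family = binomial)
students$pscore <- predict(ps_model, newdata = students, type = "response")
# Non-participants can then be matched to participants with the closest scores.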
THE COUNTERFACTUAL (POTENTIAL OUTCOMES) FRAMEWORK FOR PSM
Based on counterfactual theories of causation, a set of conceptual tools for analyzing causal claims
Originated in the early 1970s
Introduced by philosopher David Lewis
Subsequently taken up by scientists and extended (statistician Donald Rubin at Harvard in the 1980s, and many others since)
In the following slides I will: 1) introduce some of the notation developed in this area, 2) explain the logic of PSM, and 3) present alternative criteria for deciding between PSM and regression.
TO START….
THE POTENTIAL OUTCOMES MATRIX
Actual Treatment Status (T) | Potential Outcome Y(1) | Potential Outcome Y(0)
T = 1                       | observable             | not observable
T = 0                       | not observable         | observable
CAUSAL EFFECTS AT THE PROGRAM AND POPULATION LEVELS
Our focus is usually not on individuals but almost always on the aggregate effect: the average effect of a program on the outcomes of groups of individuals or populations.
AVERAGE TREATMENT EFFECT (ATE)
BRUTE FACTS:
Participants and non-participants differ systematically (w.r.t. demographics, trajectories, risk profiles, self-selection, etc.)
Different people respond differently to treatment (differential response)
These facts must be taken into account when modeling/computing treatment effects.
This means all four cells of the matrix must be estimated in order to obtain an average treatment effect!
How? Here come the assumptions…
CONDITIONAL INDEPENDENCE (PERFECT STRATIFICATION) (SELECTION ON OBSERVABLES)
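A standard statement of this assumption, in the potential outcomes notation above (following Rosenbaum & Rubin, 1983, listed under Further Reading): treatment status is independent of the potential outcomes once the observed covariates X are held fixed, and conditioning on the propensity score p(X) suffices.

$$ \big(Y(1), Y(0)\big) \perp T \mid X \qquad\text{and}\qquad \big(Y(1), Y(0)\big) \perp T \mid p(X), \quad p(X) = \Pr(T = 1 \mid X) $$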
SYMBOLIC DERIVATION OF AVERAGE EFFECTS: ATE, ATT, ATU
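The standard definitions of the three estimands, written in the potential outcomes notation used on the preceding slides:

$$ \begin{aligned} \text{ATE} &= E\big[Y(1) - Y(0)\big] \\ \text{ATT} &= E\big[Y(1) - Y(0) \mid T = 1\big] \\ \text{ATU} &= E\big[Y(1) - Y(0) \mid T = 0\big] \end{aligned} $$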
AND NOW WE PLAY THE MATCHING GAME. ALSO KNOWN AS…
Actual Treatment Status (T) | Potential Outcome Y(t) | Potential Outcome Y(c)
T = 1                       | observable             | not observable
T = 0                       | not observable         | observable
"May I please borrow your outcome?"
ESTIMATING TREATMENT EFFECTS
Actual Treatment Status (T) | Potential Outcome Y(1) | Potential Outcome Y(0)
T = 1                       | A                      | B
T = 0                       | C                      | D
ATT = A – B
ATU = C – D
ATE = π(A – B) + (1 – π)(C – D), where π is the proportion of the population in the treatment group
AVERAGE TREATMENT EFFECT (ATE)
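Equivalently, in terms of the two conditional effects defined above, the ATE is their treatment-share-weighted combination (π = Pr(T = 1) is notation introduced here for illustration, not taken from the original slide):

$$ \text{ATE} = \pi \cdot \text{ATT} + (1 - \pi) \cdot \text{ATU} $$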
EXAMPLE
PROS AND CONS
Regressions can be "easier" to run but harder to explain to a general audience
PSM can be more time-consuming to conduct but easier to explain to a general audience
Regressions tend to perform better with large data sets, while PSM tends to perform better with few observations, provided the non-participant group has sufficient numbers of individuals with the key confounding variables
Regressions have been used for many years and are well described mathematically, with broad consensus on proper error terms
PSM is newer, and there is no consensus on optimal matching procedures or proper error terms
Regression will use all cases with non-missing data, while PSM may use only a subset of cases from the pool of non-participants
All analytic methods suffer if key variables are not available
Conclusions can often be the same with either method
HOW TO RUN PSM
Create data file (95% of effort)
Match participants and non-participants on a set of control variables to create a comparison group with similar proportions on all characteristics (i.e., the comparison group would have a similar percent female, Hispanic, low income, etc. as the participant group)
This step is referred to as "balancing" and generally must be repeated several times to obtain balance on all variables of interest, either by adjusting matching criteria or removing variables
Run comparative analyses, which can include simple t-tests, post-PSM regressions, or other techniques
Major packages that conduct PSM include STATA, R, and SAS
STATA version 12 and older have psmatch2; v13 has teffects psmatch
http://www.ssc.wisc.edu/sscc/pubs/stata_psmatch.htm compares the two
Note SPSS/PASW does not do PSM directly, but there is an R plugin for SPSS: http://arxiv.org/ftp/arxiv/papers/1201/1201.6385.pdf
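A minimal sketch of the matching, balancing, and comparison steps in R using the MatchIt package (the slide itself points to Stata's psmatch2 and teffects psmatch; the data frame and variable names here are hypothetical):

library(MatchIt)

# Nearest-neighbor matching on a propensity score; by default MatchIt estimates
# the score with a logistic regression of the treatment on the covariates.
m <- matchit(treat ~ female + hispanic + low_income + placement_score,
             data = students, method = "nearest", ratio = 1)

summary(m)                                 # balance diagnostics before/after matching
matched <- match.data(m)                   # matched participants plus comparison group
t.test(outcome ~ treat, data = matched)    # simple comparative analysis on the matched sample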
ALTERNATIVE PERSPECTIVES
A criterion that can be applied to regression and PSM: how do they perform at predicting new observations? (false positives, false negatives, etc.)
Regression and PSM methods can both be used as tools of discovery
SO: CHOOSE THE METHOD/MODEL WHICH YIELDS THE SMALLEST PREDICTION ERROR.
That may decide a battle in a particular setting (or occasion), but the war between methods will go on…
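One way to operationalize the prediction-error criterion, sketched in R with a simple holdout split (all variable names are hypothetical and completed is assumed to be coded 0/1; the same check could be repeated for a model fit to the PSM-matched sample):

set.seed(1)
n <- nrow(students)
train_idx <- sample(n, size = round(0.7 * n))
train <- students[train_idx, ]
test  <- students[-train_idx, ]

fit  <- glm(completed ~ participated + female + hispanic + low_income + placement_score,
            data = train, family = binomial)
pred <- predict(fit, newdata = test, type = "response") > 0.5

table(predicted = pred, actual = test$completed)   # false positives and false negatives
mean(pred != test$completed)                       # overall out-of-sample error rate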
FURTHER READING
Angrist, J. D., & Pischke, J. (2008). Mostly Harmless Econometrics: An Empiricist's Companion.
Morgan, S., & Harding, D. (2006). Matching Estimators of Causal Effects: From Stratification and Weighting to Practical Data Analysis Routines.
Caliendo, M., & Kopeinig, S. (2005). Practical Guide for PSM. www.caliendo.de/Papers/practical_revised_DP.pdf
Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70, 41-55.
Padgett, R., Salisbury, M., An, B., & Pascarella, E. (2010). Required, Practical, or Unnecessary? An Examination and Demonstration of Propensity Score Matching Using Longitudinal Secondary Data. New Directions for Institutional Research – Assessment Supplement (pp. 29-42). San Francisco, CA: Jossey-Bass.
Soledad Cepeda, M., Boston, R., Farrar, J., & Strom, B. (2003). Comparison of Logistic Regression versus Propensity Score When the Number of Events Is Low and There Are Multiple Confounders. American Journal of Epidemiology, 158, 280-287.
Propensity score software overview: http://www.biostat.jhsph.edu/~estuart/propensityscoresoftware.html
THANK YOU
Terrence Willett, Director of Planning, Research, and Knowledge Systems, Cabrillo College (terrence@cabrillo.edu)
Craig Hayward, Director of Planning, Research, and Accreditation, Irvine Valley College (chayward@ivc.edu)
Nathan Pellegrin, Director of Institutional Research, Peralta District (npellegrin@peralta.edu)