A workshop introducing doubly robust estimation of treatment effects Michele Jonsson Funk, PhD UNC/GSK Center for Excellence in Pharmacoepidemiology University of North Carolina at Chapel Hill
Conflict of Interest Statement Macro development funded by the Agency for Healthcare Research and Quality via a supplemental award to the UNC CERTs (U18 HS10397-07S1) Additional support from the UNC/GSK Center for Excellence in Pharmacoepidemiology and Public Health. No potential conflicts of interest with respect to this work.
Regression models assume that…
- The parametric form is correct. (Should we use logistic regression, or log-binomial?)
- We have included the correct predictors. (Should we really include age in this model?)
- Those predictors have been specified correctly. (Should age be coded continuously or in 10-year categories? Is there an interaction with race? What about higher-order terms? Etc.)
What if the model is wrong? (Lunceford & Davidian, Stat Med, 2004)
- Omit a true confounder (extreme example)
- True relationships known (simulated data)
- Vary the associations between: risk factor – outcome; confounder – exposure
[Figure: % bias by risk factor–outcome association, ML outcome regression with a false model (Lunceford & Davidian, Stat Med, 2004)]
[Figure: % bias by risk factor–outcome association, doubly robust (DR) estimator with a false model for the outcome regression (Lunceford & Davidian, Stat Med, 2004)]
[Figure: CI coverage by risk factor–outcome association, ML outcome regression with the true model (Lunceford & Davidian, Stat Med, 2004)]
[Figure: CI coverage by risk factor–outcome association, DR with true models for the propensity score and outcome regression (Lunceford & Davidian, Stat Med, 2004)]
[Figure: CI coverage by risk factor–outcome association, ML outcome regression with a false model (Lunceford & Davidian, Stat Med, 2004)]
[Figure: CI coverage by risk factor–outcome association, DR with a true model for the propensity score and a false model for the outcome regression (Lunceford & Davidian, Stat Med, 2004)]
Doubly robust (DR) estimation from 30,000 feet
- Robins & colleagues recognized the doubly robust property in the mid-1990s
- Combines standardization (or reweighting) with regression
- Part of the family of methods that includes propensity scores and inverse probability weighting
Conceptual description
Doubly robust (DR) estimation uses two models:
- Propensity score model for the confounder–exposure (or treatment) relationship
- Outcome regression model for the confounder–outcome relationship, under each exposure condition
These two stages can use different subsets of covariates and different parametric forms. If either model is correct, then the DR estimate of the treatment effect is unbiased.
[Diagram: two stages. (1) Propensity score model: risk factors (potential confounders) → exposure (treatment). (2) Outcome regression: risk factors (potential confounders) → outcome.]
Causal effect of interest
Comparing counterfactual scenarios:
- E(Y1): whole population treated (exposed), vs.
- E(Y0): whole population untreated (unexposed)
Average causal effect of treatment:
- E(Y1) – E(Y0): difference
- E(Y1) / E(Y0): ratio
In non-randomized studies, the unexposed may not fairly reflect what would have happened to the exposed had they been unexposed (confounding).
Doubly robust estimator
- Y: outcome
- Z: binary treatment (exposure)
- X: baseline covariates (confounders plus other prognostic factors)
- e(X, β): model for the true propensity score
- m0(X, α0) and m1(X, α1): regression models for the true relationship between covariates and the outcome within each level of treatment
Causal effect of interest (ΔDR): the difference in mean response if everyone in the population received treatment versus everyone receiving no treatment; ΔDR = E(Y1) – E(Y0).
Adapted from Davidian M, DR Presentation, 2007
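In this notation, the two working models that the DR estimator combines can be written compactly as follows (a restatement of the definitions above, not new material):
\[
e(X,\beta) \approx \Pr(Z = 1 \mid X), \qquad
m_z(X,\alpha_z) \approx E(Y \mid Z = z, X), \quad z = 0, 1.
\]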
Doubly robust estimator: E(Y1), the average population response with treatment (exposure). [Estimator formula shown on the original slide.] Adapted from Davidian M, DR Presentation, 2007
Average population response with treatment (μ1,DR): an IPTW estimator plus an 'augmentation' term. [Formula shown on the original slide.] Adapted from Davidian M, DR Presentation, 2007
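The formula referenced on this slide is not legible in this copy; its standard augmented inverse-probability-weighted form, written in the notation defined two slides earlier (as in Lunceford & Davidian, 2004), is:
\[
\hat{\mu}_{1,\mathrm{DR}} = \frac{1}{n}\sum_{i=1}^{n}\left[\frac{Z_i Y_i}{e(X_i,\hat{\beta})} - \frac{Z_i - e(X_i,\hat{\beta})}{e(X_i,\hat{\beta})}\, m_1(X_i,\hat{\alpha}_1)\right],
\]
\[
\hat{\mu}_{0,\mathrm{DR}} = \frac{1}{n}\sum_{i=1}^{n}\left[\frac{(1-Z_i)\, Y_i}{1-e(X_i,\hat{\beta})} + \frac{Z_i - e(X_i,\hat{\beta})}{1-e(X_i,\hat{\beta})}\, m_0(X_i,\hat{\alpha}_0)\right],
\qquad
\hat{\Delta}_{\mathrm{DR}} = \hat{\mu}_{1,\mathrm{DR}} - \hat{\mu}_{0,\mathrm{DR}}.
\]
The first term in each bracket is the IPTW estimator; the second is the 'augmentation', which brings in the outcome regression.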
True PS model; false regression model (I): [equations for the propensity score model and the outcome regression model] (Adapted from Davidian M, DR Presentation, 2007)
True PS model; false regression model (II): [derivation, assuming no unmeasured confounders] (Adapted from Davidian M, DR Presentation, 2007)
False PS model; true regression model (I): [equations for the propensity score model and the outcome regression model] (Adapted from Davidian M, DR Presentation, 2007)
False PS model; true regression model (II): [derivation, assuming no unmeasured confounders] (Adapted from Davidian M, DR Presentation, 2007)
Overly simplified statistics
ΔDR = [E(Y1) + junk] – [E(Y0) + junk]
where junk = 0 if either the propensity score model or the outcome regression model is correct, so that
ΔDR = E(Y1) – E(Y0)
Adapted from Davidian M, DR Presentation, 2007
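The algebra behind the 'junk = 0' claim is short; for the treated arm (a standard argument, not taken from these slides, and assuming no unmeasured confounders):
\[
\frac{Z Y}{e(X)} - \frac{Z-e(X)}{e(X)}\, m_1(X) = Y_1 + \frac{Z-e(X)}{e(X)}\bigl(Y_1 - m_1(X)\bigr),
\]
so the 'junk' is the expectation of the second term. If the propensity score model is correct, E[Z – e(X) | X, Y1] = 0; if the outcome regression is correct, E[Y1 – m1(X) | X, Z] = 0. Either way the product has mean zero, and the untreated arm works the same way.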
Standard errors
- Option 1: Sandwich estimator
- Option 2: Bootstrap
Adapted from Davidian M, DR Presentation, 2007
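To make this concrete, here is a minimal sketch in Python of the doubly robust point estimate together with a bootstrap standard error (Option 2). This is an illustration only, not the SAS macro: the function names (dr_estimate, bootstrap_se), the pandas/scikit-learn calls, and the choice of logistic regression for the propensity score model and linear regression for a continuous outcome are all assumptions made for the sketch.

# Illustrative doubly robust (AIPW) sketch with a bootstrap SE.
# NOT the %dr SAS macro; a minimal Python analogue for intuition only.
# Assumed (hypothetical) inputs: a pandas DataFrame with a binary treatment
# column, an outcome column, and covariate lists for each component model.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression, LinearRegression

def dr_estimate(df, ps_covs, out_covs, treat, outcome):
    z = df[treat].to_numpy()
    y = df[outcome].to_numpy()
    Xp = df[ps_covs].to_numpy()   # covariates for the propensity score model
    Xo = df[out_covs].to_numpy()  # covariates for the outcome regressions
    # (1) propensity score model: covariates -> treatment
    ps = LogisticRegression(max_iter=1000).fit(Xp, z).predict_proba(Xp)[:, 1]
    # (2) outcome regression fit within each treatment group, predicted for everyone
    m1 = LinearRegression().fit(Xo[z == 1], y[z == 1]).predict(Xo)
    m0 = LinearRegression().fit(Xo[z == 0], y[z == 0]).predict(Xo)
    # augmented IPTW means under "everyone treated" and "everyone untreated"
    mu1 = np.mean(z * y / ps - (z - ps) / ps * m1)
    mu0 = np.mean((1 - z) * y / (1 - ps) + (z - ps) / (1 - ps) * m0)
    return mu1 - mu0   # difference in mean response (deltaDR)

def bootstrap_se(df, ps_covs, out_covs, treat, outcome, reps=200, seed=1):
    # Option 2: resample the data with replacement and re-estimate each time
    rng = np.random.default_rng(seed)
    ests = [dr_estimate(df.sample(len(df), replace=True,
                                  random_state=int(rng.integers(1_000_000_000))),
                        ps_covs, out_covs, treat, outcome)
            for _ in range(reps)]
    return float(np.std(ests, ddof=1))

The sandwich estimator (Option 1) is what the macro implements; it avoids the repeated refitting that the bootstrap requires.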
Simulation findings (Bang & Robins, 2005)
- N = 500, 1,000 iterations
- False propensity score model: 1 of 4 true predictors of treatment, plus 1 'noise' variable independent of treatment
- False outcome regression model: omit one risk factor, a higher-order term, and an interaction term
Bias under false models
Analysis method; true model(s); false model (PS / OR / both):
-0.01   0.86   0.00   -1.56
DR   -0.09   0.92
H Bang & JM Robins, Biometrics (2005).
Variance under false models
Analysis; true model; false model (PS / OR / both):
0.21   0.15   0.07
DR   0.09   0.08   0.28
H Bang & JM Robins, Biometrics (2005).
Recapping L&D simulations
- Compare the performance of propensity score analysis, IPW, outcome regression (OR), and DR
- Omit a true confounder (extreme example)
- True relationships known (simulated data)
- Vary the associations between: risk factor – outcome; confounder – exposure
- Vary sample size
If all models are true…
- Bias < 3% for all methods except PS analysis using strata (due to residual confounding)
- Variance similar in general: VarOR < VarDR (slightly) if the confounder–exposure relationship is strong; VarDR < VarIPW
- If the OR model is right, it is the most efficient; but we have no way of knowing whether or not it's right.
Lunceford & Davidian, Stat Med, 2004
If the outcome regression model is false…
- Bias: DR always < 1%; OR biased by 10–20% in most scenarios
- Efficiency: DR nearly as efficient as the correct model, except when the confounder–exposure relationship is strong; DR always more efficient than IPW
- Confidence interval coverage: DR coverage nominal; ML coverage poor
- Adding risk factors to the PS model improves precision
- If both models are nearly right (only a little wrong), bias is small
Lunceford & Davidian, Stat Med, 2004
Discussion
If this method offers some protection against model misspecification, why isn't it being used by pharmacoepidemiologists?
SAS macro for DR estimation: objectives
- Facilitate wider use of DR estimation
- Improve performance by implementing the sandwich estimator for SEs
- Enhance usability by following SAS conventions
- Provide the user with relevant diagnostic details
http://www.harryguess.unc.edu SAS macro for doubly robust estimation including documentation Dataset for sample analyses (1.7MB, optional)
Running the DR macro
By design, the DR macro uses common SAS® syntax for specifying the source dataset, variables for modeling, and additional options:

%dr(%str(
  options data=SAS-data-set descending;
  wtmodel exposure = x y z / method=dr dist=bin showcurves;
  model outcome = x y z / dist=n;
));
DR macro: output
- Propensity score (wtmodel) results
- Descriptive statistics for weights
- Graph of propensity score curves by exposure status
- Reweighted regression model among the unexposed (dr0)
- Reweighted regression model among the exposed (dr1)
- Doubly robust estimate and standard error
DR macro: output
- totalobs: n in the dataset
- usedobs: n used in the analysis; usedobs < totalobs due to missing data or use of the common support option
- dr0: average response had all been unexposed, adjusted for risk factors
- dr1: average response had all been exposed, adjusted for risk factors
- deltadr: dr1 – dr0; difference in mean response for a continuous outcome, risk difference for a dichotomous outcome
- se: SE of deltadr

Obs  totalobs  usedobs  dr0          dr1       deltadr   se
1    100000    79292    0.005546853  0.034117  0.028570  0.002026204
Example analysis: CVD outcomes
- Continuous outcome: CVD score (e.g., LDL)
- Binary outcome: acute MI
- Exposure (treatment): statin use (yes/no); 50% of the population exposed
- 10 covariates (5 continuous, 5 binary)
- Data are simulated, so the true relationships among exposure, covariates, and outcome are known
Example analysis
%dr(%str(
  options data=final descending;
  wtmodel statin = hs smk hxcvd black age bmi exer chol income
          / method=dr dist=bin showcurves common_support=.99;
  model cvdscore = hs female smk hxcvd age age2 bmi bmi2 exer chol / dist=n;
));
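For readers more comfortable outside SAS, the same analysis expressed with the hypothetical dr_estimate() sketch from the standard-errors slide might look like the lines below. The DataFrame name `final` and the column names are taken from the example; age2 and bmi2 would have to be created beforehand, as the caveats later note.

# Hypothetical mirror of the %dr call above, using the earlier dr_estimate() sketch
ps_covs  = ["hs", "smk", "hxcvd", "black", "age", "bmi", "exer", "chol", "income"]
out_covs = ["hs", "female", "smk", "hxcvd", "age", "age2", "bmi", "bmi2", "exer", "chol"]
effect = dr_estimate(final, ps_covs, out_covs, treat="statin", outcome="cvdscore")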
[Figure: propensity score distributions among the exposed and unexposed, produced by the 'showcurves' option]
Results from sample analysis: effect estimates

Method                                  Result    %bias    SE
True                                    -1.099
Crude                                    1.869    270.0%   0.089
Maximum likelihood                      -1.089      0.9%   0.023
Doubly robust:
  PS and outcome models correct         -1.102     -0.3%   0.025
  One model incorrect                   -1.117     -1.7%
  One model incorrect                   -1.093      0.5%   0.022
  Both models incorrect                  0.397    136.1%   0.049
Validation: simulation methods (see the sketch below)
- Draw a random sample (n) from the simulated population; vary n from 100 to 5,000
- Estimate the doubly robust effect of treatment and its standard error
- Repeat 1,000 times
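A compact sketch of this validation loop, reusing the hypothetical dr_estimate() from the earlier sketch; the function name validate, the population data frame, and true_effect are placeholders for illustration.

# Repeated-sampling validation sketch (hypothetical names; reuses dr_estimate()).
import numpy as np

def validate(population, ps_covs, out_covs, treat, outcome,
             true_effect, n=500, iters=1000, seed=42):
    rng = np.random.default_rng(seed)
    ests = []
    for _ in range(iters):
        # draw a random sample of size n from the simulated population
        sample = population.sample(n, random_state=int(rng.integers(1_000_000_000)))
        ests.append(dr_estimate(sample, ps_covs, out_covs, treat, outcome))
    ests = np.asarray(ests)
    # bias of the DR estimate and its empirical standard error across iterations
    return ests.mean() - true_effect, ests.std(ddof=1)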
[Figures: validation results, continuous outcome]
[Figures: validation results, dichotomous outcome]
Caveats
- SEs are conservative when the sample size is small; bootstrapping may be used in this case to get more appropriate SEs
- The macro only provides difference estimates (not RR or OR) for now
- Exposure must be dichotomous; the outcome must be continuous or dichotomous (time-to-event analysis not supported)
- Some SAS conventions are not recognized within the macro code: where and class statements are not recognized; interaction terms and higher-order polynomials must be created in a prior data step
Practical considerations
How to choose which covariates to include? Good question. Based on simulations from the PS literature:
- Include all risk factors for the outcome
- May omit predictors of treatment that do not affect the outcome
Practical considerations
What to do with estimates from various models that differ?

Method                 Result   %bias   SE
Crude                   1.90      ?     0.089
Maximum likelihood     -1.09            0.023
Propensity score       -1.50            0.050
Doubly robust          -1.12            0.024
Practical considerations
What sort of diagnostics should be checked? (A generic version of these checks is sketched below.)
- Potentially influential observations with extreme PS values ('common_support' option in the SAS macro)
- Distribution of PS scores stratified by treatment / exposure group ('showcurves' option in the SAS macro)
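The macro's 'showcurves' and 'common_support' options automate these diagnostics; the sketch below is a generic version of the same idea, not the macro's own output. The function name check_overlap and the inputs ps (estimated propensity scores) and z (binary treatment) are illustrative assumptions.

# Generic PS diagnostics sketch (not the macro's showcurves/common_support output).
import numpy as np
import matplotlib.pyplot as plt

def check_overlap(ps, z, bins=20):
    ps, z = np.asarray(ps), np.asarray(z)
    # distribution of the propensity score, stratified by exposure group
    plt.hist(ps[z == 1], bins=bins, alpha=0.5, density=True, label="Exposed")
    plt.hist(ps[z == 0], bins=bins, alpha=0.5, density=True, label="Unexposed")
    plt.xlabel("Propensity score")
    plt.legend()
    plt.show()
    # region where both groups have support; observations outside it are
    # candidates for trimming (extreme PS values can be highly influential)
    lo = max(ps[z == 1].min(), ps[z == 0].min())
    hi = min(ps[z == 1].max(), ps[z == 0].max())
    n_outside = int(np.sum((ps < lo) | (ps > hi)))
    print(f"Common support: [{lo:.3f}, {hi:.3f}]; {n_outside} observations outside")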
[Figure: checking the PS distribution; propensity score (0 to 1) by treatment group (Tx=0, Tx=1) across strata 1–6]
Limitations
- DR estimation is not a panacea for unmeasured confounding. Recall: the 'junk' term only reduces to 0 under the assumption of no unmeasured confounders.
- One of the models must be correct for the estimator to be unbiased. Bang & Robins suggest that it will be minimally biased if both models are nearly right…
- Standard errors tend to be slightly larger compared to a single correctly specified regression model.
- Explaining DR estimation in your methods section could be interesting…
Applications
DR estimation is potentially valuable for comparative effectiveness studies, in particular for head-to-head comparisons of treatment effectiveness or adverse events from observational data when RCTs can't or won't be done: for ethical or economic reasons, because outcomes are rare or occur late, or because faster analyses of possible sentinel events are needed.
Extensions
- Missing data
- Incomplete follow-up in RCTs
- Longitudinal marginal structural models
- Goodness-of-fit test?
Summary
- Observational studies of treatment effects depend on statistical models to disentangle causal effects from confounding.
- We can never be certain that the statistical model we have chosen is correct.
- The DR estimate is unbiased if at least one of the two component models is right, and therefore provides some protection against model misspecification.
- The 'price' of double robustness is slightly larger standard errors than a single correctly specified regression model.
- The assumption of no unmeasured confounders is still required.
References
- Bang, H. and Robins, J. M. (2005). Doubly robust estimation in missing data and causal inference models. Biometrics 61, 962–973.
- Lunceford, J. K. and Davidian, M. (2004). Stratification and weighting via the propensity score in estimation of causal treatment effects: A comparative study. Statistics in Medicine 23, 2937–2960.
- Robins, J. M. (2000). Robust estimation in sequentially ignorable missing data and causal inference models. Proceedings of the American Statistical Association Section on Bayesian Statistical Science, 6–10.
- Robins, J. M., Rotnitzky, A., and Zhao, L. P. (1994). Estimation of regression coefficients when some regressors are not always observed. Journal of the American Statistical Association 89, 846–866.
- Rotnitzky, A., Robins, J. M., and Scharfstein, D. O. (1998). Semiparametric regression for repeated outcomes with nonignorable nonresponse. Journal of the American Statistical Association 93, 1321–1339.
- Scharfstein, D. O., Rotnitzky, A., and Robins, J. M. (1999). Adjusting for nonignorable drop-out using semiparametric nonresponse models. Journal of the American Statistical Association 94, 1096–1120 (with Rejoinder, 1135–1146).
- van der Laan, M. J. and Robins, J. M. (2003). Unified Methods for Censored Longitudinal Data and Causality. New York: Springer-Verlag.
Acknowledgements Collaborators on the development of the SAS macro: Chris Wiesen, PhD, Odum Institute for Research in Social Science, University of North Carolina, Chapel Hill, NC Daniel Westreich, MSPH, Department of Epidemiology, University of North Carolina, Chapel Hill, NC Marie Davidian, PhD, Department of Statistics, North Carolina State University, Raleigh, NC
Acknowledgements (II) Agency for Healthcare Research and Quality Supplemental Award to the UNC CERTs (U18 HS10397-07S1) UNC/GSK Center for Excellence in Pharmacoepidemiology and Public Health Kevin Anstrom, Lesley Curtis, Brad Hammill, and Rex Edwards from the Duke CERTs team for valuable feedback on the alpha version. Thanks to students from UNC’s EPID 369/730, a causal modeling course, for valuable feedback on the beta version. Presented in memory of Harry Guess, MD, PhD, 1940-2006, who co-authored the initial proposal to develop a SAS macro for doubly robust estimation.
Contact Information Michele Jonsson Funk, PhD Research Assistant Professor Department of Epidemiology University of North Carolina Chapel Hill NC 27599-7521 mfunk@unc.edu 919-966-8431 (ph) 919-843-3120 (fax) http://www.harryguess.unc.edu
Questions & Discussion