Michigan Team, February 2005
Amy Wagaman Bibhas Chakraborty Herle McGowan Susan Murphy Lacey Gunter Danny Almirall Anne Buu
Outline: Overview, Danny Almirall, Anne Buu, Bibhas Chakraborty, Lacey Gunter, Herle McGowan, Susan Murphy, Amy Wagaman, New Additions!
Primary Focus: Drug Dependence Prevention and Treatment. Time-varying treatments: development of designs and methodology that inform the construction of adaptive treatment strategies. – Chronic, relapsing disorders require sequencing and timing decisions – High-dimensional problems – Treatments are multi-component
Danny Almirall Structural Nested Mean Models for Assessing Time-Varying Effect Moderation Danny’s research is funded by a Rackham Fellowship; he collaborates with Tom Ten Have and Susan.
Effect Moderation. Two time points: S1, a1, S2(a1), a2, Y(a1, a2). The a’s are time-varying treatments; the S’s are time-varying observations; Y is the response. – μ2(a2, s2) = effect of treatment a2 on the response Y(a1, ·), given S1 = s1 and S2(a1) = s2. – μ1(a1, s1) = effect of treatment a1 on the response Y(·, 0), given S1 = s1. μ1 and μ2 describe the intermediate effects of a1 and a2, respectively, given the past; they describe time-varying effect moderation. No effect moderation means the μ’s are constant in the S’s.
The Structural Nested Mean Model. The SNMM provides a way to combine μ1 and μ2 in a model for the conditional mean of Y(a1, a2) given S1 and S2(a1). One can then pose parametric models for μ1 and μ2. The challenge is that the SNMM depends, additionally, on non-causal nuisance functions related to the “main effects” of the S’s.
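For reference, one way to write the two-time-point SNMM in the notation above is the following schematic (the exact arguments of the nuisance terms follow the general SNMM formulation rather than anything stated on these slides):

E[ Y(a_1, a_2) \mid S_1 = s_1, S_2(a_1) = s_2 ] = \beta_0 + \epsilon_1(s_1) + \mu_1(a_1, s_1) + \epsilon_2(a_1, s_1, s_2) + \mu_2(a_2, s_2)

with \mu_j = 0 whenever a_j = 0, and with \epsilon_1 and \epsilon_2 the non-causal nuisance functions (the “main effects” of the S’s) that have conditional mean zero given the past.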
Estimating the SNMM Semi-parametric estimators exist that do not require positing models for the nuisance functions, but are they too variable? Estimators that include models for the nuisance functions may be less variable. But positing mis-specified models for the nuisance function may induce bias in the estimates of the treatment effects. We wish to study the bias-variance tradeoff.
Anne Buu Beginning work on a K01 application to NIDA Anne’s research is funded by Bob Zucker in the Psychiatry Department
Ideas so far: Use smoothing techniques to analyze data from the University of Michigan-Michigan State Longitudinal Family Study (using functional data analysis and ideas from Rice, 2004). Construct child development profiles by family type. Model the impact of family factors on child development profiles.
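A minimal sketch of the smoothing idea, with made-up ages, scores, family-type labels, and smoothing settings (nothing below comes from the actual study data):

import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
ages = np.arange(3, 19, 3)                     # assessments roughly every 3 years

def smooth_profile(ages, scores, smoothing=1.0):
    """Fit a smooth developmental trajectory for one child."""
    return UnivariateSpline(ages, scores, s=smoothing)

# Simulate two illustrative family types with different mean trajectories.
profiles = {"type A": [], "type B": []}
for ftype, slope in [("type A", 0.8), ("type B", 1.2)]:
    for _ in range(25):
        scores = 10 + slope * ages + rng.normal(0, 2, len(ages))
        profiles[ftype].append(smooth_profile(ages, scores))

# Average the smoothed profiles within each family type on a fine age grid.
grid = np.linspace(3, 18, 100)
for ftype, splines in profiles.items():
    mean_profile = np.mean([s(grid) for s in splines], axis=0)
    print(ftype, mean_profile[[0, 50, -1]].round(2))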
U of M-MSU Family Study: An eighteen-year prospective study. Each family was recruited as a triad: alcoholic father, biological mother, and a 3-5 year old son. Each member was assessed on psychological, social, cognitive, and substance use measures every 3 years. During adolescence (ages 11-18), the target son was assessed annually. At later time points, siblings were also assessed.
Bibhas Chakraborty Design Strategies for Behavioral Intervention Research: A Simulation Study Bibhas’ research is funded by Vic Strecher’s NCI P-50; he collaborates with Vijay Nair, Vic Strecher, Linda Collins and Susan.
The Goal: Evaluate designs for optimizing a multi-component behavioral treatment. In the present study, we compare the classical randomized trial approach with a multistage experimental approach (MOST, using balanced fractional factorials). We want to identify situations in which one approach is distinctly better than the other.
The Set-up / Simulation Design: The true data-generating model is a complete mediation model, with 6 factors, 6 adherence variables, 6 mediators, and the response. Unknown component: Type (binary). All but one of the factors are binary. The scientist’s prior knowledge is used in both the classical and the experimental approach.
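A toy version of such a complete mediation data-generating model, with made-up coefficients and a made-up three-level stand-in for the non-binary component:

import numpy as np

rng = np.random.default_rng(1)
n = 500

# Six treatment factors: five binary, one with three levels (illustrative).
factors = rng.integers(0, 2, size=(n, 6)).astype(float)
factors[:, 5] = rng.integers(0, 3, size=n)

# Complete mediation: each factor influences the response only through its
# adherence variable and mediator.
adherence = 1 / (1 + np.exp(-(0.5 * factors - 0.25 + rng.normal(0, 0.5, (n, 6)))))
mediators = adherence * factors + rng.normal(0, 1, (n, 6))
response = mediators @ np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.5]) + rng.normal(0, 1, n)

print(round(response.mean(), 2))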
The Classical Approach: Two-group comparison of treatment vs. control (kitchen-sink approach) on the entire set of subjects. Post hoc analysis to: - investigate the reason for insignificance - refine the significant treatment combination. The post hoc analysis is either a dose-response analysis or a regression with the mediators as intermediate outcomes.
The Experimental Approach: Divide the available subjects into two groups and conduct two experimental stages: Screening and Refining. Screening stage: use a fractional factorial design to estimate main effects and interactions. Refining stage: - find the best level of a non-binary factor - resolve aliased significant interactions, if necessary. Thus the treatment is optimized proactively.
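As an illustration of the Screening stage, one standard way to generate a balanced fractional factorial for six binary components is a 2^(6-2) design (16 runs, resolution IV); the generators below are a textbook choice, not necessarily those used in this simulation study:

from itertools import product

# 2^(6-2) fractional factorial for factors A-F with generators E = ABC, F = BCD.
runs = []
for a, b, c, d in product((-1, 1), repeat=4):
    e = a * b * c
    f = b * c * d
    runs.append((a, b, c, d, e, f))

for run in runs:
    print(run)

# In this resolution-IV design the main effects are clear of two-factor
# interactions, while some two-factor interactions are aliased with each
# other; sorting those out is what the Refining stage is for.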
Lacey Gunter Comparing Reinforcement Learning Algorithms Lacey’s research is funded by this center; she works with Susan.
Reinforcement Learning: What is it? Learning through interaction with your environment. The general idea: based on present information and past experience, choose actions that achieve the best long-term results. Example: based on a patient’s past adherence and responses to treatment and current health status, what actions should we take to minimize the patient’s long-term drug dependence?
Reinforcement Learning Algorithms: Several algorithms exist for choosing the best set of actions; we are studying two of them. Q-learning: model the outcome based on past experience, the current action, and the interaction between past experience and the current action. A-learning: model the outcome based only on the current action and the interaction between past experience and the current action.
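A one-stage toy contrast of the two modeling strategies (variable names and the data-generating model are illustrative; the actual comparison involves multi-stage data):

import numpy as np

rng = np.random.default_rng(2)
n = 2000
S = rng.normal(size=n)                        # past experience / state
A = rng.integers(0, 2, size=n).astype(float)  # randomized current action, P(A=1) = 0.5
Y = 1.0 + 0.5 * S + (1.0 + 0.7 * S) * A + rng.normal(size=n)

# Q-learning style: fit the full outcome regression, including the main effect
# of S and the S-by-A interaction, by ordinary least squares.
Xq = np.column_stack([np.ones(n), S, A, S * A])
beta = np.linalg.lstsq(Xq, Y, rcond=None)[0]
psi_q = beta[2:]                              # effect parameters from Q-learning

# A-learning style: model only the part involving the action, replacing the
# model for the main effect of S with the known randomization probability.
pi = 0.5
X = np.column_stack([np.ones(n), S])          # covariates in the effect model
W = (A - pi)[:, None] * X
psi_a = np.linalg.solve(W.T @ (A[:, None] * X), W.T @ Y)

print("Q-learning:", psi_q.round(2), " A-learning:", psi_a.round(2))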
We conjecture: Q-learning tends to be more biased but less variable; A-learning tends to be less biased but more variable. We are studying which algorithm performs better in terms of the bias-variance trade-off in different settings. These trade-offs are altered when we use bagging.
Herle McGowan Causal Inference & Data Collection Herle’s research is funded by this center; she collaborates with Rob Nix, Karen Bierman and Susan.
Data Collection: – Data collection methods involving clinical judgment that can result in confounding – Data collection using a regression discontinuity design combined with clinical judgment
Using clinical judgment to readjust the dose throughout a trial causes confounding when we want to estimate a dose-response relationship. Currently Herle is writing a paper discussing this issue and illustrating the kinds of biases that can occur.
Regression Discontinuity: In a regression discontinuity (RD) design, all subjects scoring above a cutoff value are assigned to one treatment condition, while all subjects scoring below it are assigned to a second treatment condition.
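A toy illustration of the RD assignment rule and of reading the treatment effect off as the jump at the cutoff (all numbers are made up):

import numpy as np

rng = np.random.default_rng(3)
n, cutoff = 1000, 0.0
score = rng.normal(size=n)                   # baseline assignment score
treat = (score >= cutoff).astype(float)      # deterministic assignment rule
Y = 2.0 + 1.5 * score + 1.0 * treat + rng.normal(size=n)

# Fit separate lines on each side of the cutoff and compare their predictions
# at the cutoff itself.
below = np.polyfit(score[treat == 0], Y[treat == 0], 1)
above = np.polyfit(score[treat == 1], Y[treat == 1], 1)
effect = np.polyval(above, cutoff) - np.polyval(below, cutoff)
print("estimated effect at the cutoff:", round(effect, 2))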
Regression Discontinuity Designs & Clinical Judgment Herle’s thesis will be on this topic; she has already completed much of the literature review and background work.
Susan Murphy Experimental Design and Analysis Methods for Developing Adaptive Treatment Strategies Susan’s research is funded by this center, a K02, an R21 & Vic Strecher’s P-50.
Projects: – Experimental designs for developing treatment strategies, with Derek Bingham and Linda Collins. – Formulating less biased methods for constructing treatment strategies. – Using system dynamics models to inform the construction of treatment strategies, with Joelle Pineau, John Rush and Satinder Baveja.
Projects: – Working on a white paper concerning the methodological challenges in developing treatment strategies, with Dave Oslin, John Rush and Satinder Baveja. – Writing a grant (right now) with Vic Strecher, Caroline Richardson and Satinder Baveja to develop a prevention program designed to increase and maintain activity.
Amy Wagaman Constructing Tailoring Variables for Decision Making Amy’s research is funded by this center; she collaborates with Jim McKay, Ji Zhu and Susan.
Developing a Tailoring Variable: The primary objective is to develop a summary variable that can discriminate between patients for the purpose of assigning a treatment. This summary variable is constructed from individual characteristics and behavior on past treatment. The main tools are partial least squares (PLS) and methods for combining PLS with binary responses.
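A rough sketch of the PLS idea: compress many baseline variables into a single summary score and then check whether that score moderates a randomized treatment. This is a generic illustration using scikit-learn, not the actual procedure being developed:

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n, p = 300, 40
X = rng.normal(size=(n, p))                   # baseline characteristics, past-treatment behavior
A = rng.integers(0, 2, size=n).astype(float)  # randomized treatment
Y = X[:, 0] - X[:, 1] + (0.5 + X[:, 0]) * A + rng.normal(size=n)

# One-component PLS of the response on the baseline variables yields a single
# summary score per patient.
pls = PLSRegression(n_components=1)
pls.fit(X, Y)
summary = pls.transform(X).ravel()

# Does the summary score moderate the treatment effect?  Check the coefficient
# on the summary-by-treatment interaction.
design = np.column_stack([np.ones(n), summary, A, summary * A])
coefs = np.linalg.lstsq(design, Y, rcond=None)[0]
print("interaction coefficient:", round(coefs[3], 2))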
Motivation: Consider a substance abuse aftercare program, where the goal is to prevent patient relapse. Assume that there is a treatment effect, and a treatment interaction with a variable that measures counseling attendance during prior acute treatment (a rough proxy for motivation). Perhaps highly motivated patients would benefit more from one treatment on average, while patients with low motivation would benefit more from another.
Challenges: The summary variable must be constructed from data that live in a high-dimensional space, so dimension-reduction methods are needed. Subject-matter knowledge about which variables might be more useful should be taken into account. There is a danger that detected interactions may be spurious.
New Additions! Pawel Mierzejewski (beginning work with Satinder Baveja, John Rush and Susan on feature construction)