G Lecture 7: Revisiting Hierarchical Mixed Models
- A General Version of the Model
- Variance/Covariances of Two Kinds of Random Effects
- Parameter Estimation Details
- Examples of Alternative Models
Revisiting the Hierarchical Mixed Models

Last week we considered a two-level model for the jth person at the ith time. Let Y be some outcome, X some time-dependent process (such as a measure of time), and D some between-person variable.

LEVEL 1:  Y_ij = β_0j + β_1j X_ij + r_ij
LEVEL 2:  β_0j = γ_00 + γ_01 D_j + U_0j
          β_1j = γ_10 + γ_11 D_j + U_1j

The combined model is

Y_ij = γ_00 + γ_10 X_ij + γ_01 D_j + γ_11 (X_ij × D_j) + U_0j + U_1j X_ij + r_ij
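The combined model can be simulated directly, which makes the role of each term concrete. This is a minimal numpy sketch; all parameter values (g00, tau0, sigma, etc.) are hypothetical choices for illustration, not values from the lecture data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values, chosen only for illustration
g00, g01, g10, g11 = 2.0, 0.5, 1.0, -0.3   # fixed effects (gammas)
tau0, tau1, sigma = 0.8, 0.4, 0.5          # SDs of U0j, U1j, and r_ij

n_persons, times = 50, np.array([0.0, 1.0, 2.0])
rows = []
for j in range(n_persons):
    D = j % 2                        # between-person variable (two groups)
    U0 = rng.normal(0, tau0)         # person j's random intercept deviation
    U1 = rng.normal(0, tau1)         # person j's random slope deviation
    for X in times:
        r = rng.normal(0, sigma)     # level-1 residual
        # combined model: fixed part + U0j + U1j*X + r_ij
        Y = g00 + g10*X + g01*D + g11*X*D + U0 + U1*X + r
        rows.append((j, D, X, Y))
```

Each person contributes one draw of (U_0j, U_1j) shared across that person's time points, which is what induces within-person correlation.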
Two Kinds of Random Effects

U_0j and U_1j are assumed to be normally distributed with mean zero and covariance matrix T. Last week we assumed that r_ij was iid normal with mean zero and variance σ². We assumed that the Level 1 model accounted for within-person dependency of scores.
A More General Model

We don't have to assume that the r_ij terms are uncorrelated within person. Suppose we write a vector of within-person variables over n time points as

Y_j = W_j γ + Z_j U_j + r_j

where Y_j is the n×1 list of outcomes, W_j is the n×4 array of codes for the fixed effects, and Z_j is the n×2 array of codes for the random effects. The fixed-effect coefficients are collected in the vector γ, and the random-effect coefficients (U_0j, U_1j) are collected in U_j. The residuals are in a list r_j.
A Numerical Example

Suppose that we have a person from group 0 (D_j = 0) measured at times 0, 1, 2, with scores for Y of 2.329, 3.449, and …. We write

Y_j = [2.329, 3.449, …]′

W_j = [ 1  0  0  0 ]
      [ 1  0  1  0 ]
      [ 1  0  2  0 ]    (columns: intercept, D_j, X_ij, X_ij×D_j)

Z_j = [ 1  0 ]
      [ 1  1 ]
      [ 1  2 ]    (columns: intercept, X_ij)
More on the General Model

- W_j contains constants that say which of the fixed-effect constants (γ) apply to Y_j
- Z_j contains information relating the random effects (U_j) to the set of Y_ij values in Y_j
- r_j contains the residuals from the within-person linear fit

Var(U_j) = T
Var(r_j) = R
Var(Y_j) = Z_j T Z_j′ + R = V
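The variance identity Var(Y_j) = Z_j T Z_j′ + R is just matrix arithmetic, so it can be checked numerically. A minimal sketch, with hypothetical values for T and R (a random-intercept/slope T and R = σ²I):

```python
import numpy as np

# Z_j for one person measured at times 0, 1, 2 (intercept and slope columns)
Z = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])

# Hypothetical covariance of (U0j, U1j) and residual covariance R = sigma^2 * I
T = np.array([[0.64, 0.10],
              [0.10, 0.16]])
R = 0.25 * np.eye(3)

# Var(Y_j) = Z_j T Z_j' + R
V = Z @ T @ Z.T + R
print(V)
```

Note that even with a diagonal R, the random effects alone make the off-diagonal entries of V nonzero: the within-person scores are correlated.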
Thinking about Var(r_j) = R

The matrix R is the expected covariance among the residuals, after taking into account the fixed effects (γ) and the random regression effects (U_j).
- If the residuals are uncorrelated with common variance, R = σ²I (a diagonal matrix). This was implicitly assumed so far.
- Other structures for R can be considered, such as autoregressive, AR(1), or moving average, MA(1).
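An AR(1) structure needs only two parameters, a variance and a lag-one correlation: adjacent residuals correlate ρ, residuals two steps apart correlate ρ², and so on. A small sketch (the helper name ar1_R and the parameter values are illustrative):

```python
import numpy as np

def ar1_R(n, sigma2, rho):
    """AR(1) residual covariance: R[i, k] = sigma2 * rho**|i - k|."""
    idx = np.arange(n)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

# Three time points, hypothetical sigma^2 = 0.25 and rho = 0.6
R = ar1_R(3, sigma2=0.25, rho=0.6)
print(R)
```

Setting rho=0 recovers the R = σ²I pattern assumed so far, which is why the two structures are nested and can be compared with a likelihood ratio test later.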
The General Model: Sample Level

Suppose we string the Y_j vectors together into one long vector, Y = (Y_1′, Y_2′, …, Y_N′)′. Stacking the W_j the same way, and placing the Z_j in a block-diagonal array Z, gives the sample-level model

Y = W γ + Z U + r
A Numerical Example

(Display of the stacked W, Z, and Y arrays for the sample.)
Estimation Using General Framework

If the observations were all independent, the general equation would be

Y = W γ + r

and the OLS estimate of the fixed regression effects would be

γ̂ = (W′W)⁻¹ W′Y
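The OLS formula is one line of linear algebra. A minimal sketch on simulated independent data (the design and true coefficients are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Design with an intercept column and one predictor
W = np.column_stack([np.ones(n), rng.normal(size=n)])
gamma_true = np.array([2.0, 0.5])
Y = W @ gamma_true + rng.normal(scale=0.1, size=n)  # independent residuals

# OLS: gamma_hat = (W'W)^{-1} W'Y, via a linear solve rather than an inverse
gamma_hat = np.linalg.solve(W.T @ W, W.T @ Y)
print(gamma_hat)
```

Solving the normal equations with `np.linalg.solve` is numerically preferable to forming the inverse explicitly.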
Estimation Using General Framework

In the general case, the estimates of the fixed effects need to take into account both the random effects and the correlated residuals. If the matrices T and R were known, we would have

γ̂ = (W′V⁻¹W)⁻¹ W′V⁻¹Y

where Var(Y_j) = Z_j T Z_j′ + R = V (at the sample level, V is block-diagonal with one V_j block per person).
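Because V is block-diagonal, the weighted estimate can be accumulated person by person as γ̂ = (Σ_j W_j′V_j⁻¹W_j)⁻¹ Σ_j W_j′V_j⁻¹Y_j. A sketch with a hypothetical common 3×3 within-person covariance V (correlated errors are generated through the Cholesky factor of V):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical within-person covariance V_j (same for everyone here)
V = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.5],
              [0.3, 0.5, 1.0]])
Vinv = np.linalg.inv(V)
L = np.linalg.cholesky(V)          # used to generate correlated residuals

gamma_true = np.array([2.0, 0.5])
WtVW = np.zeros((2, 2))
WtVY = np.zeros(2)
for j in range(100):
    W_j = np.column_stack([np.ones(3), [0.0, 1.0, 2.0]])  # intercept + time
    Y_j = W_j @ gamma_true + L @ rng.normal(size=3)       # correlated errors
    WtVW += W_j.T @ Vinv @ W_j     # accumulate sum W_j' V^{-1} W_j
    WtVY += W_j.T @ Vinv @ Y_j     # accumulate sum W_j' V^{-1} Y_j

# GLS: gamma_hat = (sum W_j' V^{-1} W_j)^{-1} sum W_j' V^{-1} Y_j
gamma_hat = np.linalg.solve(WtVW, WtVY)
print(gamma_hat)
```

In practice T and R (and hence V) are not known, which is what motivates the iterative scheme on the next slide.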
Iterative Solution in General Case

PROC MIXED:
1) estimates the fixed effects using ordinary least squares
2) estimates T and R using ML or REML on the residuals
3) re-estimates the fixed effects using weighted least squares
4) re-estimates T and R using ML/REML
and so on, until convergence.
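The alternating scheme can be imitated in a toy setting. The sketch below is deliberately simplified relative to PROC MIXED: AR(1) residuals only, no random effects, and a moment estimate of ρ from the lag-one residual autocorrelation instead of ML/REML. It alternates between estimating the covariance from residuals and re-estimating the fixed effects by weighted least squares:

```python
import numpy as np

rng = np.random.default_rng(3)

def ar1_cov(n, sigma2, rho):
    idx = np.arange(n)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

# Simulate persons with AR(1) residuals (hypothetical truth: rho = 0.5)
n_t, n_persons, rho_true = 4, 200, 0.5
W_j = np.column_stack([np.ones(n_t), np.arange(n_t, dtype=float)])
gamma_true = np.array([1.0, 0.8])
L = np.linalg.cholesky(ar1_cov(n_t, 1.0, rho_true))
Ys = [W_j @ gamma_true + L @ rng.normal(size=n_t) for _ in range(n_persons)]

# Step 1: OLS start for the fixed effects
gamma = np.linalg.solve(n_persons * (W_j.T @ W_j), W_j.T @ sum(Ys))
for _ in range(5):
    # Step 2: estimate sigma^2 and rho from the current residuals
    resid = np.array([Y - W_j @ gamma for Y in Ys])
    sigma2 = np.mean(resid ** 2)
    rho = np.mean(resid[:, :-1] * resid[:, 1:]) / sigma2  # lag-1 autocorr.
    # Step 3: weighted least squares given the estimated covariance
    Vinv = np.linalg.inv(ar1_cov(n_t, sigma2, rho))
    gamma = np.linalg.solve(n_persons * (W_j.T @ Vinv @ W_j),
                            W_j.T @ Vinv @ sum(Ys))

print(gamma, rho)
```

A few passes are typically enough for the estimates to settle in a well-behaved problem like this one.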
Theory and Practice

In theory we can estimate both Var(U) = T and Var(r) = R. In practice, the estimates need to be structured. Consider two patterns for R: the identity pattern R = σ²I and the autoregressive pattern AR(1).
Simulated Data Example
PROC MIXED Syntax

filename myimport 'c:\mixedex.por';
proc convert spss=myimport out=sasuser.mixedex;
   TITLE1 'SMALL SIMULATED RANDOM COEFFICIENTS DATA';
run;
proc mixed covtest;
   class sub;
   model y=g time g*time /s;
   random intercept / subject=sub g gcorr type=un;
   REPEATED /TYPE=AR(1) SUBJECT=SUB R RCORR;
   TITLE2 'RANDOM INTERCEPT/SLOPE W CORRELATED RESIDUALS';
run;
Bar Anxiety Study: Estimating R as AR(1)

To estimate R, we include a REPEATED statement in PROC MIXED:

PROC MIXED NOCLPRINT COVTEST METHOD=REML;
   CLASS id;
   MODEL anx=group week group*week /s;
   RANDOM intercept week /SUBJECT=id type=un gcorr;
   REPEATED /TYPE=AR(1) SUBJECT=ID R RCORR;
   TITLE2 '2 RANDOM EFFECTS: ASSUMES RESIDUALS HAVE AR(1) CORR PATTERN';
Iteration Results

(Iteration history table: Iteration, Evaluations, -2 Res Log Like, Criterion.) Convergence criteria met.
Random Effect Estimates

(Estimated R Correlation Matrix for id 1: rows by Col1–Col4. Estimated G Correlation Matrix: rows Intercept and week, Col1–Col2.)
Random Effects Continued

(Covariance Parameter Estimates table: Cov Parm, Subject, Estimate, Standard Error, Z Value, Pr Z, with rows UN(1,1), UN(2,1), UN(2,2), AR(1), and Residual. The AR(1) and Residual parameters are significant, Pr Z < .0001.)
Fixed Effects for AR(1) Model

(Solution for Fixed Effects table: Effect, Estimate, Standard Error, DF, t Value, Pr > |t|, with rows Intercept, group, week, and group*week. All four effects are significant, Pr > |t| < .0001.)
Compare to Model Assuming R = I

Fixed effects are quite similar.
Comparing Alternative Models

Two tools are useful:
- AIC: Akaike's Information Criterion
  - Based on the log likelihood, but adjusted for the number of parameters estimated (the larger the number of estimated parameters, the larger the penalty)
  - Can be compared across REML runs
  - Smaller is better
- -2*Log Likelihood comparisons
  - Useful for nested models
  - Should be estimated using ML rather than REML
Example: R=I vs R=AR(1)

Original analysis, which assumed R = I:
- AIC from REML analysis = 784
- -2LL from ML = …

New analysis with R estimated as AR(1):
- AIC from REML analysis = 772
- -2LL from ML = …

One new parameter estimate reduced -2LL by 5.6; a chi-square of 5.6 on 1 df is significant.
- AR(1) appears to be the better model in terms of fit
- The random effect for the slope goes away!
- Fixed-effect results are very similar
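The chi-square comparison above can be checked directly. A quick sketch using scipy (only the -2LL difference of 5.6 and the 1 df come from the slide):

```python
from scipy.stats import chi2

# Difference in ML -2 log likelihood between the R = I and R = AR(1) models
delta_2ll = 5.6
df = 1  # one extra parameter: the AR(1) correlation

# Survival function gives P(chi-square_1 > 5.6), roughly 0.018
p = chi2.sf(delta_2ll, df)
print(p)
```

Equivalently, 5.6 exceeds the 1-df critical value of 3.84, so the AR(1) residual structure improves fit significantly at the .05 level.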