A Gentle Introduction to Linear Mixed Modeling and PROC MIXED

Presentation transcript:

A Gentle Introduction to Linear Mixed Modeling and PROC MIXED Richard Charnigo Associate Professor of Statistics and Biostatistics Director of Statistics and Psychometrics Core, CDART RJCharn2@aol.com

Objectives First hour: 1. Be able to formulate linear mixed models for longitudinal data involving a categorical and a continuous covariate. 2. Understand how linear mixed modeling goes beyond linear regression and repeated measures ANOVA. Second hour: 3. Be able to use PROC MIXED to fit a linear mixed model for longitudinal data involving a categorical and a continuous covariate.

Motivating example The Excel file at www.richardcharnigo.net/mixed contains a simulated data set: Two hundred college freshmen (“ID”) who drink alcohol are asked to estimate the number of drinks consumed during the preceding year. From this number we obtain an estimate of the average weekly number of drinks (“Drink”). The students are also assessed on negative urgency; the results are expressed as Z scores (“NegUrg”). One and two years later (“Time”), the students supply updated estimates of their drinking.
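For orientation, here is a minimal sketch of how the data might be read into SAS. The file name drinking.xlsx and the data set name WORK.DRINKS are assumptions (they do not appear in the original materials); ID, Drink, NegUrg, and Time are the variables described above.

/* Sketch only: read the simulated data set from the Excel file.    */
/* The file name and output data set name are assumed, not given in */
/* the presentation.                                                 */
proc import datafile="drinking.xlsx"
            out=work.drinks
            dbms=xlsx
            replace;
run;

proc contents data=work.drinks;   /* quick check of the imported variables */
run;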

Motivating example Two obvious “research questions” are: Is there an association between negative urgency and drinking at baseline? Does drinking tend to change over time and, if so, is the change predicted by negative urgency at baseline? Of course, we can envisage more complicated and realistic scenarios (e.g., with additional personality variables and/or interventions), but this simple scenario will help us get a handle on linear mixed modeling and PROC MIXED.

Exploratory data analysis Before pursuing linear mixed (or other statistical) modeling, we are well-advised to engage in exploratory data analysis. This can alert us to any gross mistakes in the data set, heretofore undetected, which may compromise our work. This can also suggest a structure for the linear mixed model and help us to anticipate what the results should be.

Exploratory data analysis [Four figure slides, not reproduced here: scatterplots of drinking versus negative urgency at each assessment and a plot of means and standard errors of drinking over time.]
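The figures themselves are not part of this transcript. Below is a rough sketch of how comparable displays could be produced; the low/average/high stratification of negative urgency (negurgstratum) and its cut points of -0.5 and +0.5 are illustrative assumptions, not taken from the presentation.

/* Assumed stratification of negative urgency into low / average / high. */
/* The cut points below are for illustration only.                       */
data work.drinks;
  set work.drinks;
  if NegUrg < -0.5 then negurgstratum = 0;        /* low     */
  else if NegUrg > 0.5 then negurgstratum = 2;    /* high    */
  else negurgstratum = 1;                         /* average */
run;

/* Scatterplots of drinking versus negative urgency at each assessment */
proc sgpanel data=work.drinks;
  panelby Time / columns=3;
  scatter x=NegUrg y=Drink;
run;

/* Means and standard errors of drinking over time, by stratum */
proc sgplot data=work.drinks;
  vline Time / response=Drink stat=mean limitstat=stderr group=negurgstratum;
run;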

Exploratory data analysis The scatterplots suggest the following: There are some outlying values, and drinking is not normally distributed, but there are no values that are obviously fabricated or miskeyed. There appears to be a positive association between drinking and negative urgency at baseline, and this association strengthens over time, as those higher in negative urgency seem to drink more in later years. The latter impression is also conveyed by the plot of means and standard errors.

A first linear mixed model We will log-transform drinking before fitting any linear mixed models, since linear mixed modeling assumes approximate normality of the outcome variable at fixed values of the predictor variables. Hereafter let Yjk denote subject j’s log-transformed drinking score at time k. Consider these three equations:
Yjk = a0 + a1 k + error, if subject j is low on negative urgency;
Yjk = b0 + b1 k + error, if subject j is average on negative urgency;
Yjk = c0 + c1 k + error, if subject j is high on negative urgency.

A first linear mixed model Three comments are in order: First, we are in essence regressing (log-transformed) drinking on time but allowing each subject to have one of three intercepts and one of three slopes, according to his/her negative urgency. Second, our research questions amount to asking whether a0, b0, c0 differ from each other, whether a1, b1, c1 differ from zero, and whether a1, b1, c1 differ from each other.

A first linear mixed model Third, the linear mixed model defined by the three equations can be expressed as a linear regression model. Let X1 and X2 respectively be dummy variables for low and high negative urgency. Then we may write
Yjk = b0 + (a0 – b0) X1j + (c0 – b0) X2j + [ b1 + (a1 – b1) X1j + (c1 – b1) X2j ] k + error.
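A sketch of one way this first model could be fit with PROC MIXED follows. The log transformation log(Drink + 1), the data set names, and the NOINT parameterization (so that the three intercepts and three slopes a0, b0, c0, a1, b1, c1 are reported directly) are assumptions consistent with, but not taken verbatim from, the presentation; negurgstratum is the assumed stratification created in the earlier sketch.

/* Assumed form of the log transformation */
data work.drinks2;
  set work.drinks;
  logdrink = log(Drink + 1);
run;

proc mixed data=work.drinks2;
  class negurgstratum;
  /* One intercept and one Time slope per negative urgency stratum */
  model logdrink = negurgstratum negurgstratum*Time / noint solution;
run;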

A first linear mixed model Now let us examine the results from fitting the linear mixed model using PROC MIXED. We see that PROC MIXED used all available observations (540), including observations from subjects who dropped out early (60). Along with accommodating a continuous covariate (time), this is why linear mixed modeling goes beyond a standard repeated measures ANOVA.

Number of Observations
Number of Observations Read        540
Number of Observations Used        540
Number of Observations Not Used      0

A first linear mixed model The variance of the error term is estimated to be 0.41. The estimates of the intercepts a0, b0, c0 are 0.87, 1.05, and 1.16. The estimates of the slopes a1, b1, c1 are -0.13, 0.08, and 0.27.

Covariance Parameter Estimates
Cov Parm    Estimate   Standard Error   Z Value   Pr > Z
Residual    0.4126     0.02525          16.34     <.0001

Solution for Fixed Effects
Effect               negurgstratum   Estimate    Standard Error   DF    t Value   Pr > |t|
negurgstratum                          0.8726    0.08812          534     9.90    <.0001
negurgstratum        1                 1.0536    0.05869                 17.95
negurgstratum        2                 1.1605    0.08143                 14.25
Time*negurgstratum                    -0.1309    0.07166                 -1.83    0.0683
Time*negurgstratum   1                 0.08094   0.04769                  1.70    0.0902
Time*negurgstratum   2                 0.2688    0.06580                  4.08

A first linear mixed model We can also use PROC MIXED to estimate any linear combinations of a0, b0, c0, a1, b1, c1. For example, below are estimates of c0 – a0 (high vs. low negative urgency freshmen), (c0 + c1) – (a0 + a1) (high vs. low negative urgency sophomores), and (c0 + 2c1) – (a0 + 2a1) (high vs. low negative urgency juniors).

Estimates
Label                   Estimate   Standard Error   DF    t Value   Pr > |t|
High vs low freshman    0.2879     0.1200           534   2.40      0.0168
High vs low sophomore   0.6876     0.07930                8.67      <.0001
High vs low junior      1.0873     0.1308                 8.31
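Linear combinations such as these can be requested with ESTIMATE statements. A sketch follows, assuming the NOINT parameterization above and assuming the CLASS levels of negurgstratum are ordered low, average, high; the coefficient ordering would need to be checked against the actual levels in the data.

proc mixed data=work.drinks2;
  class negurgstratum;
  model logdrink = negurgstratum negurgstratum*Time / noint solution;
  /* c0 - a0 : high vs. low negative urgency at Time = 0 (freshmen)  */
  estimate 'High vs low freshman'  negurgstratum -1 0 1;
  /* (c0 + c1) - (a0 + a1) : high vs. low at Time = 1 (sophomores)   */
  estimate 'High vs low sophomore' negurgstratum -1 0 1
                                   negurgstratum*Time -1 0 1;
  /* (c0 + 2c1) - (a0 + 2a1) : high vs. low at Time = 2 (juniors)    */
  estimate 'High vs low junior'    negurgstratum -1 0 1
                                   negurgstratum*Time -2 0 2;
run;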

A second linear mixed model As noted earlier, our first linear mixed model can be expressed as a linear regression model. How, then, does linear mixed modeling go beyond linear regression? The answer is that we may also allow each subject to have his/her own personal intercept and slope, rather than merely choosing from among three intercepts and three slopes. The personal intercept and slope may be related to negative urgency and to unmeasured or unmeasurable characteristics.

A second linear mixed model More specifically, we may propose the following:
Yjk = b0 + (a0 – b0) X1j + (c0 – b0) X2j + P1j + [ b1 + (a1 – b1) X1j + (c1 – b1) X2j + P2j ] k + error.
Above, P1j and P2j are unobserved zero-mean variables that adjust the intercept and slope for subject j. Thus, the interpretations of a0, b0, c0, a1, b1, c1 are subtly altered: they are now the average intercepts and slopes for subjects who are low, average, and high on negative urgency. Even so, our research questions are still addressed by estimating a0, b0, c0, a1, b1, c1.
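In PROC MIXED, the person-specific adjustments P1j and P2j are requested with a RANDOM statement. A sketch under the same assumptions as before; TYPE=UN asks for an unstructured 2 x 2 covariance matrix of the random intercept and slope, which is what the UN(1,1), UN(2,1), and UN(2,2) labels in the output below refer to.

proc mixed data=work.drinks2;
  class negurgstratum ID;
  model logdrink = negurgstratum negurgstratum*Time / noint solution;
  /* Random intercept and random Time slope for each subject */
  random intercept Time / subject=ID type=un;
run;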

A second linear mixed model While we can “predict” P1j and P2j from the data, in practice this is rarely done. However, their variances and covariance are routinely estimated.

Covariance Parameter Estimates
Cov Parm   Subject   Estimate   Standard Error   Z Value   Pr Z
UN(1,1)    ID        0.02333    0.03466          0.67      0.2505
UN(2,1)    ID        0.007716   0.02430          0.32      0.7509
UN(2,2)    ID        0.07522    0.02771          2.71      0.0033
Residual             0.2621     0.02868          9.14      <.0001

Solution for Fixed Effects
Effect               negurgstratum   Estimate    Standard Error   DF    t Value   Pr > |t|
negurgstratum                          0.8689    0.07399          160    11.74    <.0001
negurgstratum        1                 1.0573    0.04924                 21.47
negurgstratum        2                 1.1633    0.06829                 17.04
Time*negurgstratum                    -0.1245    0.07279                 -1.71    0.0892
Time*negurgstratum   1                 0.06846   0.04862                  1.41    0.1610
Time*negurgstratum   2                 0.2580    0.06705                  3.85    0.0002

A second linear mixed model Which model is better: the first or the second? Conceptually, the second model is appealing because P1j and P2j induce correlations among the repeated observations on subject j. Thus, we avoid the unrealistic assumption, present in linear regression, that observations are independent. Empirically, we may examine a model selection criterion such as the BIC; a smaller value is better. Here are results for the first and second models.

Fit Statistics (first model)
-2 Res Log Likelihood       1072.2
AIC (smaller is better)     1074.2
AICC (smaller is better)
BIC (smaller is better)     1078.5

Fit Statistics (second model)
-2 Res Log Likelihood       1014.2
AIC (smaller is better)     1022.2
AICC (smaller is better)    1022.3
BIC (smaller is better)     1035.4
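PROC MIXED reports these fit statistics by default. If one wishes to collect them into data sets for a side-by-side comparison, an ODS OUTPUT statement can be added to each run; a minimal sketch with assumed output data set names:

proc mixed data=work.drinks2;
  ods output FitStatistics=fit_model1;   /* assumed data set name */
  class negurgstratum;
  model logdrink = negurgstratum negurgstratum*Time / noint solution;
run;

proc mixed data=work.drinks2;
  ods output FitStatistics=fit_model2;   /* assumed data set name */
  class negurgstratum ID;
  model logdrink = negurgstratum negurgstratum*Time / noint solution;
  random intercept Time / subject=ID type=un;
run;

proc print data=fit_model1; run;
proc print data=fit_model2; run;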

A third linear mixed model So far we have treated negative urgency as categorical, but this is not necessary and perhaps not optimal. Let us now consider the following:
Yjk = (d0 + e0 Nj + P1j) + (d1 + e1 Nj + P2j) k + error.
Above, Nj denotes the continuous negative urgency variable, while P1j and P2j are, as before, adjustments to the intercept and slope.

A third linear mixed model Since negative urgency was expressed as a Z score, d0 and d1 are the average intercept and slope among those average on negative urgency. Likewise, d0 + e0 and d1 + e1 are the average intercept and slope among those one standard deviation above average on negative urgency. And, d0 – e0 and d1 – e1 are the average intercept and slope among those one standard deviation below average on negative urgency.
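A sketch of the third model in PROC MIXED appears below; NegUrg is now treated as continuous, so it does not appear on the CLASS statement. Data set and variable names are the same assumptions as before.

proc mixed data=work.drinks2;
  class ID;
  /* (d0 + e0*NegUrg) intercept and (d1 + e1*NegUrg) slope, plus */
  /* person-specific random intercept and slope                  */
  model logdrink = NegUrg Time NegUrg*Time / solution;
  random intercept Time / subject=ID type=un;
run;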

A third linear mixed model We estimate the variances and covariance of P1j and P2j as well as d0, e0, d1, e1.

Covariance Parameter Estimates
Cov Parm   Subject   Estimate   Standard Error   Z Value   Pr Z
UN(1,1)    ID        0.01952    0.03453          0.57      0.2859
UN(2,1)    ID        0.005827   0.02425          0.24      0.8101
UN(2,2)    ID        0.07199    0.02743          2.62      0.0043
Residual             0.2633     0.02885          9.13      <.0001

Solution for Fixed Effects
Effect        Estimate   Standard Error   DF    t Value   Pr > |t|
Intercept     1.0414     0.03495          198   29.79     <.0001
NegUrg        0.1152     0.03613          160    3.19     0.0017
Time          0.07230    0.03438          178    2.10     0.0369
NegUrg*Time   0.1446     0.03533                 4.09

A third linear mixed model In addition, we may estimate linear combinations of d0, e0, d1, e1. For example, 2e0 compares freshmen one standard deviation above average on negative urgency to freshmen one standard deviation below, 2e0 + 2e1 makes the same comparison for sophomores, and 2e0 + 4e1 for juniors. Moreover, the BIC prefers this model over either of the first two.

Estimates
Label                   Estimate   Standard Error   DF    t Value   Pr > |t|
High vs low freshman    0.2304     0.07225          160   3.19      0.0017
High vs low sophomore   0.5196     0.06747                7.70      <.0001
High vs low junior      0.8087     0.1178                 6.87

Fit Statistics
-2 Res Log Likelihood       1004.5
AIC (smaller is better)     1012.5
AICC (smaller is better)    1012.6
BIC (smaller is better)     1025.7
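These comparisons can again be obtained with ESTIMATE statements. In the sketch below, the coefficients 2, 2, and 4 implement 2e0, 2e0 + 2e1, and 2e0 + 4e1 under the third model's parameterization.

proc mixed data=work.drinks2;
  class ID;
  model logdrink = NegUrg Time NegUrg*Time / solution;
  random intercept Time / subject=ID type=un;
  estimate 'High vs low freshman'  NegUrg 2;                 /* 2*e0        */
  estimate 'High vs low sophomore' NegUrg 2 NegUrg*Time 2;   /* 2*e0 + 2*e1 */
  estimate 'High vs low junior'    NegUrg 2 NegUrg*Time 4;   /* 2*e0 + 4*e1 */
run;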

What’s next? After a short break, we will launch SAS and examine the PROC MIXED implementations of the three linear mixed models as well as the exploratory data analyses that preceded them. In addition to replicating the results shown in this presentation, we will discuss some potentially useful modifications of and additions to the SAS code (e.g., how to estimate other linear combinations of coefficients in PROC MIXED).