Lecture 11 (Chapter 9).

Generalized Linear Mixed Models with Random Effects
The logistic regression model with random intercept
  Example: 2x2 crossover trial
  Example: Indonesian Children's Health Study
The Poisson regression model with random intercept
  Example: seizure data

Random Effects GLM
Basic idea: there is natural heterogeneity among subjects. The model combines a systematic part (the regression on covariates) and a random part (subject-specific random effects).
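A minimal sketch of this setup in symbols (assuming a single random intercept Ui and a link function g, matching the notation used later in this lecture):

  \[
    g\bigl(\mathrm{E}[Y_{ij} \mid U_i]\bigr)
      = \underbrace{x_{ij}^{\top}\beta^{*}}_{\text{systematic part}}
      + \underbrace{U_i}_{\text{random part}},
    \qquad U_i \sim N(0, G).
  \]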

Example: 2x2 crossover trial
In random effects models, the regression coefficients measure the more direct, subject-specific influence of the explanatory variables on the responses of heterogeneous individuals. For example:

Example: 2x2 crossover trial (recall)
This model states that:
1. Each person has his/her own probability of a positive response under placebo (B).

Example: 2x2 crossover trial
This model also states that:
2. A person's odds of a normal response are multiplied by the exponentiated treatment coefficient (exp(β1*) in the sketch below) when taking drug A, regardless of the initial risk. The factor compares the individual odds of a normal response when taking the drug (x=1) with the individual odds of a normal response on placebo (x=0), the "initial risk".
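A sketch of this random-intercept logistic model in symbols (the coefficient names β0*, β1* are illustrative; x_ij = 1 for drug A and 0 for placebo B):

  \[
    \operatorname{logit} \Pr(Y_{ij} = 1 \mid U_i) = \beta_0^{*} + \beta_1^{*} x_{ij} + U_i,
    \qquad U_i \sim N(0, G),
  \]

so each person has his/her own baseline probability expit(β0* + Ui) on placebo, and the individual odds are multiplied by a common factor on drug A:

  \[
    \frac{\text{odds}(Y_{ij}=1 \mid U_i,\, x_{ij}=1)}{\text{odds}(Y_{ij}=1 \mid U_i,\, x_{ij}=0)} = e^{\beta_1^{*}}.
  \]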

In logistic models, in other words:
Marginal model estimates are smaller (in absolute value) than random effects model estimates.
Tests of hypotheses are approximately the same in random effects models as in marginal models.

Correspondence between regression parameters in random effects and marginal models
Let β be the vector of regression coefficients under a marginal model.
Let β* be the vector of regression coefficients under a random effects model.
G is the variance of the random effects, Ui ~ N(0, G).
In general |β*| ≥ |β|, and G = 0 implies β* = β (the random effects model reduces to the marginal model).
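For the logistic model with a normally distributed random intercept, a standard approximation (often attributed to Zeger, Liang and Albert, 1988) makes the attenuation explicit:

  \[
    \beta \approx \left( c^{2} G + 1 \right)^{-1/2} \beta^{*},
    \qquad c = \frac{16\sqrt{3}}{15\pi} \approx 0.588,
  \]

so β = β* when G = 0 and |β| < |β*| whenever G > 0.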

Estimation of Generalized Linear Mixed Models
Setting:
  f(Yij | Ui) is in the exponential family
  Yi1, …, Yini | Ui are independent
  Ui ~ f(Ui; G)
Maximum likelihood estimation: Ui is a set of unobserved variables which we integrate out of the likelihood.
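In symbols, the likelihood that is maximized integrates over the unobserved Ui (a sketch, with m subjects and ni observations on subject i, using the conditional independence assumptions above):

  \[
    L(\beta^{*}, G) = \prod_{i=1}^{m} \int \Bigl\{ \prod_{j=1}^{n_i} f(y_{ij} \mid U_i; \beta^{*}) \Bigr\}\, f(U_i; G)\, dU_i .
  \]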

Maximum Likelihood Estimation of G and β
Assume: Ui ~ N(0, G).
We can learn about one individual's coefficients by understanding the variability in coefficients across the population (G).

Maximum Likelihood Estimation of G and β (cont'd)
If G is small, then rely on the population-average coefficients to estimate those for an individual. We weight the cross-sectional information more heavily and borrow strength across subjects.
If G is large, then rely heavily on the data from each individual to estimate their own coefficients. We weight the longitudinal information more heavily, since comparisons within a subject are likely to be more precise than comparisons among subjects.

Maximum Likelihood Estimation
  "Average away" the random effects (Ui) by integrating over their distribution
  Use all the data
  Use an EM algorithm
  Use numerical integration (sketched below)
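As a concrete illustration of the numerical-integration route (not the course's own code), the sketch below fits a random-intercept logistic model by maximum likelihood, integrating Ui out of the likelihood with Gauss-Hermite quadrature; the data layout and variable names (y, X, subject) are assumptions made for the example.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit  # inverse logit

    def neg_log_lik(params, y, X, subject, n_nodes=20):
        """-log L(beta*, G) for a random-intercept logistic model,
        with Ui ~ N(0, G) integrated out by Gauss-Hermite quadrature."""
        p = X.shape[1]
        beta = params[:p]
        sd = np.exp(params[p])            # sqrt(G), kept positive via log scale
        z, w = np.polynomial.hermite_e.hermegauss(n_nodes)
        w = w / np.sqrt(2.0 * np.pi)      # weights for an N(0, 1) integrand
        eta = X @ beta
        loglik = 0.0
        for i in np.unique(subject):
            idx = subject == i
            # conditional likelihood of subject i's responses at each node Ui = sd * z_k
            lin = eta[idx][:, None] + sd * z[None, :]
            pr = expit(lin)
            lik_at_nodes = np.prod(np.where(y[idx][:, None] == 1, pr, 1.0 - pr), axis=0)
            loglik += np.log(np.sum(w * lik_at_nodes))
        return -loglik

    # Simulated example: 200 subjects, 4 observations each,
    # true beta* = (-1, 1) and sqrt(G) = 2
    rng = np.random.default_rng(0)
    subject = np.repeat(np.arange(200), 4)
    X = np.column_stack([np.ones(800), rng.binomial(1, 0.5, 800)])
    U = 2.0 * rng.standard_normal(200)
    y = rng.binomial(1, expit(X @ np.array([-1.0, 1.0]) + U[subject]))

    fit = minimize(neg_log_lik, x0=np.zeros(3), args=(y, X, subject), method="BFGS")
    print("beta* estimate:", fit.x[:2], " G estimate:", np.exp(fit.x[2]) ** 2)

In practice one would use established GLMM software (e.g. adaptive quadrature in R's lme4::glmer or SAS PROC NLMIXED); this hand-rolled version only illustrates what "integrating Ui out of the likelihood" means computationally.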

Example: 2x2 crossover trial

Maximum Likelihood Estimation - Example: 2x2 crossover trial

Maximum Likelihood Estimation - Example: 2x2 crossover trial
95% of the subjects would fall within ±(2 x 4.9) logit units of the overall mean. This range on the logit scale translates into probabilities between 0 and 1, i.e. some people have little chance and others have a very high chance of a normal reading, in both the placebo and the treatment groups.

Maximum Likelihood Estimation - Example: 2x2 crossover trial
Assuming a constant treatment effect for all persons, the odds of a normal response for a subject are estimated to be exp(1.9) ≈ 6.7 times higher on the active drug than on the placebo.

*robust SE **model-based SE

Assume: constant treatment effect for everybody.
exp(1.9) = 6.7 => the odds of a normal response for a subject are 6.7 times higher on the active drug than on the placebo.
exp(0.57) = 1.67 => the population-average odds of a normal response are 1.67 times higher on the active drug than on the placebo.

Example: 2x2 crossover trial - Summary
Marginal model vs. random effects model (maximum likelihood): the smaller value from the marginal analysis is consistent with the theoretical inequality |β| ≤ |β*|.

Example: Indonesian study - Logistic regression with random intercept
The relative odds of infection associated with xerophthalmia are exp(0.57) = 1.7, but this is not statistically significantly different from 1.
β* is more similar to β here because G is smaller.

Indonesian Study: Maximum Likelihood Approach
With a random effects model, we can address the question of how an individual child's risk of respiratory infection would change if his/her vitamin A status were to change. We assume that each child has a distinct intercept which represents his/her propensity to infection. We account for correlation by including random intercepts, Ui ~ N(0, G).

Indonesian Study: Maximum Likelihood Approach (cont'd)
There is considerable heterogeneity among children. Among children with linear predictor equal to the intercept (-2.2), i.e. children of average age and height who are female and vitamin A sufficient, about 95% would have a probability of infection between 0.025 and 0.31 (see the sketch below). The relative odds of infection associated with vitamin A deficiency are exp(0.54) = 1.7.
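A sketch of where that 95% range comes from (assuming the fitted intercept is -2.2 and the random intercepts are Ui ~ N(0, G), so about 95% of children have Ui within ±2√G):

  \[
    \Pr(\text{infection} \mid U_i) = \operatorname{expit}\bigl(-2.2 + U_i\bigr),
    \qquad
    \operatorname{expit}\bigl(-2.2 \pm 2\sqrt{G}\bigr) \approx (0.025,\ 0.31).
  \]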

Indonesian Study: Maximum Likelihood Approach (cont'd)
The longitudinal age effect on the risk of respiratory infection in Model 2 can be explained by the seasonal trend in Model 3.
Because of the small heterogeneity (summarized by G), the estimates of the coefficients obtained under a random effects model are similar to those obtained under a marginal model; the ratio of the random effects coefficients to the marginal coefficients is close to 1.

Poisson model with random intercept: Epileptic seizure example

Poisson model with random intercept: Epileptic seizure example
Design: seizure counts were recorded over an 8-week baseline period; patients were then randomly assigned to either placebo or progabide and followed over subsequent periods of 2 weeks each.
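One standard way to write the random-intercept Poisson model for these counts (a sketch; the covariate coding is an assumption, with t_ij the length of observation period j, x_post = 1 after randomization and x_trt = 1 for progabide):

  \[
    \log \mathrm{E}[Y_{ij} \mid U_{i}]
      = \log t_{ij} + \beta_0^{*} + \beta_1^{*} x_{\text{post},ij} + \beta_2^{*} x_{\text{trt},i}
        + \beta_3^{*} x_{\text{post},ij}\, x_{\text{trt},i} + U_{i},
    \qquad U_{i} \sim N(0, G).
  \]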

Example: Seizure

Example: Seizure (cont’d)

Example: Seizure (cont’d)

Example: Seizure (cont’d)

Example: Seizure (cont'd)
The model does not fit well => extend the model by including a random slope, Ui2.

Model 1: random intercept only (similar result to conditional likelihood)
Model 2: random intercept + random slope

Poisson-Gaussian Random Effects Models: Epileptic Seizure
Here we assume there might be heterogeneity among subjects in the ratio of the expected seizure counts before and after randomization. This degree of heterogeneity can be measured by G22 (see the sketch below).
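A sketch of Model 2 in symbols (extending the earlier Poisson sketch; the covariate coding is still an assumption, and G22 denotes the variance of the random slope Ui2):

  \[
    \log \mathrm{E}[Y_{ij} \mid U_{i1}, U_{i2}]
      = \log t_{ij} + \beta_0^{*} + \beta_1^{*} x_{\text{post},ij} + \beta_2^{*} x_{\text{trt},i}
        + \beta_3^{*} x_{\text{post},ij}\, x_{\text{trt},i}
        + U_{i1} + U_{i2}\, x_{\text{post},ij},
  \]

with (U_{i1}, U_{i2}) bivariate normal with variances G11 and G22.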

Poisson-Gaussian Random Effects Models: Epileptic Seizure
We use maximum likelihood estimation.

Poisson-Gaussian Random Effects Models: Epileptic Seizure
The post-to-pre ratio of expected seizure counts in the placebo group is subject specific, and so is the post-to-pre ratio in the progabide group (see the sketch below).
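Under the Model 2 sketch above (and with that assumed covariate coding), the subject-specific post-to-pre ratios of expected seizure rates would be:

  \[
    \text{placebo: } \exp\bigl(\beta_1^{*} + U_{i2}\bigr),
    \qquad
    \text{progabide: } \exp\bigl(\beta_1^{*} + \beta_3^{*} + U_{i2}\bigr),
  \]

which vary across subjects through Ui2, with variability governed by G22.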

Poisson-Gaussian Random Effects Models: Epileptic Seizure - Results
The estimate of G22 is statistically significant; therefore the data support between-subject variability in the ratio of the expected seizure counts before and after randomization.
Subjects in the placebo group with Ui2 = 0 have expected seizure rates after treatment that are estimated to be roughly similar to those before treatment.

Poisson-Gaussian Random Effects Models: Epileptic Seizure - Results (cont'd)
Among subjects in the progabide group with Ui2 = 0, the seizure rates are reduced after treatment by about 27%. The treatment seems to have a modest effect. Without patient 207, the evidence for progabide is stronger.
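For reference, in terms of the Model 2 sketch above a 27% reduction corresponds to a post-to-pre rate ratio in the progabide group (at Ui2 = 0) of roughly

  \[
    \exp\bigl(\hat\beta_1^{*} + \hat\beta_3^{*}\bigr) \approx 1 - 0.27 = 0.73,
    \qquad
    \hat\beta_1^{*} + \hat\beta_3^{*} \approx \log 0.73 \approx -0.31 .
  \]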