Linear and generalized linear mixed effects models


Presenter: 康佩

Recall the general hierarchical setup: a between-group sampling model describes how parameters vary across groups, and a within-group sampling model describes the data within each group. Here the within-group model is a regression model (a linear or generalized linear regression model).

Outline

1. Linear mixed effects models
   1.1 A Gibbs algorithm for posterior approximation
   1.2 Analysis of the math score data
2. Generalized linear mixed effects models
   2.1 A Metropolis-Gibbs algorithm for posterior approximation
   2.2 Analysis of tumor location data

1. Linear mixed effects models

Example: the math score data described in Section 8.4, consisting of math scores of 10th-grade children from 100 different large urban public high schools.

Objective: to examine the relationship between math score and another variable, socioeconomic status (SES).

Model fitting: reviewing the example in Chapter 8, it seems plausible that the relationship between math score and SES varies from school to school as well. Thus, a separate linear regression model is fitted for each of the 100 schools.

Results (OLS)

The first panel shows that for a large majority of schools the expected math score increases with SES, although a few show a negative relationship. The second and third panels reveal that extreme least squares estimates tend to be produced when the sample size is small.

Remedy: use a hierarchical model to stabilize the estimates for small-sample-size schools by sharing information across groups.

Expressed symbolically, our within-group sampling model is

  $Y_{i,j} = \beta_j^T x_{i,j} + \varepsilon_{i,j}$, with $\{\varepsilon_{i,j}\} \sim$ i.i.d. normal$(0, \sigma^2)$,   (11.1)

where $x_{i,j}$ is a $p \times 1$ vector of regressors for observation $i$ in group $j$. Collecting $Y_{1,j},\ldots,Y_{n_j,j}$ into a vector $Y_j$ and $x_{1,j},\ldots,x_{n_j,j}$ into an $n_j \times p$ matrix $X_j$, the model is equal to

  $Y_j \sim$ multivariate normal$(X_j \beta_j, \sigma^2 I)$.

Our between-group sampling model is

  $\beta_1,\ldots,\beta_m \sim$ i.i.d. multivariate normal$(\theta, \Sigma)$.

This hierarchical regression model is called a linear mixed effects model. Rewriting the between-group sampling model as

  $\beta_j = \theta + \gamma_j$, with $\gamma_1,\ldots,\gamma_m \sim$ i.i.d. multivariate normal$(0, \Sigma)$,

and plugging this into our within-group regression model gives

  $Y_{i,j} = \theta^T x_{i,j} + \gamma_j^T x_{i,j} + \varepsilon_{i,j}$.

In this parameterization, $\theta$ is a fixed effect and $\gamma_1,\ldots,\gamma_m$ are random effects. (The name "mixed effects model" comes from the fact that the regression model contains both fixed and random effects.)
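To make the fixed/random effects parameterization concrete, here is a minimal Python sketch that simulates data from this model; all sizes and parameter values (m, n_j, theta, Sigma, sigma2) are illustrative assumptions, not quantities from the math score data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and parameters (assumptions, not the real data)
m, n_j, p = 100, 25, 2                 # groups, observations per group, regressors
theta = np.array([48.0, 2.4])          # fixed effect: intercept and slope
Sigma = np.array([[4.0, -0.5],
                  [-0.5, 1.0]])        # between-group covariance of the beta_j
sigma2 = 30.0                          # within-group error variance

# beta_j = theta + gamma_j with gamma_j ~ i.i.d. MVN(0, Sigma)
beta = rng.multivariate_normal(theta, Sigma, size=m)

# Within-group model: Y_j ~ MVN(X_j beta_j, sigma2 * I)
X = [np.column_stack([np.ones(n_j), rng.normal(size=n_j)]) for _ in range(m)]
Y = [X[j] @ beta[j] + rng.normal(scale=np.sqrt(sigma2), size=n_j)
     for j in range(m)]
```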

1.1 A Gibbs algorithm for posterior approximation

Given a prior distribution for $(\theta, \Sigma, \sigma^2)$ and having observed $Y_1 = y_1,\ldots, Y_m = y_m$, a Bayesian analysis proceeds by computing the posterior distribution $p(\beta_1,\ldots,\beta_m,\theta,\Sigma,\sigma^2 \mid X_1,\ldots,X_m,y_1,\ldots,y_m)$.

Prior distributions:

  $\theta \sim$ multivariate normal$(\mu_0, \Lambda_0)$
  $\Sigma \sim$ inverse-Wishart$(\eta_0, S_0^{-1})$
  $\sigma^2 \sim$ inverse-gamma$(\nu_0/2,\ \nu_0\sigma_0^2/2)$

Full conditional distributions

$\{\beta_j \mid y_j, X_j, \theta, \Sigma, \sigma^2\} \sim$ multivariate normal$(\theta_m, \Sigma_m)$, where $\Sigma_m = (\Sigma^{-1} + X_j^T X_j/\sigma^2)^{-1}$ and $\theta_m = \Sigma_m(\Sigma^{-1}\theta + X_j^T y_j/\sigma^2)$  [Chapter 9, p. 155]

$\{\theta \mid \beta_1,\ldots,\beta_m, \Sigma\} \sim$ multivariate normal$(\mu_m, \Lambda_m)$, where $\Lambda_m = (\Lambda_0^{-1} + m\Sigma^{-1})^{-1}$ and $\mu_m = \Lambda_m(\Lambda_0^{-1}\mu_0 + m\Sigma^{-1}\bar\beta)$, with $\bar\beta$ the average of the $\beta_j$  [Chapter 7, p. 108]

$\{\Sigma \mid \theta, \beta_1,\ldots,\beta_m\} \sim$ inverse-Wishart$(\eta_0 + m, [S_0 + S_\theta]^{-1})$, where $S_\theta = \sum_{j=1}^m (\beta_j - \theta)(\beta_j - \theta)^T$  [Chapter 7, p. 111]

$\sigma^2 \sim$ inverse-gamma$([\nu_0 + \sum_j n_j]/2,\ [\nu_0\sigma_0^2 + \mathrm{SSR}]/2)$, where $\mathrm{SSR} = \sum_{j=1}^m \sum_{i=1}^{n_j} (y_{i,j} - \beta_j^T x_{i,j})^2$  [Chapter 8, p. 135]
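As a concrete sketch, one full scan of a Gibbs sampler built from these four full conditionals might look as follows in Python. The function name and data layout (Y and X as lists of per-group arrays, beta as an m×p array) are my own conventions, not from the slides.

```python
import numpy as np
from scipy.stats import wishart

def gibbs_scan(Y, X, beta, theta, Sigma, sigma2,
               mu0, L0, eta0, S0, nu0, s20, rng):
    m = len(Y)
    iL0 = np.linalg.inv(L0)
    iSigma = np.linalg.inv(Sigma)

    # 1. Update each beta_j from its multivariate normal full conditional
    for j in range(m):
        V = np.linalg.inv(iSigma + X[j].T @ X[j] / sigma2)
        mean = V @ (iSigma @ theta + X[j].T @ Y[j] / sigma2)
        beta[j] = rng.multivariate_normal(mean, V)

    # 2. Update theta ~ MVN(mu_m, Lambda_m)
    Lm = np.linalg.inv(iL0 + m * iSigma)
    mum = Lm @ (iL0 @ mu0 + m * iSigma @ beta.mean(axis=0))
    theta = rng.multivariate_normal(mum, Lm)

    # 3. Update Sigma ~ inverse-Wishart(eta0 + m, [S0 + S_theta]^{-1}):
    #    draw Sigma^{-1} from a Wishart, then invert
    resid = beta - theta
    Sigma = np.linalg.inv(
        wishart.rvs(df=eta0 + m,
                    scale=np.linalg.inv(S0 + resid.T @ resid),
                    random_state=rng))

    # 4. Update sigma2 ~ inverse-gamma((nu0 + N)/2, (nu0*s20 + SSR)/2)
    N = sum(len(y) for y in Y)
    SSR = sum(np.sum((Y[j] - X[j] @ beta[j]) ** 2) for j in range(m))
    sigma2 = 1.0 / rng.gamma((nu0 + N) / 2.0, 2.0 / (nu0 * s20 + SSR))

    return beta, theta, Sigma, sigma2
```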

1.2 Analysis of the math score data

Model:
  $Y_j \sim$ multivariate normal$(X_j\beta_j, \sigma^2 I)$, with $\beta_j \sim$ multivariate normal$(\theta, \Sigma)$

Prior distributions:
  $\theta \sim$ multivariate normal$(\mu_0, \Lambda_0)$
  $\Sigma \sim$ inverse-Wishart$(\eta_0, S_0^{-1})$
  $\sigma^2 \sim$ inverse-gamma$(\nu_0/2,\ \nu_0\sigma_0^2/2)$

Hyperparameter choices, based on the OLS fits:
  $\mu_0 = E(\hat\beta_{\mathrm{ols}})$, $\Lambda_0 = \mathrm{Var}(\hat\beta_{\mathrm{ols}}) = (X^T X)^{-1} s^2$
  $\eta_0 = p + 2 = 4$, $S_0 = \Lambda_0$; since $E(\Sigma) = (\eta_0 - p - 1)^{-1} S_0$ under the inverse-Wishart$(\eta_0, S_0^{-1})$ prior, this makes $E(\Sigma) = \Lambda_0$
  $\nu_0 = 1$, $\sigma_0^2$ = the average of the within-school sample variances $(\mathrm{var}(Y_1),\ldots,\mathrm{var}(Y_{100}))$
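Continuing the simulation sketch above (with X and Y as defined there), these hyperparameter choices can be computed from the per-school OLS fits. Treating $\mu_0$ and $\Lambda_0$ as the sample mean and covariance of the 100 OLS estimates is the reading assumed here.

```python
import numpy as np

# Per-school OLS estimates (Y, X as in the earlier simulation sketch)
beta_ols = np.array([np.linalg.lstsq(X[j], Y[j], rcond=None)[0]
                     for j in range(len(Y))])

mu0 = beta_ols.mean(axis=0)           # prior mean of theta
L0 = np.cov(beta_ols.T)               # prior covariance of theta
p = beta_ols.shape[1]
eta0 = p + 2                          # with S0 = L0, E(Sigma) = S0/(eta0-p-1) = L0
S0 = L0
nu0 = 1.0
s20 = np.mean([np.var(y, ddof=1) for y in Y])   # avg within-school variance
```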

Results (Running a Gibbs sampler for 10,000 scans and saving every 10th scan produces a sequence of 1,000 values for each parameter.)

(1) Each sequence has fairly low autocorrelation; for example, the lag-10 autocorrelations of $\theta_1$ and $\theta_2$ are -0.024 and 0.038.
(2) These simulated values can therefore be used to make Monte Carlo approximations to various posterior quantities of interest.
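The autocorrelation check can be done directly on the saved sequences. A small helper sketch; `theta_samples`, a hypothetical (1000, 2) array of the saved values of $\theta$, is assumed.

```python
import numpy as np

def lag_autocorr(x, lag):
    """Sample autocorrelation of a scalar MCMC sequence at the given lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(x[:-lag] @ x[lag:] / (x @ x))

# e.g. lag_autocorr(theta_samples[:, 0], 10), lag_autocorr(theta_samples[:, 1], 10)
```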

A 95% quantile-based posterior confidence interval for $\theta_2$ is (1.83, 2.96). The fact that $\theta_2$ is extremely unlikely to be negative only indicates that the population average of school-level slopes is positive; it does not indicate that any given within-school slope cannot be negative. Notice that the posterior predictive distribution of $\tilde\beta_2$ is much more spread out than the posterior distribution of $\theta_2$, reflecting the heterogeneity in slopes across schools.

Fig. 11.3: The panel plots the posterior density of the expected slope $\theta_2$ of a randomly sampled school, as well as the posterior predictive distribution of a randomly sampled slope.
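The posterior predictive distribution of a new school's slope can be approximated by drawing one $\tilde\beta \sim$ multivariate normal$(\theta^{(s)}, \Sigma^{(s)})$ for each saved scan. A minimal sketch, with stand-in posterior samples as placeholders so it runs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the saved Gibbs output: theta_samples (S, p), Sigma_samples (S, p, p)
S_, p = 1000, 2
theta_samples = rng.normal([48.0, 2.4], 0.3, size=(S_, p))
Sigma_samples = np.broadcast_to(np.eye(p), (S_, p, p))

# One predictive draw of a new school's coefficients per saved scan
beta_new = np.array([rng.multivariate_normal(t, V)
                     for t, V in zip(theta_samples, Sigma_samples)])
slope_pred = beta_new[:, 1]                     # draws of a new school's slope
print(np.quantile(slope_pred, [0.025, 0.975]))  # predictive interval
print(np.mean(slope_pred < 0))                  # P(a new school's slope < 0)
```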

Results

(3) The hierarchical model is able to share information across groups, shrinking extreme regression lines towards the across-group average.

Fig. 11.3: Relationship between SES and math score. The panel gives posterior expectations of the 100 school-specific regression lines, with the average line given in black.

2. Generalized linear mixed effects models

A generalized linear mixed effects model combines linear mixed effects models with generalized linear models. It applies when the data have a hierarchical structure but the normal model for the within-group variation is not appropriate. (For example, if the variable Y is binary or a count, it is more appropriate to use logistic or Poisson regression models, respectively.)

A basic generalized linear mixed model is as follows:

  $\beta_1,\ldots,\beta_m \sim$ i.i.d. multivariate normal$(\theta, \Sigma)$ (between-group model)
  $Y_{i,j} \sim p(y \mid \beta_j^T x_{i,j}, \gamma)$, independently (within-group model)

Here $p(y \mid \beta^T x, \gamma)$ is a density whose mean depends on $\beta^T x$, and $\gamma$ is an additional parameter often representing a variance or scale.

Examples:
1. In the normal model, $p(y \mid \beta^T x, \gamma) = \mathrm{dnorm}(y, \beta^T x, \gamma^{1/2})$, so $\gamma$ represents the variance.
2. In the Poisson model, $p(y \mid \beta^T x) = \mathrm{dpois}(y, \exp\{\beta^T x\})$, and there is no $\gamma$ parameter.
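In scipy terms (R's dnorm and dpois correspond to scipy.stats norm.pdf and poisson.pmf), the two example densities can be evaluated as below; the numerical values are arbitrary placeholders.

```python
import numpy as np
from scipy.stats import norm, poisson

beta = np.array([0.5, -1.0])
x = np.array([1.0, 2.0])
gamma = 2.0
eta = beta @ x                                   # linear predictor beta^T x

# Normal model: gamma is the variance, so the scale is gamma^{1/2}
p_norm = norm.pdf(0.3, loc=eta, scale=np.sqrt(gamma))

# Poisson model: mean exp(beta^T x), no gamma parameter
p_pois = poisson.pmf(2, mu=np.exp(eta))
```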

2.1 A Metropolis-Gibbs algorithm for posterior approximation

For nonnormal generalized linear mixed models, it is suggested to use a Metropolis-Hastings algorithm that combines Gibbs steps for updating $(\theta, \Sigma)$ with a Metropolis step for updating each $\beta_j$. In what follows we assume there is no $\gamma$ parameter.

Gibbs steps for $(\theta, \Sigma)$, with the same full conditionals as in the linear case:

  $\{\theta \mid \beta_1,\ldots,\beta_m, \Sigma\} \sim$ multivariate normal$(\mu_m, \Lambda_m)$, where $\Lambda_m = (\Lambda_0^{-1} + m\Sigma^{-1})^{-1}$ and $\mu_m = \Lambda_m(\Lambda_0^{-1}\mu_0 + m\Sigma^{-1}\bar\beta)$
  $\{\Sigma \mid \theta, \beta_1,\ldots,\beta_m\} \sim$ inverse-Wishart$(\eta_0 + m, [S_0 + S_\theta]^{-1})$, where $S_\theta = \sum_{j=1}^m (\beta_j - \theta)(\beta_j - \theta)^T$

Metropolis step for $\beta_j$: propose $\beta_j^* \sim$ multivariate normal$(\beta_j^{(s)}, V_j^{(s)})$ from the proposal distribution, and accept the proposal with probability $\min(1, r)$, where

  $r = \dfrac{p(y_j \mid X_j, \beta_j^*)\, p(\beta_j^* \mid \theta, \Sigma)}{p(y_j \mid X_j, \beta_j^{(s)})\, p(\beta_j^{(s)} \mid \theta, \Sigma)}$.

(In many cases, setting $V_j^{(s)}$ equal to a scaled version of $\Sigma^{(s)}$ produces a well-mixing Markov chain.)

A Metropolis-Hastings approximation algorithm: putting the steps described above together, each scan of the Markov chain performs the Gibbs updates of $(\theta, \Sigma)$ followed by a Metropolis update of each $\beta_j$.
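A minimal Python sketch of one scan of this Metropolis-Gibbs algorithm, written for the Poisson case with a log link (the no-$\gamma$ setting assumed above). The function name and data layout are my own conventions, and the $\Sigma/2$ proposal scaling anticipates the tuning reported for the tumor data below.

```python
import numpy as np
from scipy.stats import multivariate_normal, poisson, wishart

def metropolis_gibbs_scan(Y, X, beta, theta, Sigma, mu0, L0, eta0, S0, rng):
    m, p = beta.shape

    # Metropolis step for each beta_j: propose from MVN(beta_j, Sigma/2)
    for j in range(m):
        prop = rng.multivariate_normal(beta[j], Sigma / 2.0)

        def log_post(b):
            # log p(y_j | b) + log p(b | theta, Sigma), log-link Poisson likelihood
            return (poisson.logpmf(Y[j], np.exp(X[j] @ b)).sum()
                    + multivariate_normal.logpdf(b, mean=theta, cov=Sigma))

        if np.log(rng.uniform()) < log_post(prop) - log_post(beta[j]):
            beta[j] = prop                      # accept; otherwise keep beta_j

    # Gibbs step for theta ~ MVN(mu_m, Lambda_m)
    iL0, iSigma = np.linalg.inv(L0), np.linalg.inv(Sigma)
    Lm = np.linalg.inv(iL0 + m * iSigma)
    mum = Lm @ (iL0 @ mu0 + m * iSigma @ beta.mean(axis=0))
    theta = rng.multivariate_normal(mum, Lm)

    # Gibbs step for Sigma ~ inverse-Wishart(eta0 + m, [S0 + S_theta]^{-1})
    resid = beta - theta
    Sigma = np.linalg.inv(
        wishart.rvs(df=eta0 + m,
                    scale=np.linalg.inv(S0 + resid.T @ resid),
                    random_state=rng))

    return beta, theta, Sigma
```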

2.2 Analysis of tumor location data

Example: the intestine of each of 21 sample mice was divided into 20 sections, and the number of tumors occurring in each section was recorded.

Objective: to examine the relationship between the number of intestinal tumors and their location along the intestine. A hierarchical model with mouse-specific effects may be appropriate.

Fig. 11.4: The panel gives mouse-specific tumor counts as a function of location in gray, with a population average in black.

Model specification

Within-group model: a Poisson distribution with a log link,
  $Y_{x,j} \sim$ Poisson$(\exp[f_j(x)])$,
where $Y_{x,j}$ is mouse $j$'s tumor count at location $x$ of its intestine.

Between-group model: $\beta_1,\ldots,\beta_m \sim$ i.i.d. multivariate normal$(\theta, \Sigma)$.

Model fitting: take $f_j$ to be a polynomial, $f_j(x) = \beta_{1,j} + \beta_{2,j}x + \beta_{3,j}x^2 + \cdots + \beta_{p,j}x^{p-1}$, so that $f_j(x) = \beta_j^T x$ with $x = (1, x, x^2, \ldots, x^{p-1})^T$.

Model fitting: the fourth-degree polynomial fits the log average tumor count function rather well, so we take $x_i = (1, x_i, x_i^2, x_i^3, x_i^4)^T$, i.e. $p = 5$.

Fig. 11.4: The panel gives quadratic, cubic and quartic polynomial fits to the log sample average tumor count.
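The corresponding design matrix can be built as a Vandermonde matrix; scaling the 20 section indices to (0, 1] is an assumption about the parameterization.

```python
import numpy as np

x_loc = np.arange(1, 21) / 20.0             # 20 sections, scaled locations
X = np.vander(x_loc, N=5, increasing=True)  # rows (1, x, x^2, x^3, x^4), shape (20, 5)
```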

Specifying the prior distributions

A Metropolis-Gibbs sampler: after a bit of trial and error, it turns out that a multivariate normal$(\beta_j^{(s)}, \Sigma^{(s)}/2)$ proposal distribution yields an acceptance rate of about 31% and a reasonably well-mixing Markov chain.

Results (Running the Markov chain for 50,000 scans and saving the values every 10th scan gives 5,000 approximate posterior samples for each parameter.)

(1) The difference in the width of the confidence bands in the second panel reflects the estimated across-mouse heterogeneity.
(2) The difference between the third panel and the second panel reflects the variability of a Poisson random variable $Y$ around its expected value $\exp(\beta^T x)$.
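A sketch of how the bands behind observations (1) and (2) could be computed from the saved samples; the stand-in posterior draws are placeholders so the sketch runs, not output from the tumor data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in posterior samples (in practice, the Metropolis-Gibbs output)
S, p = 5000, 5
theta_samples = 0.1 * rng.normal(size=(S, p))
Sigma_samples = np.broadcast_to(0.01 * np.eye(p), (S, p, p))

x_loc = np.arange(1, 21) / 20.0
X = np.vander(x_loc, N=p, increasing=True)

# Band for a new mouse's rate exp(beta_new^T x): across-mouse heterogeneity
beta_new = np.array([rng.multivariate_normal(t, V)
                     for t, V in zip(theta_samples, Sigma_samples)])
rate_new = np.exp(beta_new @ X.T)              # shape (S, 20)

# Band for predicted counts: adds Poisson variability around exp(beta^T x)
y_new = rng.poisson(rate_new)

band_rate = np.quantile(rate_new, [0.025, 0.975], axis=0)
band_counts = np.quantile(y_new, [0.025, 0.975], axis=0)
```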

Thank you!