Lecture 6 Generalized Linear Models Olivier MISSA, Advanced Research Skills.

2 Outline. We continue exploring the options available when the assumptions of classical linear models are untenable. In this lecture: what can we do when the observations are not continuous and the residuals are neither normally distributed nor identically distributed?

3 Classical Linear Models. Defined by three assumptions: (1) the response variable is continuous; (2) the residuals (ε) are normally distributed; and (3) independently (3a) and identically (3b) distributed. Today, we will consider a range of options available when assumptions (1), (2) and/or (3b) are not satisfied.

4 Non-continuous response variables. Many situations exist. The response variable could be: (1) a count (number of individuals in a population, number of species in a community); (2) a proportion (proportion "cured" after treatment, proportion of threatened species); (3) a categorical variable (breeding/non-breeding, different phenotypes); (4) a strictly positive value (especially time to success, or time to failure); and so forth.

5 Added difficulties. These types of non-continuous variables also tend to deviate from the assumptions of normality (assumption #2) and homoscedasticity (assumption #3b): (1) a count variable often follows a Poisson distribution (where the variance increases linearly with the mean); (2) a proportion often follows a binomial distribution (where the variance reaches a maximum at intermediate values and a minimum at either end: 0% or 100%).

6 Added difficulties (continued). (3) A categorical variable tends to follow a binomial distribution (when the variable has only two levels) or a multinomial distribution (when it has more than two levels). (4) Time to success/failure can follow an exponential distribution or an inverse Gaussian distribution (the latter having a variance increasing more quickly than the mean).

7 Fortunately, many of these situations can be unified under a central framework, since all these distributions (and a few more) belong to the exponential family of distributions. In canonical form, the probability density function (if y is continuous) or probability mass function (if y is discrete) can be written

f(y; θ, φ) = exp{ [yθ − b(θ)]/a(φ) + c(y, φ) }

where θ is the canonical (location) parameter and φ is the dispersion parameter. The mean is then b′(θ) and the variance is b″(θ)·a(φ).

8 The Normal distribution. Its probability density function can be written in canonical form as

f(y; μ, σ²) = exp{ (yμ − μ²/2)/σ² − y²/(2σ²) − log √(2πσ²) }

with canonical (location) parameter θ = μ and dispersion parameter φ = σ², so that b(θ) = θ²/2 and a(φ) = φ, giving mean μ and variance σ².

9 The Poisson distribution. Its probability mass function, P(y) = e^(−λ) λ^y / y!, can be written in canonical form as

P(y) = exp{ y·log λ − λ − log(y!) }

with canonical (location) parameter θ = log λ and dispersion parameter φ = 1, so that b(θ) = e^θ, giving mean and variance both equal to λ.

10 The Binomial distribution. For y successes out of n trials, its probability mass function P(y) = C(n, y) p^y (1 − p)^(n−y) can be written in canonical form as

P(y) = exp{ y·log[p/(1 − p)] + n·log(1 − p) + log C(n, y) }

with canonical (location) parameter θ = logit(p) = log[p/(1 − p)] and dispersion parameter φ = 1.

11 Why is that remotely useful? (1) A single algorithm (maximum likelihood) will cope with all these situations. (2) Different types of variance can be accommodated: when Var is constant -> Normal (Gaussian); when Var increases linearly with the mean -> Poisson; when Var has a hump-backed shape -> Binomial; when Var increases as the square of the mean -> Gamma (meaning the coefficient of variation remains constant); when Var increases as the cube of the mean -> inverse Gaussian. (3) Most types of data are thus effectively covered.
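These mean-to-variance relationships are built into R's family objects, each of which carries its own variance function. A minimal base-R sketch (the mean value 2 is arbitrary, chosen only for illustration):

```r
# Each family object in base R exposes its variance function,
# mapping the mean mu to Var(y) (up to the dispersion parameter).
mu <- 2

gaussian()$variance(mu)          # constant: 1
poisson()$variance(mu)           # mu: 2
binomial()$variance(0.25)        # mu * (1 - mu): 0.1875
Gamma()$variance(mu)             # mu^2: 4
inverse.gaussian()$variance(mu)  # mu^3: 8
```

The same objects also expose the canonical link (e.g. poisson()$link is "log"), which is how glm() knows the defaults discussed on the next slides.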

12 Non-independent observations. Two ways to cope with non-independent observations. When the design is balanced ("equal sample sizes"), we can use factors to partition our observations into different "groups" and analyse them as an ANOVA or ANCOVA. We already know how to do that (when factors are "crossed"); we just need to figure out how to cope with nested factors. When the design is unbalanced ("uneven sample sizes"), mixed-effects models are called for.

13 How does it work? (1) You need to specify the family of distributions to use. (2) You need to specify the link function, which connects the linear predictor to the mean of the response. For each type of variable, the "natural" link function to use is indicated by the canonical parameter:

Family            Canonical link
Normal            identity
Poisson           log
Binomial          logit
Gamma             inverse
Inverse Gaussian  inverse square
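In R, both choices are made through the family argument of glm(); the canonical link is the default for each family. A minimal sketch on simulated data (the variable names are illustrative, not from the slides):

```r
set.seed(42)
x <- runif(100)
# Poisson counts with a log-linear mean: log(lambda) = 0.5 + 1.5*x
y <- rpois(100, lambda = exp(0.5 + 1.5 * x))

# The family argument chooses the distribution; link chooses the link
# function. poisson() defaults to the canonical log link, so these
# two calls fit the same model:
m1 <- glm(y ~ x, family = poisson)
m2 <- glm(y ~ x, family = poisson(link = "log"))

family(m1)$link  # "log"
coef(m1)         # estimates should be close to the true 0.5 and 1.5
```

Non-canonical links (e.g. poisson(link = "sqrt")) are also accepted, but the canonical choices in the table above are almost always the sensible default.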

14 Count variables. This type of response variable often follows a Poisson distribution, with the variance increasing in direct relation with the mean. The family to use is poisson and the canonical link is log. Example: which environmental variables are associated with plant diversity on the Galápagos?

> library(faraway)
> data(gala)
> names(gala)
[1] "Species"   "Endemics"  "Area"      "Elevation" "Nearest"
[6] "Scruz"     "Adjacent"
> attach(gala)

Beware: some missing data in the original dataset have been filled in for convenience. Johnson, M.P. & Raven, P.H. (1973) Science 179(4076):

15 Count variables.

> summary(gala)
[summary statistics (Min., 1st Qu., Median, Mean, 3rd Qu., Max.) for Species, Endemics, Area, Elevation, Nearest, Scruz and Adjacent; numeric output omitted]

> gala <- gala[,-2]  ## removing variable "Endemics"
> modp <- glm(Species ~ ., family=poisson, data=gala)

By default, the link for a Poisson model is log.

16 Count variables.

> summary(modp)
Coefficients: [estimates, standard errors and z-tests for (Intercept), Area, Elevation, Nearest, Scruz and Adjacent, all highly significant; numeric output omitted]
(Dispersion parameter for poisson family taken to be 1)
Null deviance: on 29 degrees of freedom
Residual deviance: on 24 degrees of freedom
AIC:
Number of Fisher Scoring iterations: 5

Notes: these p-values are only valid if the response variable is indeed following a Poisson. The residual deviance (also called the G-statistic) and its degrees of freedom need to be broadly similar.

17 Count variables. The dispersion parameter (φ) must be calculated from the Pearson residuals and the residual degrees of freedom:

> (dp <- sum(residuals(modp, type="pearson")^2)/modp$df.res)
[1]

This suggests that the variance is 31.8 times the mean. In statistical terms this is called overdispersion. In biological terms, it suggests that the counts are not independent of each other but instead are aggregated (i.e. clumped). Typically, overdispersed count data follow a negative binomial distribution, which is not part of the exponential family of distributions. It won't be covered here, but it can be approximated as a quasi-Poisson (family="quasipoisson"). If you need it in your future work, you can also try glm.nb (in the MASS package).
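The quasi-Poisson workaround mentioned above can be sketched on simulated overdispersed counts (negative binomial data; the variable names are illustrative). The quasipoisson family estimates the dispersion parameter from the data instead of fixing it at 1, using exactly the Pearson-residual formula shown above:

```r
set.seed(1)
x <- runif(200)
# Negative binomial counts: the variance exceeds the mean (overdispersion)
y <- rnbinom(200, mu = exp(1 + 2 * x), size = 1)

mq <- glm(y ~ x, family = quasipoisson)

# quasipoisson estimates the dispersion from the Pearson residuals,
# matching the by-hand calculation on this slide:
dp <- sum(residuals(mq, type = "pearson")^2) / mq$df.residual
summary(mq)$dispersion  # same value; well above 1 for these data
```

With this family, the point estimates are identical to the Poisson fit, but the standard errors are inflated by sqrt(dp), which is what makes the subsequent tests honest.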

18 Count variables. The summary table can be adjusted with the dispersion parameter; the adjusted values can now be taken at face value:

> summary(modp, dispersion=dp)
Coefficients: [with the inflated standard errors, (Intercept), Area, Elevation and Adjacent remain significant, while Nearest and Scruz no longer are; numeric output omitted]
(Dispersion parameter for poisson family taken to be )
Null deviance: on 29 degrees of freedom
Residual deviance: on 24 degrees of freedom
AIC:

19 Count variables. When you have overdispersed data, it is safer to use the F-tests and their p-values; the AIC values are dodgy when quasipoisson is used. The drop1 function can be used to simplify the model:

> drop1(modp, test="F")
Model: Species ~ Area + Elevation + Nearest + Scruz + Adjacent
[deviance table: Area, Elevation and Adjacent significant (***), Nearest and Scruz not; numeric output omitted]
Warning message:
In drop1.glm(modp, test = "F") : F test assumes 'quasipoisson' family

"Nearest" should probably be removed from the model.

20 Count variables.

> modp2 <- update(modp, ~ . - Nearest)
> (dp2 <- sum(residuals(modp2, type="pearson")^2)/modp2$df.res)
[1]
> summary(modp2, dispersion=dp2)
Coefficients: [(Intercept), Area, Elevation and Adjacent significant, Scruz not; numeric output omitted]
(Dispersion parameter for poisson family taken to be )
Null deviance: on 29 degrees of freedom
Residual deviance: on 25 degrees of freedom
AIC:

21 Count variables.

> drop1(modp2, test="F")
Model: Species ~ Area + Elevation + Scruz + Adjacent
[deviance table: Area, Elevation and Adjacent significant (***), Scruz not; numeric output omitted]
Warning message:
In drop1.glm(modp2, test = "F") : F test assumes 'quasipoisson' family

> modp3 <- update(modp2, ~ . - Scruz)
> (dp3 <- sum(residuals(modp3, type="pearson")^2)/modp3$df.res)
[1]

22 Count variables.

> summary(modp3, dispersion=dp3)
Coefficients: [(Intercept), Area, Elevation and Adjacent all significant; numeric output omitted]
(Dispersion parameter for poisson family taken to be )
Null deviance: on 29 degrees of freedom
Residual deviance: on 26 degrees of freedom
AIC:

How good is the model? 1 - (Res. Dev. / Null Dev.) = %, i.e. the proportion of the null deviance explained by the model.
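The "deviance explained" figure above can be computed directly from any fitted glm object. A minimal sketch on simulated data (names illustrative):

```r
set.seed(7)
x <- runif(100)
y <- rpois(100, lambda = exp(1 + x))
m <- glm(y ~ x, family = poisson)

# Proportion of the null deviance accounted for by the model:
# a rough analogue of R-squared for GLMs, between 0 and 1.
dev_explained <- 1 - deviance(m) / m$null.deviance
dev_explained
```

As with R-squared, this measure always increases when predictors are added, so it should be read alongside the F-tests from drop1 rather than used on its own.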

23 Count variables. Checking the model: plotting residuals vs fitted values (several options). Here the residuals are the deviance version (the default) and the fitted values are in the original response scale:

> plot(residuals(modp3) ~ predict(modp3, type="response"),
       xlab=expression(hat(mu)), ylab="Deviance residuals")

24 Count variables. Checking the model: plotting residuals vs fitted values, this time with the fitted values on the link scale (log for Poisson). This is the clearest version to inspect; we are looking for a "good spread":

> plot(residuals(modp3) ~ predict(modp3, type="link"),
       xlab=expression(hat(eta)), ylab="Deviance residuals")

25 Count variables. Checking the model: plotting residuals vs fitted values, with both the response residuals and the fitted values in the original response scale. This version is harder to read:

> plot(residuals(modp3, type="response") ~ predict(modp3, type="response"),
       xlab=expression(hat(mu)), ylab="Response residuals")

26 Count variables. Checking the model: do the residuals have the right distribution? The deviance residuals should be roughly normal:

> shapiro.test(residuals(modp3, type="deviance"))

Shapiro-Wilk normality test
data: residuals(modp3, type = "deviance")
W = , p-value =