CENTERING IN HLM

WHY CENTERING? In OLS regression we usually focus on the slope rather than the intercept, so the raw data (the natural X metric) are perfectly fine for the purpose of the study. The slope indicates the expected increase in the DV for a one-unit increase in the IV; the intercept represents the expected value of the DV when all predictors equal 0.
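For concreteness, here is a minimal OLS sketch in Python using statsmodels; the IQ and math-achievement values are simulated, so the variable names and numbers are made up for illustration and are not from the slides.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: IQ (the IV) and math achievement (the DV)
rng = np.random.default_rng(0)
iq = rng.normal(100, 15, size=200)
mathach = 5 + 0.3 * iq + rng.normal(0, 5, size=200)

X = sm.add_constant(iq)        # adds the intercept column
fit = sm.OLS(mathach, X).fit()

# fit.params[0]: expected math achievement when IQ = 0 (the intercept)
# fit.params[1]: expected change in math achievement per one-point increase in IQ (the slope)
print(fit.params)
```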

WHY CENTERING? In HLM, however, we are interested not only in the slopes but also in the intercept. We use the level-1 coefficients (intercept and slopes) as outcome variables at level 2. Thus, we need to clearly understand what these outcome variables mean.

WHY CENTERING? In behavioral research the intercept is sometimes meaningless. E.g., Y = math achievement, X = IQ. Without centering, the intercept is the expected math achievement for a student in school j whose IQ is zero, which we know does not make sense. Centering is a method to change the meaning of the intercept so that it becomes interpretable.

FOUR POSSIBILITIES FOR LOCATION OF X
- Natural X metric
- Centering around the grand mean (grand-mean centering)
- Centering around the level-2 mean (group-mean centering)
- Other specialized choices of location for X
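A small illustration of the first three locations listed above, sketched in Python with pandas; the toy IQ values and school labels are made up.

```python
import pandas as pd

# Hypothetical data: IQ scores for students nested in two schools
df = pd.DataFrame({
    "school": ["A", "A", "B", "B", "B"],
    "iq":     [95, 105, 110, 120, 130],
})

df["iq_raw"]   = df["iq"]                                                 # natural X metric
df["iq_grand"] = df["iq"] - df["iq"].mean()                               # grand-mean centering
df["iq_group"] = df["iq"] - df.groupby("school")["iq"].transform("mean")  # group-mean centering

print(df)
```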

MEANINGS OF INTERCEPTS UNDER THE FIRST 3 LOCATIONS OF X (1) Example: Y = math achievement, X = IQ score. Natural X metric: the intercept is the expected math achievement for a student in school j whose IQ is zero. Caution: only use the natural metric if X = 0 is meaningful, which is not the case here. When X_ij = 0, μ_Y = E(Y_ij) = β_0j.

MEANINGS OF INTERCEPTS UNDER THE FIRST 3 LOCATIONS OF X (2) Example: Y = math achievement, X = IQ score. Grand-mean centering (X_ij - X̄_.., where X̄_.. is the grand mean of X): the intercept is the expected math achievement for a student in school j whose IQ equals the mean IQ of all students from all schools. The intercept is the adjusted mean for group j: β_0j = μ_Yj - β_1j(X̄_.j - X̄_..).

MEANINGS OF INTERCEPTS UNDER THE FIRST 3 LOCATIONS OF X (3) Example: Y = math achievement, X = IQ score. Group-mean centering (X_ij - X̄_.j, where X̄_.j is the mean of X in school j): the intercept is the expected math achievement for a student in school j whose IQ equals the mean IQ of school (group) j. The intercept is the unadjusted mean for group j: β_0j = μ_Yj.

CONSEQUENCES OF CENTERING In both cases the intercept is more interpretable than under the natural X metric. Grand-mean centering and the natural X metric produce equivalent models (estimates from one can be recalculated from the other), but grand-mean centering has computational advantages. In general, group-mean centering produces a model that is not equivalent to either the natural X metric or grand-mean centering.
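The equivalence of the natural-metric and grand-mean-centered specifications can be seen from a one-step reparameterization of the level-1 model (a sketch in the notation used above):

```latex
\[
\begin{aligned}
Y_{ij} &= \beta_{0j} + \beta_{1j} X_{ij} + r_{ij} \\
       &= \underbrace{\left(\beta_{0j} + \beta_{1j}\bar{X}_{..}\right)}_{\beta^{*}_{0j}}
          + \beta_{1j}\left(X_{ij} - \bar{X}_{..}\right) + r_{ij},
\end{aligned}
\]
```

so the centered intercept is β*_0j = β_0j + β_1j X̄_.. while the slope and residual are unchanged. No such single constant shift exists for group-mean centering, because the subtracted mean X̄_.j differs from group to group.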

CHOICE OF CENTERING There is "no statistically correct choice" among the three models. The choice between grand-mean centering (preferable to the natural X metric) and group-mean centering "must be determined by theory." Therefore, if the absolute values of the level-1 variable are important, use grand-mean centering; if the position of a person relative to the group's mean is important, use group-mean centering. Kreft, I. G. G., De Leeuw, J., & Aiken, L. S. (1995). The effect of different forms of centering in hierarchical linear models. Multivariate Behavioral Research, 30, 1-21.

EXAMPLE – WITHOUT CENTERING
Level-1 model: Mathach_ij = β_0j + β_1j(SES_ij) + r_ij
Level-2 model: β_0j = γ_00 + u_0j;  β_1j = γ_10
From Ihui's "Issues with centering"

EXAMPLE – GRAND MEAN CENTERING
Level-1 model: Mathach_ij = β_0j + β_1j(SES_ij - SES_..) + r_ij
Level-2 model: β_0j = γ_00 + u_0j;  β_1j = γ_10
From Ihui's "Issues with centering"

EXAMPLE – GROUP MEAN CENTERING
Level-1 model: Mathach_ij = β_0j + β_1j(SES_ij - SES_.j) + r_ij
Level-2 model: β_0j = γ_00 + u_0j;  β_1j = γ_10
From Ihui's "Issues with centering"
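As a sketch of how these three specifications might be fit with Python's statsmodels mixed-effects routines rather than the HLM program; the file name and the column names (mathach, ses, school) are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data file with one row per student and columns
# 'mathach' (DV), 'ses' (level-1 IV), 'school' (level-2 grouping id).
df = pd.read_csv("hsb.csv")

df["ses_grand"] = df["ses"] - df["ses"].mean()
df["ses_group"] = df["ses"] - df.groupby("school")["ses"].transform("mean")

# Random-intercept models corresponding to the three level-1 specifications
m_raw   = smf.mixedlm("mathach ~ ses",       df, groups=df["school"]).fit()
m_grand = smf.mixedlm("mathach ~ ses_grand", df, groups=df["school"]).fit()
m_group = smf.mixedlm("mathach ~ ses_group", df, groups=df["school"]).fit()

print(m_raw.summary())
print(m_grand.summary())
print(m_group.summary())
```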

OUTPUT
[Table comparing the three models (SES raw score, grand-mean centered, group-mean centered) on γ_00 (s.e.), γ_10 (s.e.), Var(r_ij), and Var(u_0j).]
From Ihui's "Issues with centering"

REMARKS Under grand-mean centering or no centering, the parameter estimates reflect a combination of person-level effects and compositional effects. When we use a group-mean-centered predictor, however, we estimate only the person-level effects. To avoid discarding the compositional effects under group-mean centering, level-2 variables should be created to represent the group mean of each group-mean-centered predictor.

EXAMPLE – GROUP MEAN CENTERING
Level-1 model: Mathach_ij = β_0j + β_1j(SES_ij - SES_.j) + r_ij
Level-2 model: β_0j = γ_00 + γ_01(MEANSES_j) + u_0j;  β_1j = γ_10
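Continuing the earlier statsmodels sketch, the compositional effect can be retained by pairing the group-mean-centered level-1 predictor with the school mean of SES at level 2 (again, the file and column names are hypothetical).

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hsb.csv")  # same hypothetical file as in the earlier sketch

df["meanses"]   = df.groupby("school")["ses"].transform("mean")  # level-2 group mean
df["ses_group"] = df["ses"] - df["meanses"]                      # group-mean-centered level-1 SES

# Group-mean-centered SES at level 1 plus the school mean of SES at level 2
m_contextual = smf.mixedlm("mathach ~ ses_group + meanses",
                           df, groups=df["school"]).fit()
print(m_contextual.summary())
```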

CENTERING FOR DUMMY VARIABLES (1)
Mathach_ij = β_0j + β_1j X_ij + r_ij, where the dummy variable X_ij = 1 for a female and X_ij = 0 for a male student i in school j. Without centering, the intercept is the expected math achievement for a male student in school j (i.e., the predicted value for a student with X_ij = 0).

CENTERING FOR DUMMY VARIABLES (2)
Grand-mean centering: if a student is female, the centered value X_ij - X̄_.. equals the proportion of male students in the sample; if a student is male, it equals minus the proportion of female students in the sample. For example, suppose there are n_1 male and n_2 female students, with total n = n_1 + n_2 (X_ij = 1 female, X_ij = 0 male). Then X̄_.. = n_2/n, so:
- For a female student, X_ij - X̄_.. = 1 - n_2/n = n_1/n (the proportion of males)
- For a male student, X_ij - X̄_.. = 0 - n_2/n = -n_2/n (minus the proportion of females)

CENTERING FOR DUMMY VARIABLES (3)
Group-mean centering: if a student is female, the centered value X_ij - X̄_.j equals the proportion of male students in school j; if a student is male, it equals minus the proportion of female students in school j. For example, suppose school j has n_1 male and n_2 female students, with n = n_1 + n_2, so the group mean is X̄_.j = n_2/n. Then:
- For a female student, X_ij - X̄_.j = 1 - n_2/n = n_1/n (the proportion of males in school j)
- For a male student, X_ij - X̄_.j = 0 - n_2/n = -n_2/n (minus the proportion of females in school j)
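A small numeric check of this arithmetic, with a made-up roster of students (all values hypothetical).

```python
import pandas as pd

# Hypothetical roster: female = 1, male = 0, students nested in two schools
df = pd.DataFrame({
    "school": ["A", "A", "A", "B", "B"],
    "female": [1, 0, 0, 1, 1],
})

grand_mean = df["female"].mean()              # overall proportion of female students
df["female_grand_c"] = df["female"] - grand_mean
df["female_group_c"] = df["female"] - df.groupby("school")["female"].transform("mean")

# Girls get 1 - p(female) = p(male); boys get -p(female),
# computed over the whole sample (grand) or within each school (group).
print(df)
```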

WHAT ABOUT THE INTERCEPTS AFTER CENTERING FOR DUMMY VARIABLES? Grand-mean centering: the intercept is now the expected math achievement adjusted for differences among the units in the percentage of female students. Group-mean centering: the intercept is still the average outcome for unit j, μ_Yj.