Interpretation of Regression Coefficients
ST3131, Lecture 7 (2/19/2019)
Outline: Review of Lecture 6; Interpretation of Regression Coefficients
Review of Lecture 6: Multiple Linear Regression Model

The multiple linear regression (MLR) model with p predictor variables and one response variable Y is

    Y = β0 + β1 X1 + β2 X2 + … + βp Xp + ε.

With design matrix Z = [1, X1, …, Xp] (one row per observation, a leading column of ones for the intercept), the least-squares (LS) estimator is

    β̂ = (Z'Z)⁻¹ Z'Y.
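The LS estimator above can be checked numerically. A minimal sketch in Python with simulated data (the sample size and the true coefficients 1.0, 2.0, −0.5 are illustrative assumptions, not values from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 2

# Simulated data under the model Y = b0 + b1*X1 + b2*X2 + noise
# (true coefficients 1.0, 2.0, -0.5 are illustrative, not from the lecture)
X = rng.normal(size=(n, p))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

# Design matrix with a leading column of ones for the intercept
Z = np.column_stack([np.ones(n), X])

# LS estimator: beta_hat = (Z'Z)^{-1} Z'y, computed via a linear solve
beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ y)
print(beta_hat)  # close to [1.0, 2.0, -0.5]
```

Solving the normal equations with `np.linalg.solve` is preferred over forming the explicit inverse, but both implement the same estimator.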
Fitted Values, Residuals, and the Squares Decomposition

Fitted values: ŷi = β̂0 + β̂1 xi1 + … + β̂p xip
Residuals: ei = yi − ŷi
Noise variance estimator: σ̂² = SSE / (n − p − 1)
Observation decomposition: yi = ŷi + ei
Squares decomposition: SST = SSR + SSE
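The decomposition SST = SSR + SSE can be verified directly. A sketch in Python with simulated data (all numbers below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
X = rng.normal(size=(n, 2))
y = 3.0 + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

# LS fit with an intercept
Z = np.column_stack([np.ones(n), X])
beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ y)

y_fit = Z @ beta_hat   # fitted values
resid = y - y_fit      # residuals

# Squares decomposition
SST = np.sum((y - y.mean()) ** 2)
SSR = np.sum((y_fit - y.mean()) ** 2)
SSE = np.sum(resid ** 2)
print(np.isclose(SST, SSR + SSE))  # True: SST = SSR + SSE

# Noise variance estimator with p = 2 predictors
sigma2_hat = SSE / (n - 2 - 1)
```

The identity holds exactly (up to floating-point error) whenever the model includes an intercept.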
Methods of Assessing Linearity / Quality of the Linear Fit

A. Coefficient of determination: R² = SSR/SST = 1 − SSE/SST, the proportion of total variability explained by the linear regression.

B. Correlation between the responses and the fitted values: R = Cor(Y, Ŷ). R is called the multiple correlation coefficient between Y and the predictor variables X1, X2, …, Xp; it measures the linear relationship between Y and X1, X2, …, Xp.
That is, R² = [Cor(Y, Ŷ)]². Remark: the two assessments in A and B are therefore equivalent.
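The relation R² = [Cor(Y, Ŷ)]² can be confirmed numerically. A sketch in Python with simulated data (the coefficients and sample size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
X = rng.normal(size=(n, 3))
y = 2.0 + X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n)

# LS fit with an intercept
Z = np.column_stack([np.ones(n), X])
y_fit = Z @ np.linalg.solve(Z.T @ Z, Z.T @ y)

# A. Coefficient of determination: R^2 = 1 - SSE/SST
SSE = np.sum((y - y_fit) ** 2)
SST = np.sum((y - y.mean()) ** 2)
R2 = 1.0 - SSE / SST

# B. Multiple correlation: R = Cor(y, y_fit)
R = np.corrcoef(y, y_fit)[0, 1]
print(np.isclose(R2, R ** 2))  # True: the two measures agree
```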
How to Interpret the Regression Equation?

With p = 1 predictor the regression equation describes a straight line; with p = 2, a plane; with p ≥ 3, a hyperplane.

How to Interpret the Regression Coefficients? We have two interpretations.

Interpretation 1 (from the view of response change): βj is the expected change in the response Y for a one-unit increase in Xj when all the other predictor variables are held fixed.
Conclusion (Interpretation 1): βj measures the change in Y per unit change in Xj, with the other predictors fixed.

Interpretation 2 (from the view of linear adjustment): βj represents the effect of Xj on Y after both Y and Xj are linearly adjusted for the other predictor variables. To address this interpretation, let's use the simplest MLR model, with two predictors:

    Y = β0 + β1 X1 + β2 X2 + ε.
Results for P054.txt, the Supervisor Performance Data

We use X1 and X2 from the supervisor performance data to illustrate the ideas.

Regression Analysis: Y versus X1, X2. The fitted equation is

    Ŷ = β̂0 + 0.78 X1 − 0.05 X2

(the intercept and the table of standard errors, t-values, and p-values are not reproduced here; the slopes are those discussed below).

What can we see from the above regression results?
(1) Each unit of X1 adds 0.78 to Y when X2 is fixed.
(2) Each unit of X2 subtracts 0.05 from Y when X1 is fixed.
(3) 0.78 and −0.05 represent the effects of X1 and X2, respectively, on Y after Y and X1 (resp. X2) are linearly adjusted for X2 (resp. X1).
Linear Adjustment, Step by Step

Let's consider the effect of X2 on Y as an example. We use the following steps.

Step 1: SLR of Y on X1 (linearly adjust Y for X1), keeping the residuals e(Y|X1).
Step 2: SLR of X2 on X1 (linearly adjust X2 for X1), keeping the residuals e(X2|X1).
Step 3: Consider the effect of X2 after both Y and X2 are linearly adjusted for X1, i.e., regress e(Y|X1) on e(X2|X1) and take the slope, say θ̂. Here θ̂ stands for the effect of X2 after both Y and X2 are linearly adjusted for X1, and it is also the coefficient of X2 in the regression of Y on X1 and X2. Actually, we have θ̂ = β̂2.
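The three steps can be checked numerically. A minimal sketch in Python with simulated data (the predictor correlation and the true coefficients 2.0, −1.5 are illustrative assumptions, not values from the supervisor data):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)  # deliberately correlated predictors
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)

def slr_resid(v, u):
    """Residuals of a simple linear regression of v on u (with intercept)."""
    U = np.column_stack([np.ones(len(u)), u])
    return v - U @ np.linalg.solve(U.T @ U, U.T @ v)

# Step 1: linearly adjust Y for X1;  Step 2: linearly adjust X2 for X1
ey = slr_resid(y, x1)
e2 = slr_resid(x2, x1)

# Step 3: slope of the adjusted Y on the adjusted X2
theta = np.sum(ey * e2) / np.sum(e2 ** 2)

# Coefficient of X2 in the MLR of Y on X1 and X2
Z = np.column_stack([np.ones(n), x1, x2])
beta2 = np.linalg.solve(Z.T @ Z, Z.T @ y)[2]

print(np.isclose(theta, beta2))  # True: the two coefficients coincide
```

The agreement is exact (not approximate): this is the residual-regression identity the slide states, θ̂ = β̂2.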
The General Two-Predictor Case

In general, for a 2-predictor MLR model

    Y = β0 + β1 X1 + β2 X2 + ε,

we have:

Step 1 (linearly adjust Y on X1):  Y = a0 + a1 X1 + e(Y|X1).
Step 2 (linearly adjust X2 on X1):  X2 = c0 + c1 X1 + e(X2|X1).

Plug the linear adjustment of X2 on X1 from Step 2 into the model. That is, the model can be rewritten in terms of X1 and the adjusted predictor e(X2|X1), and Step 3, the SLR of e(Y|X1) on e(X2|X1), has slope β2. Moreover, the residuals of this Step-3 regression are exactly the residuals of the full MLR of Y on X1 and X2.
Byproduct: the MLR coefficients and the associated p SLR coefficients are not the same unless all predictor variables are uncorrelated. For example, when X1 and X2 are correlated, the SLR slope of Y on X2 generally differs from β̂2.

Remark: in non-experimental/observational data, predictor variables are rarely uncorrelated. In an experimental setting, however, the design is often set up to produce uncorrelated predictor variables, for simpler computation and interpretation of coefficients.
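The byproduct can be illustrated with an orthogonal design, as in a balanced two-factor experiment. A sketch in Python (the ±1 coding and the coefficients 1.2, −0.7 are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 80
# Orthogonal, centered design columns, as in a balanced experiment
x1 = np.tile([-1.0, 1.0], n // 2)
x2 = np.repeat([-1.0, 1.0], n // 2)
y = 5.0 + 1.2 * x1 - 0.7 * x2 + rng.normal(size=n)

def slr_slope(v, u):
    """Slope of a simple linear regression of v on u."""
    return np.sum((u - u.mean()) * (v - v.mean())) / np.sum((u - u.mean()) ** 2)

# MLR coefficients of Y on X1 and X2
Z = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.solve(Z.T @ Z, Z.T @ y)

# With Cor(x1, x2) = 0, each SLR slope equals the corresponding MLR coefficient
print(np.isclose(slr_slope(y, x1), b[1]), np.isclose(slr_slope(y, x2), b[2]))
```

With correlated predictors (for instance, replacing `x2` with `0.6 * x1 + noise`) the two sets of slopes would no longer match, which is the point of the remark.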
The General Case

Similarly, β1 is the slope of the SLR of e(Y|X2) on e(X1|X2). In general, for a MLR model with p predictors, each coefficient βj is the slope of the SLR of the adjusted response e(Y | all Xk, k ≠ j) on the adjusted predictor e(Xj | all Xk, k ≠ j). That is, βj represents the effect of Xj on Y after both Y and Xj are linearly adjusted for all the other predictor variables.
After-class Questions

How does the regression model quantify the effect of a predictor variable? How can you check whether the effect of a predictor variable is significant or not? Can you suggest some methods based on what you have learned in Chapter 2?