1
Introduction to Multiple Regression
James R. Stacks, Ph.D.
james_stacks@tamu-commerce.edu
"The best way to have a good idea is to have lots of ideas." (Linus Pauling)
2
$Z'_c = \beta_1 Z_{p1} + \beta_2 Z_{p2} + \beta_3 Z_{p3}$
Standardized form of a regression equation with three predictor variables
3
$Z'_c = \beta_1 Z_{p1} + \beta_2 Z_{p2} + \beta_3 Z_{p3}$
Predictor variables (standardized z scores): $Z_{p1}, Z_{p2}, Z_{p3}$
4
$Z'_c = \beta_1 Z_{p1} + \beta_2 Z_{p2} + \beta_3 Z_{p3}$
Standardized regression coefficients: $\beta_1, \beta_2, \beta_3$
5
$Z'_c = \beta_1 Z_{p1} + \beta_2 Z_{p2} + \beta_3 Z_{p3}$
Predicted criterion score: $Z'_c$ ($= Z_c - Z_e$)
6
$Z'_c = \beta_1 Z_{p1} + \beta_2 Z_{p2} + \beta_3 Z_{p3}$
Standardized regression coefficients: $\beta_1, \beta_2, \beta_3$
Predictor variables (standardized z scores): $Z_{p1}, Z_{p2}, Z_{p3}$
Predicted criterion score: $Z'_c$ ($= Z_c - Z_e$)
7
$Z'_c = \beta_1 Z_{p1} + \beta_2 Z_{p2} + \beta_3 Z_{p3}$
Predicted criterion score: $Z'_c$ ($= Z_c - Z_e$)
8
$Z'_c = \beta_1 Z_{p1} + \beta_2 Z_{p2} + \beta_3 Z_{p3}$
Predicted criterion score: $Z'_c$ ($= Z_c - Z_e$)
Recall that the predicted criterion score is the actual criterion score minus the error:
$Z_c = \beta_1 Z_{p1} + \beta_2 Z_{p2} + \beta_3 Z_{p3} + Z_e$
9
Recall that multiplying an entire equation through by the same value results in an equivalent equation: $y = bx$ is the same as $yx = bxx$, or as $yx = bx^2$.
10
The following demonstration of solving for standardized regression coefficients is taken largely from: Maruyama, Geoffrey M. (1998). Basics of structural equation modeling. Thousand Oaks, CA: Sage Publications, Inc.
11
Let's write three equivalent forms of the previous multiple regression equation by multiplying the original equation by each of the three predictor variables:
$Z_c Z_{p1} = \beta_1 Z_{p1} Z_{p1} + \beta_2 Z_{p2} Z_{p1} + \beta_3 Z_{p3} Z_{p1} + Z_e Z_{p1}$
$Z_c Z_{p2} = \beta_1 Z_{p1} Z_{p2} + \beta_2 Z_{p2} Z_{p2} + \beta_3 Z_{p3} Z_{p2} + Z_e Z_{p2}$
$Z_c Z_{p3} = \beta_1 Z_{p1} Z_{p3} + \beta_2 Z_{p2} Z_{p3} + \beta_3 Z_{p3} Z_{p3} + Z_e Z_{p3}$
(Maruyama, 1998)
12
Now notice all the $zz$ cross products in the equations. Recall that the expected (mean) cross product is something we are familiar with: the unbiased estimate of the expected cross product for paired z values is $\sum zz/(n-1)$, or Pearson r!
$Z_c Z_{p1} = \beta_1 Z_{p1} Z_{p1} + \beta_2 Z_{p2} Z_{p1} + \beta_3 Z_{p3} Z_{p1} + Z_e Z_{p1}$
$Z_c Z_{p2} = \beta_1 Z_{p1} Z_{p2} + \beta_2 Z_{p2} Z_{p2} + \beta_3 Z_{p3} Z_{p2} + Z_e Z_{p2}$
$Z_c Z_{p3} = \beta_1 Z_{p1} Z_{p3} + \beta_2 Z_{p2} Z_{p3} + \beta_3 Z_{p3} Z_{p3} + Z_e Z_{p3}$
(Maruyama, 1998)
13
The Pearson product-moment correlation coefficient (written as $r$ for the sample estimate, $\rho$ for the parameter):
$r = \dfrac{\sum_{i=1}^{n} Z_a Z_b}{n - 1}$
where $Z_a$ and $Z_b$ are z scores for each person on some measure a and some measure b, and n is the number of people.
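The formula can be checked numerically. A minimal sketch (an added illustration, not from the slides; the function name and data are made up):

```python
import numpy as np

def pearson_r(a, b):
    """Pearson r as the mean cross product of z scores, using n - 1
    both in the z scores (sample SD) and in the averaging."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(a)
    z_a = (a - a.mean()) / a.std(ddof=1)  # z scores with sample SD
    z_b = (b - b.mean()) / b.std(ddof=1)
    return np.sum(z_a * z_b) / (n - 1)

x = [2.0, 4.0, 4.5, 7.0, 8.0]  # made-up scores on measure a
y = [1.0, 3.0, 2.5, 6.0, 9.0]  # made-up scores on measure b
print(pearson_r(x, y))           # matches np.corrcoef(x, y)[0, 1]
```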
14
So, I could just as easily write:
$r_{cp1} = \beta_1 r_{p1p1} + \beta_2 r_{p2p1} + \beta_3 r_{p3p1} + r_{ep1}$
$r_{cp2} = \beta_1 r_{p1p2} + \beta_2 r_{p2p2} + \beta_3 r_{p3p2} + r_{ep2}$
$r_{cp3} = \beta_1 r_{p1p3} + \beta_2 r_{p2p3} + \beta_3 r_{p3p3} + r_{ep3}$
(Maruyama, 1998)
15
$r_{cp1} = \beta_1 r_{p1p1} + \beta_2 r_{p2p1} + \beta_3 r_{p3p1} + r_{ep1}$
$r_{cp2} = \beta_1 r_{p1p2} + \beta_2 r_{p2p2} + \beta_3 r_{p3p2} + r_{ep2}$
$r_{cp3} = \beta_1 r_{p1p3} + \beta_2 r_{p2p3} + \beta_3 r_{p3p3} + r_{ep3}$
Now let's look at some interesting things about the correlation coefficients we have substituted. Correlations of variables with themselves are necessarily unity, so $r_{p1p1}$, $r_{p2p2}$, and $r_{p3p3}$ equal 1. In regression, error is by definition the variance which does not correlate with any other variable, so $r_{ep1}$, $r_{ep2}$, and $r_{ep3}$ are necessarily 0.
(Maruyama, 1998)
16
$r_{cp1} = \beta_1(1) + \beta_2 r_{p2p1} + \beta_3 r_{p3p1}$
$r_{cp2} = \beta_1 r_{p1p2} + \beta_2(1) + \beta_3 r_{p3p2}$
$r_{cp3} = \beta_1 r_{p1p3} + \beta_2 r_{p2p3} + \beta_3(1)$
The above system can be written in matrix form:
$\begin{bmatrix} r_{cp1} \\ r_{cp2} \\ r_{cp3} \end{bmatrix} = \begin{bmatrix} \beta_1(1) + \beta_2 r_{p2p1} + \beta_3 r_{p3p1} \\ \beta_1 r_{p1p2} + \beta_2(1) + \beta_3 r_{p3p2} \\ \beta_1 r_{p1p3} + \beta_2 r_{p2p3} + \beta_3(1) \end{bmatrix}$
(Maruyama, 1998)
17
$\begin{bmatrix} r_{cp1} \\ r_{cp2} \\ r_{cp3} \end{bmatrix} = \begin{bmatrix} \beta_1(1) + \beta_2 r_{p2p1} + \beta_3 r_{p3p1} \\ \beta_1 r_{p1p2} + \beta_2(1) + \beta_3 r_{p3p2} \\ \beta_1 r_{p1p3} + \beta_2 r_{p2p3} + \beta_3(1) \end{bmatrix}$
Note that the matrix on the right side above is a vector, and it is the product of the correlation matrix of the predictor variables and the vector of betas:
$\begin{bmatrix} r_{cp1} \\ r_{cp2} \\ r_{cp3} \end{bmatrix} = \begin{bmatrix} 1 & r_{p2p1} & r_{p3p1} \\ r_{p1p2} & 1 & r_{p3p2} \\ r_{p1p3} & r_{p2p3} & 1 \end{bmatrix} \begin{bmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{bmatrix}$
(Maruyama, 1998)
18
$\begin{bmatrix} r_{cp1} \\ r_{cp2} \\ r_{cp3} \end{bmatrix} = \begin{bmatrix} 1 & r_{p2p1} & r_{p3p1} \\ r_{p1p2} & 1 & r_{p3p2} \\ r_{p1p3} & r_{p2p3} & 1 \end{bmatrix} \begin{bmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{bmatrix}$
The moral of this story is: assuming all the Pearson correlations among the variables are known (they are easily calculated), we can use the equation above to solve for the vector of betas, which are the standardized regression coefficients in
$Z'_c = \beta_1 Z_{p1} + \beta_2 Z_{p2} + \beta_3 Z_{p3}$
(Maruyama, 1998)
19
$\begin{bmatrix} r_{cp1} \\ r_{cp2} \\ r_{cp3} \end{bmatrix} = \begin{bmatrix} 1 & r_{p2p1} & r_{p3p1} \\ r_{p1p2} & 1 & r_{p3p2} \\ r_{p1p3} & r_{p2p3} & 1 \end{bmatrix} \begin{bmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{bmatrix}$
This is a matrix equation which can be symbolized as $R_{iy} = R_{ii} B_i$. From algebra, such an equation could seemingly be solved for $B_i$ by dividing both sides by $R_{ii}$, but there is no such thing as division in matrix math.
The matrix notation used here corresponds to your text: Tabachnick, Barbara G. & Fidell, Linda S. (2001). Using multivariate statistics (4th ed.). Needham Heights, MA: Allyn & Bacon.
20
What is necessary to accomplish the same goal is to multiply both sides of the equation by the inverse of $R_{ii}$, written as $R_{ii}^{-1}$:
$R_{ii}^{-1} R_{iy} = R_{ii}^{-1} R_{ii} B_i$, therefore $R_{ii}^{-1} R_{iy} = B_i$
If you have studied the assigned appendix on matrix algebra, you know that, while matrix multiplication is quite simple, matrix inversion is a real chore!
The matrix notation used here corresponds to your text (Tabachnick & Fidell, 2001).
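In matrix software, the whole solution is one line. A minimal sketch with numpy, using made-up correlation values purely for illustration:

```python
import numpy as np

# Made-up correlations, for illustration only:
R_ii = np.array([[1.0, 0.5, 0.3],   # predictor intercorrelations
                 [0.5, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])
R_iy = np.array([0.6, 0.7, 0.5])    # predictor/criterion correlations

B_i = np.linalg.inv(R_ii) @ R_iy    # R_ii^-1 R_iy = B_i, as above
# np.linalg.solve(R_ii, R_iy) gives the same answer without forming
# the inverse explicitly, and accumulates less rounding error.
print(B_i)
```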
21
$R_{iy} = R_{ii} B_i$
$\begin{bmatrix} r_{cp1} \\ r_{cp2} \\ r_{cp3} \end{bmatrix} = \begin{bmatrix} 1 & r_{p2p1} & r_{p3p1} \\ r_{p1p2} & 1 & r_{p3p2} \\ r_{p1p3} & r_{p2p3} & 1 \end{bmatrix} \begin{bmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{bmatrix}$
To get the solution we must find the inverse of the predictor correlation matrix $R_{ii}$ in order to use the equation $R_{ii}^{-1} R_{iy} = B_i$.
The matrix notation used here corresponds to your text (Tabachnick & Fidell, 2001).
24
The following method of inverting a matrix is taken largely from: Swokowski, Earl W. (1979). Fundamentals of college algebra. Boston, MA: Prindle, Weber & Schmidt.
25
The first step is to form a matrix which has the same number of rows as the original correlation matrix of predictors but twice as many columns. The original predictor correlations are placed in the left half, and an identity matrix of equal order is placed in the right half: $[\,R_{ii} \mid I\,]$.
(Swokowski, 1979)
26
Through a series of calculations called elementary row transformations, the goal is to change the numbers in the matrix so that the identity matrix is on the left and a new matrix, the inverse, is on the right: $[\,I \mid R_{ii}^{-1}\,]$.
(Swokowski, 1979)
27
MATRIX ROW TRANSFORMATION THEOREM: "Given a matrix of a system of linear equations, each of the following transformations results in a matrix of an equivalent system of linear equations:
(i) Interchanging any two rows.
(ii) Multiplying all of the elements in a row by the same nonzero real number k.
(iii) Adding to the elements in a row k times the corresponding elements of any other row, where k is any real number."
(Swokowski, 1979)
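The theorem above is all a general inversion routine needs. A sketch of the Gauss-Jordan procedure it licenses (an added illustration, not from the slides; it omits transformation (i), row interchange, which a robust routine would use for pivoting):

```python
import numpy as np

def invert_gauss_jordan(R):
    """Invert a square matrix using only transformations (ii) and (iii):
    scale a row by a nonzero constant, and add a multiple of one row to
    another. Assumes nonzero pivots, which holds for well-conditioned
    correlation matrices like the ones used here."""
    n = R.shape[0]
    aug = np.hstack([R.astype(float), np.eye(n)])  # form [R | I]
    for i in range(n):
        aug[i] = aug[i] / aug[i, i]                   # (ii): pivot -> 1
        for j in range(n):
            if j != i:
                aug[j] = aug[j] - aug[j, i] * aug[i]  # (iii): clear column i
    return aug[:, n:]                              # right half is now R^-1
```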
28
1st transformation: $a_{2j} = a_{2j} + (-.488)\,a_{1j}$
29
2nd transformation: $a_{3j} = a_{3j} + (-.354)\,a_{1j}$
30
3rd transformation: $a_{2j} = a_{2j} \cdot (1/.761856)$
31
4th transformation: $a_{3j} = a_{3j} + (-.199248)\,a_{2j}$
32
5th transformation: $a_{3j} = a_{3j} \cdot (1/.822574723)$
33
6th transformation: $a_{1j} = a_{1j} + (-.488)\,a_{2j}$
34
7th transformation: $a_{1j} = a_{1j} + (-.226373488)\,a_{3j}$
35
8th transformation: $a_{2j} = a_{2j} + (-.261529737)\,a_{3j}$
36
[The slide shows the original matrix on the left and the inverted matrix on the right.]
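The matrices themselves did not survive as text, but the multipliers quoted in transformations 1 through 8 are enough to infer the predictor correlations (for example, the first two steps imply $r_{p1p2} = .488$ and $r_{p1p3} = .354$, and the 4th-step constant implies $r_{p2p3} = .372$). A sketch that replays the eight transformations, assuming that reconstruction is correct:

```python
import numpy as np

# Predictor correlations inferred from the transformation constants
# above; an assumption, since the slides' matrices were images:
R = np.array([[1.000, 0.488, 0.354],
              [0.488, 1.000, 0.372],
              [0.354, 0.372, 1.000]])
a = np.hstack([R, np.eye(3)])    # [R | I]

a[1] += -0.488       * a[0]      # 1st transformation
a[2] += -0.354       * a[0]      # 2nd
a[1] *= 1 / 0.761856             # 3rd
a[2] += -0.199248    * a[1]      # 4th
a[2] *= 1 / 0.822574723          # 5th
a[0] += -0.488       * a[1]      # 6th
a[0] += -0.226373488 * a[2]      # 7th
a[1] += -0.261529737 * a[2]      # 8th

R_inv = a[:, 3:]                 # right half now holds the inverse
print(np.round(R @ R_inv, 6))    # approximately the identity matrix
```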
37
$R_{ii}^{-1} R_{iy} = B_i$
(inverse of predictor correlations) (predictor/criterion correlations) = (beta vector)
38
[The slide compares the beta values from our calculations with the values from SPSS: -.257, .150, .873.]
The difference has to do with rounding error. There are so many transformations in matrix math that all computations must be carried out with many, many significant figures, because the errors accumulate. I only used what was visible in my calculator; good matrix software should use much more precision. This is a relatively brief equation to solve; imagine the error that can accumulate with hundreds of matrix transformations. This is a very important point: one should always be certain the software is using the appropriate degree of precision.
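To see the precision point concretely, one can invert the same matrix in single and double precision and compare. An added illustration, using the matrix reconstructed above (itself an assumption):

```python
import numpy as np

R = np.array([[1.000, 0.488, 0.354],
              [0.488, 1.000, 0.372],
              [0.354, 0.372, 1.000]])

inv32 = np.linalg.inv(R.astype(np.float32))  # ~7 significant digits
inv64 = np.linalg.inv(R.astype(np.float64))  # ~16 significant digits
print(np.abs(inv32 - inv64).max())  # small here, but such discrepancies
                                    # compound over long chains of
                                    # transformations
```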
39
The regression equation can then be written:
$Z'_c = -.255\,Z_{p1} + .872\,Z_{p2} + .149\,Z_{p3}$
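Using the equation is then just arithmetic. A quick sketch with made-up predictor z scores (purely illustrative):

```python
# Made-up predictor z scores for one person:
z_p1, z_p2, z_p3 = 1.0, -0.5, 0.25

# Predicted criterion z score from the equation above:
z_c_hat = -0.255 * z_p1 + 0.872 * z_p2 + 0.149 * z_p3
print(z_c_hat)   # -0.65375
```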