Linear regression models in matrix terms
The regression function in matrix terms
Simple linear regression function For i = 1, …, n, the model is yᵢ = β₀ + β₁xᵢ + εᵢ.
Simple linear regression function in matrix notation In matrix terms the same model is Y = Xβ + ε, where Y is the n×1 vector of responses, X is the n×2 matrix with a column of 1’s and a column of the xᵢ values, β = (β₀, β₁)', and ε is the n×1 vector of error terms.
Definition of a matrix An r×c matrix is a rectangular array of symbols or numbers arranged in r rows and c columns. A matrix is almost always denoted by a single capital letter in boldface type.
Definition of a vector and a scalar A column vector is an r×1 matrix, that is, a matrix with only one column. A row vector is a 1×c matrix, that is, a matrix with only one row. A 1×1 “matrix” is called a scalar, but it’s just an ordinary number, such as 29 or σ².
Matrix multiplication The Xβ in the regression function is an example of matrix multiplication. Two matrices can be multiplied together only if the number of columns of the first matrix equals the number of rows of the second matrix. The resulting matrix has the same number of rows as the first matrix and the same number of columns as the second matrix.
Matrix multiplication If A is a 2×3 matrix and B is a 3×5 matrix then matrix multiplication AB is possible. The resulting matrix C = AB has … rows and … columns. Is the matrix multiplication BA possible? If X is an n×p matrix and β is a p×1 column vector, then Xβ is …
Matrix multiplication The entry in the i-th row and j-th column of C is the inner product (element-by-element products added together) of the i-th row of A with the j-th column of B.
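These rules are easy to check numerically; here is a minimal NumPy sketch (the matrices are arbitrary illustrations, not from the slides):

```python
import numpy as np

# A is 2x3 and B is 3x5, so AB is defined and has shape 2x5.
A = np.arange(6).reshape(2, 3)        # illustrative values
B = np.arange(15).reshape(3, 5)       # illustrative values
C = A @ B
print(C.shape)                        # (2, 5)

# Entry (i, j) of C is the inner product of row i of A with column j of B.
i, j = 1, 3
print(C[i, j] == A[i, :] @ B[:, j])   # True

# BA is not possible: B has 5 columns but A has only 2 rows.
try:
    B @ A
except ValueError as err:
    print("BA is not defined:", err)
```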
The Xβ multiplication in the simple linear regression setting Multiplying the n×2 matrix X by the 2×1 vector β = (β₀, β₁)' gives the n×1 vector whose i-th entry is β₀ + β₁xᵢ.
Matrix addition The Xβ+ε in the regression function is an example of matrix addition. Two matrices can be added together only if they have the same number of rows and columns. Simply add the corresponding elements of the two matrices: for example, add the entry in the first row, first column of the first matrix to the entry in the first row, first column of the second matrix, and so on.
Matrix addition For example, for two 2×2 matrices:
  [ 1  2 ]   [ 5  0 ]   [ 6  2 ]
  [ 3  4 ] + [ 1  2 ] = [ 4  6 ]
The Xβ+ε addition in the simple linear regression setting Adding the n×1 error vector ε to the n×1 vector Xβ gives the n×1 response vector Y: the i-th entry is yᵢ = β₀ + β₁xᵢ + εᵢ.
Multiple linear regression function in matrix notation The same equation Y = Xβ + ε applies, with X now an n×p matrix whose first column is all 1’s and whose remaining columns contain the predictor values, and β the p×1 vector of regression parameters β₀, β₁, ….
Least squares estimates of the parameters
Least squares estimates The p×1 vector b containing the estimates of the p parameters can be shown to equal
  b = (X'X)⁻¹ X'Y
where (X'X)⁻¹ is the inverse of the X'X matrix and X' is the transpose of the X matrix.
Definition of the transpose of a matrix The transpose of a matrix A is a matrix, denoted A' or Aᵀ, whose rows are the columns of A and whose columns are the rows of A … all in the same original order.
The X'X matrix in the simple linear regression setting With X made up of a column of 1’s and a column of the xᵢ values, X'X is the 2×2 matrix
  X'X = [  n     Σxᵢ  ]
        [ Σxᵢ    Σxᵢ² ]
Definition of the identity matrix The (square) n×n identity matrix, denoted Iₙ, is a matrix with 1’s on the diagonal and 0’s elsewhere. The identity matrix plays the same role as the number 1 in ordinary arithmetic.
Definition of the inverse of a matrix The inverse A⁻¹ of a square (!!) matrix A is the unique matrix such that A⁻¹A = AA⁻¹ = I.
Least squares estimates in the simple linear regression setting
  soap   suds   soap*suds   soap²
   4.0     33       132.0    16.00
   4.5     42       189.0    20.25
   5.0     45       225.0    25.00
   5.5     51       280.5    30.25
   6.0     53       318.0    36.00
   6.5     61       396.5    42.25
   7.0     62       434.0    49.00
  ----   ----   ---------   ------
  38.5    347      1975.0   218.75
Find X'X:
  X'X = [   7      38.5  ]
        [ 38.5    218.75 ]
Least squares estimates in the simple linear regression setting It’s very messy to determine inverses by hand. We let computers find inverses for us. Find the inverse of X'X. Therefore:
  (X'X)⁻¹ = (1/49) [ 218.75   -38.5 ]  ≈  [  4.464   -0.786 ]
                   [  -38.5       7 ]     [ -0.786    0.143 ]
Least squares estimates in the simple linear regression setting
  soap   suds   soap*suds   soap²
   4.0     33       132.0    16.00
   4.5     42       189.0    20.25
   5.0     45       225.0    25.00
   5.5     51       280.5    30.25
   6.0     53       318.0    36.00
   6.5     61       396.5    42.25
   7.0     62       434.0    49.00
  ----   ----   ---------   ------
  38.5    347      1975.0   218.75
Find X'Y:
  X'Y = [  347 ]
        [ 1975 ]
Least squares estimates in the simple linear regression setting Putting the pieces together:
  b = (X'X)⁻¹ X'Y = [ b₀ ]  =  [ -2.68 ]
                    [ b₁ ]     [  9.50 ]
The regression equation is suds = -2.68 + 9.50 soap
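The same computation can be sketched in NumPy with the soap and suds values from the table (the variable names here are my own):

```python
import numpy as np

soap = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0])
suds = np.array([33, 42, 45, 51, 53, 61, 62], dtype=float)

# Design matrix: a column of 1's and a column of soap values.
X = np.column_stack([np.ones_like(soap), soap])
Y = suds

XtX = X.T @ X                  # [[7, 38.5], [38.5, 218.75]]
XtY = X.T @ Y                  # [347, 1975]
b = np.linalg.inv(XtX) @ XtY   # least squares estimates

print(np.round(b, 2))          # [-2.68  9.5 ], matching suds = -2.68 + 9.50 soap
```

In practice np.linalg.solve(XtX, XtY) or np.linalg.lstsq is numerically preferable to forming the inverse explicitly, but the inverse mirrors the formula b = (X'X)⁻¹X'Y.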
Linear dependence The columns of a matrix are linearly dependent if (at least) one of the columns can be written as a linear combination of the others. If none of the columns can be written as a linear combination of the others, then we say the columns are linearly independent.
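For instance, in the illustrative matrix below (my own example, chosen so the dependence is easy to spot), the second column is 2 times the first column, so the columns are linearly dependent:
  [ 1   2   5 ]
  [ 3   6   2 ]
  [ 4   8   7 ]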
Linear dependence is not always obvious Formally, the columns a₁, a₂, …, aₙ of an n×n matrix are linearly dependent if there are constants c₁, c₂, …, cₙ, not all 0, such that c₁a₁ + c₂a₂ + … + cₙaₙ = 0 (the zero vector).
Implications of linear dependence on regression The inverse of a square matrix exists only if the columns are linearly independent. Since the regression estimate b depends on (X'X)⁻¹, the parameter estimates b₀, b₁, … cannot be (uniquely) determined if some of the columns of X are linearly dependent.
The main point about linear dependence If the columns of the X matrix (that is, if two or more of your predictor variables) are linearly dependent (or nearly so), you will run into trouble when trying to estimate the regression function.
Implications of linear dependence on regression
  soap1   soap2   suds
    4.0       8     33
    4.5       9     42
    5.0      10     45
    5.5      11     51
    6.0      12     53
    6.5      13     61
    7.0      14     62
* soap2 is highly correlated with other X variables
* soap2 has been removed from the equation
The regression equation is suds = -2.68 + 9.50 soap1
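A NumPy sketch of why the software drops soap2 (soap2 = 2 × soap1, exactly as in the table above):

```python
import numpy as np

soap1 = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0])
soap2 = 2 * soap1                    # exactly a linear combination of soap1
suds = np.array([33, 42, 45, 51, 53, 61, 62], dtype=float)

X = np.column_stack([np.ones_like(soap1), soap1, soap2])
XtX = X.T @ X

# X has 3 columns but only rank 2, so X'X is singular and (X'X)^-1 does not exist:
print(np.linalg.matrix_rank(X))      # 2
print(np.linalg.det(XtX))            # 0 (up to rounding)
```

np.linalg.lstsq would still return a solution here, but the individual coefficients for soap1 and soap2 are not uniquely determined, which is why the software drops soap2 from the equation.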
Fitted values and residuals
Fitted values The n×1 vector of fitted values, denoted Ŷ, is Ŷ = Xb; its i-th element is the fitted value ŷᵢ = b₀ + b₁xᵢ for i = 1, …, n.
The vector of fitted values is sometimes represented as a function of the hat matrix H. That is: Ŷ = Xb = X(X'X)⁻¹X'Y = HY, where H = X(X'X)⁻¹X'.
The residual vector For i = 1, …, n, the i-th residual is eᵢ = yᵢ - ŷᵢ; collecting these gives the n×1 residual vector e = Y - Ŷ.
The residual vector written as a function of the hat matrix e = Y - Ŷ = Y - HY = (I - H)Y.
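Continuing the NumPy sketch with the soap data, the hat matrix reproduces the fitted values and residuals (variable names are my own):

```python
import numpy as np

soap = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0])
suds = np.array([33, 42, 45, 51, 53, 61, 62], dtype=float)
X = np.column_stack([np.ones_like(soap), soap])
Y = suds

H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix, n x n
fitted = H @ Y                            # Y-hat = H Y
resid = (np.eye(len(Y)) - H) @ Y          # e = (I - H) Y

b = np.linalg.inv(X.T @ X) @ (X.T @ Y)
print(np.allclose(fitted, X @ b))         # True: H Y equals X b
print(np.allclose(resid, Y - fitted))     # True: (I - H) Y equals Y - Y-hat
```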
Sum of squares and the analysis of variance table
Analysis of variance table in matrix terms
  Source        DF     SS    MS    F
  Regression    p-1
  Error         n-p
  Total         n-1
Sum of squares In general, if you pre-multiply a vector by its transpose, you get a sum of squares: a'a = a₁² + a₂² + … + aₙ².
Error sum of squares SSE = e'e = (Y - Xb)'(Y - Xb), which multiplies out to Y'Y - b'X'Y.
Total sum of squares Previously, we’d write SSTO = Σ(yᵢ - ȳ)². But, it can be shown that equivalently SSTO = Y'Y - (1/n)Y'JY, where J is a (square) n×n matrix containing all 1’s.
An example of total sum of squares If n = 2, computing Σ(yᵢ - ȳ)² directly and computing Y'Y - (1/2)Y'JY give the same answer.
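A quick numeric check with illustrative values y₁ = 1 and y₂ = 3 (my own numbers, just to show the two forms agree): ȳ = 2, so Σ(yᵢ - ȳ)² = (1 - 2)² + (3 - 2)² = 2, and Y'Y - (1/2)Y'JY = (1² + 3²) - (1/2)(1 + 3)² = 10 - 8 = 2.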
Analysis of variance table in matrix terms
  Source        DF     SS    MS    F
  Regression    p-1
  Error         n-p
  Total         n-1
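For the soap data, the entries of this table can be checked numerically with the matrix formulas from the previous slides (a sketch; the values in the comments are rounded):

```python
import numpy as np

soap = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0])
suds = np.array([33, 42, 45, 51, 53, 61, 62], dtype=float)
X = np.column_stack([np.ones_like(soap), soap])
Y = suds
n, p = X.shape

b = np.linalg.inv(X.T @ X) @ (X.T @ Y)
J = np.ones((n, n))

SSTO = Y @ Y - (1 / n) * Y @ J @ Y        # about 651.71
SSE  = Y @ Y - b @ X.T @ Y                # about 19.96
SSR  = SSTO - SSE                         # about 631.75
MSR, MSE = SSR / (p - 1), SSE / (n - p)

print(round(MSR / MSE, 1))                # F statistic, about 158.2
```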
Model assumptions
Error term assumptions As always, the error terms εᵢ are:
– independent
– normally distributed (with mean 0)
– with equal variances σ²
Now, how can we say the same thing using matrices and vectors?
Error terms as a random vector The n×1 random error term vector, denoted as ε, is ε = (ε₁, ε₂, …, εₙ)'.
The mean (expectation) of the random error term vector The n×1 mean error term vector, denoted as E(ε), is by definition E(ε) = (E(ε₁), E(ε₂), …, E(εₙ))'; by assumption each E(εᵢ) = 0, so E(ε) = 0, the n×1 zero vector.
The variance of the random error term vector The n×n variance matrix, denoted as σ²(ε), is defined as the matrix whose diagonal elements are the variances of the errors and whose off-diagonal elements are the covariances between errors.
The ASSUMED variance of the random error term vector BUT, we assume the error terms are independent (covariances are 0) and have equal variances (σ²).
Scalar by matrix multiplication Just multiply each element of the matrix by the scalar. For example:
  3 × [ 1  2 ]  =  [  3   6 ]
      [ 4  5 ]     [ 12  15 ]
The ASSUMED variance of the random error term vector Under these assumptions, σ²(ε) = σ²Iₙ: an n×n matrix with σ² down the diagonal and 0’s everywhere else.
The general linear regression model Putting the regression function and assumptions all together, we get Y = Xβ + ε, where:
Y is a ( ) vector of response values
β is a ( ) vector of unknown parameters
X is an ( ) matrix of predictor values
ε is an ( ) vector of independent, normal error terms with mean 0 and (equal) variance σ²I.
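A short simulation sketch of this model (the β values, σ², and predictor values below are made-up illustrations, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50
x = rng.uniform(4, 7, size=n)           # illustrative predictor values
X = np.column_stack([np.ones(n), x])    # n x p design matrix (here p = 2)
beta = np.array([-2.68, 9.50])          # illustrative "true" parameters
sigma2 = 4.0                            # illustrative error variance

# epsilon is n x 1 with mean 0 and variance-covariance matrix sigma2 * I:
eps = rng.multivariate_normal(np.zeros(n), sigma2 * np.eye(n))
Y = X @ beta + eps                      # Y = X beta + epsilon

b = np.linalg.inv(X.T @ X) @ (X.T @ Y)
print(np.round(b, 2))                   # least squares estimates of beta
```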