Linear regression models in matrix terms

The regression function in matrix terms

Simple linear regression function: for i = 1, …, n,

Y_i = β_0 + β_1X_i + ε_i

Simple linear regression function in matrix notation:

Y = Xβ + ε

where Y = [Y_1, Y_2, …, Y_n]' is an n×1 vector, X is the n×2 matrix whose rows are [1, X_i], β = [β_0, β_1]' is a 2×1 vector, and ε = [ε_1, ε_2, …, ε_n]' is an n×1 vector.

Definition of a matrix An r×c matrix is a rectangular array of symbols or numbers arranged in r rows and c columns. A matrix is almost always denoted by a single capital letter in boldface type.

Definition of a vector and a scalar A column vector is an r×1 matrix, that is, a matrix with only one column. A row vector is a 1×c matrix, that is, a matrix with only one row. A 1×1 “matrix” is called a scalar, but it’s just an ordinary number, such as 29 or σ².

Matrix multiplication The Xβ in the regression function is an example of matrix multiplication. Two matrices can be multiplied together:
– Only if the number of columns of the first matrix equals the number of rows of the second matrix.
– The number of rows of the resulting matrix equals the number of rows of the first matrix.
– The number of columns of the resulting matrix equals the number of columns of the second matrix.

Matrix multiplication If A is a 2×3 matrix and B is a 3×5 matrix, then the matrix multiplication AB is possible. The resulting matrix C = AB has … rows and … columns. Is the matrix multiplication BA possible? If X is an n×p matrix and β is a p×1 column vector, then Xβ is …

Matrix multiplication The entry in the i-th row and j-th column of C is the inner product (element-by-element products added together) of the i-th row of A with the j-th column of B.
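
In symbols (a standard identity, added here for clarity; rows of a matrix are written between brackets and separated by semicolons): if C = AB with A of size m×n and B of size n×p, then

c_ij = Σ_{k=1}^{n} a_ik b_kj

For example, [1 2; 3 4][5; 6] = [1·5 + 2·6; 3·5 + 4·6] = [17; 39].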

The Xβ multiplication in the simple linear regression setting:

Xβ = [β_0 + β_1X_1, β_0 + β_1X_2, …, β_0 + β_1X_n]'

an n×1 column vector.

Matrix addition The Xβ + ε in the regression function is an example of matrix addition. Simply add the corresponding elements of the two matrices:
– For example, add the entry in the first row, first column of the first matrix with the entry in the first row, first column of the second matrix, and so on.
Two matrices can be added together only if they have the same number of rows and columns.

Matrix addition For example:

[1 2; 3 4] + [5 6; 7 8] = [1+5 2+6; 3+7 4+8] = [6 8; 10 12]

The Xβ + ε addition in the simple linear regression setting:

Xβ + ε = [β_0 + β_1X_1 + ε_1, β_0 + β_1X_2 + ε_2, …, β_0 + β_1X_n + ε_n]' = Y

Multiple linear regression function in matrix notation: again Y = Xβ + ε, where now X is an n×p matrix whose columns are a column of 1’s followed by the p−1 predictor variables, and β = [β_0, β_1, …, β_{p−1}]' is a p×1 vector of parameters.

Least squares estimates of the parameters

Least squares estimates The p×1 vector b containing the estimates of the p parameters can be shown to equal:

b = (X'X)⁻¹X'Y

where (X'X)⁻¹ is the inverse of the X'X matrix and X' is the transpose of the X matrix.
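
A one-line justification (standard least squares algebra, added for completeness): minimizing the sum of squared errors (Y − Xb)'(Y − Xb) over b and setting the derivative to zero gives the normal equations

X'X b = X'Y

and multiplying both sides by (X'X)⁻¹, when it exists, yields b = (X'X)⁻¹X'Y.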

Definition of the transpose of a matrix The transpose of a matrix A is a matrix, denoted A' or Aᵀ, whose rows are the columns of A and whose columns are the rows of A … all in the same original order.
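
For instance (a standard illustration, not from the original slides):

A = [1 2 3; 4 5 6] is 2×3, so A' = [1 4; 2 5; 3 6] is 3×2.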

The X'X matrix in the simple linear regression setting is the 2×2 matrix:

X'X = [n ΣX_i; ΣX_i ΣX_i²]

Definition of the identity matrix The (square) n×n identity matrix, denoted I_n, is a matrix with 1’s on the diagonal and 0’s elsewhere. The identity matrix plays the same role as the number 1 in ordinary arithmetic.

Definition of the inverse of a matrix The inverse A⁻¹ of a square (!!) matrix A is the unique matrix such that:

A⁻¹A = AA⁻¹ = I
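
A small 2×2 check (added for concreteness): if A = [2 1; 1 1], then A⁻¹ = [1 −1; −1 2], since

AA⁻¹ = [2·1 + 1·(−1)  2·(−1) + 1·2; 1·1 + 1·(−1)  1·(−1) + 1·2] = [1 0; 0 1] = I_2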

Least squares estimates in simple linear regression setting [Worked example: a data table with columns soap, suds, soap×suds, and soap² appeared here.] Find X'X.

Least squares estimates in simple linear regression setting It’s very messy to determine inverses by hand, so we let computers find them for us. Find the inverse of X'X; the estimates then follow from b = (X'X)⁻¹X'Y.
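
A minimal NumPy sketch of this computation (the original slide’s soap/suds numbers did not survive the transcript, so the data values below are made-up placeholders):

import numpy as np

# Hypothetical soap/suds-style data; illustrative values only.
soap = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0])
suds = np.array([33.0, 42.0, 45.0, 51.0, 53.0, 61.0, 62.0])

n = len(soap)
X = np.column_stack([np.ones(n), soap])  # n x 2 design matrix [1, x]
XtX = X.T @ X                            # X'X
XtY = X.T @ suds                         # X'Y

# b = (X'X)^(-1) X'Y; solve() is preferred over forming the inverse
# explicitly, for numerical stability.
b = np.linalg.solve(XtX, XtY)
print("intercept b_0 =", b[0], " slope b_1 =", b[1])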

Least squares estimates in simple linear regression setting [The same soap/suds data table appeared here.] Find X'Y.

Least squares estimates in simple linear regression setting The resulting regression equation is of the form suds = b_0 + b_1 soap, with b_0 and b_1 taken from the computed vector b = (X'X)⁻¹X'Y.

Linear dependence The columns of a matrix are linearly dependent if (at least) one of the columns can be written as a linear combination of the others. If none of the columns can be written as a linear combination of the others, then we say the columns are linearly independent.

Linear dependence is not always obvious Formally, the columns a_1, a_2, …, a_n of an n×n matrix are linearly dependent if there are constants c_1, c_2, …, c_n, not all 0, such that:

c_1a_1 + c_2a_2 + … + c_na_n = 0
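
A quick illustration (added, not from the original slides): the columns of [1 2; 2 4] are linearly dependent, since taking c_1 = 2 and c_2 = −1 gives 2·[1; 2] − 1·[2; 4] = [0; 0]. Here the second column is just twice the first, but in a larger matrix such a relationship is rarely visible at a glance.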

Implications of linear dependence on regression The inverse of a square matrix exists only if the columns are linearly independent. Since the regression estimate b depends on (X'X)⁻¹, the parameter estimates b_0, b_1, …, cannot be (uniquely) determined if some of the columns of X are linearly dependent.

The main point about linear dependence If the columns of the X matrix (that is, if two or more of your predictor variables) are linearly dependent (or nearly so), you will run into trouble when trying to estimate the regression function.
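
A small sketch of this failure mode (illustrative; uses NumPy, and the data are made up): duplicating a predictor, up to a constant multiple, makes X'X singular.

import numpy as np

soap1 = np.array([4.0, 4.5, 5.0, 5.5, 6.0])
soap2 = 2.0 * soap1                  # perfectly dependent predictor
X = np.column_stack([np.ones(5), soap1, soap2])

XtX = X.T @ X
print(np.linalg.matrix_rank(XtX))    # 2, not 3: X'X is singular
try:
    np.linalg.inv(XtX)
except np.linalg.LinAlgError as err:  # inversion fails
    print("cannot invert X'X:", err)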

Implications of linear dependence on regression [Example: soap1, soap2, and suds data, where soap2 is linearly dependent on soap1.] The regression software reports:
* soap2 is highly correlated with other X variables
* soap2 has been removed from the equation
The regression equation is of the form: suds = b_0 + b_1 soap1

Fitted values and residuals

Fitted values The vector of fitted values is

Ŷ = Xb

that is, Ŷ_i = b_0 + b_1X_i for i = 1, …, n.

The vector of fitted values is sometimes represented as a function of the hat matrix H. That is:

Ŷ = HY, where H = X(X'X)⁻¹X'

The residual vector: for i = 1, …, n, e_i = Y_i − Ŷ_i, or in matrix terms e = Y − Ŷ.

The residual vector written as a function of the hat matrix:

e = Y − Ŷ = Y − HY = (I − H)Y
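
A self-contained NumPy sketch of these identities (same made-up soap/suds-style data as in the earlier sketch):

import numpy as np

soap = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0])
suds = np.array([33.0, 42.0, 45.0, 51.0, 53.0, 61.0, 62.0])
n = len(soap)
X = np.column_stack([np.ones(n), soap])

# Hat matrix H = X (X'X)^(-1) X'; solve() avoids forming the inverse.
H = X @ np.linalg.solve(X.T @ X, X.T)
y_hat = H @ suds                         # fitted values: Y-hat = H Y
resid = (np.eye(n) - H) @ suds           # residuals: e = (I - H) Y
print(np.allclose(y_hat + resid, suds))  # True: Y decomposes as Y-hat + e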

Sum of squares and the analysis of variance table

Analysis of variance table in matrix terms:

Source      DF     SS   MS   F
Regression  p − 1
Error       n − p
Total       n − 1

(The SS, MS, and F columns are filled in below, once the matrix forms of the sums of squares have been defined.)

Sum of squares In general, if you pre-multiply a vector by its transpose, you get a sum of squares:

a'a = a_1² + a_2² + … + a_n²

Error sum of squares:

SSE = e'e = (Y − Xb)'(Y − Xb) = Y'Y − b'X'Y

Total sum of squares Previously, we’d write:

SSTO = Σ(Y_i − Ȳ)²

But, it can be shown that, equivalently:

SSTO = Y'Y − (1/n)Y'JY

where J is a (square) n×n matrix containing all 1’s.

An example of total sum of squares If n = 2:

SSTO = (Y_1 − Ȳ)² + (Y_2 − Ȳ)²

But, note that we get the same answer by:

Y'Y − (1/2)Y'JY = Y_1² + Y_2² − (1/2)(Y_1 + Y_2)²
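
With made-up numbers (illustrative only): take Y = [1; 3], so Ȳ = 2. Then SSTO = (1 − 2)² + (3 − 2)² = 2, and the matrix form gives Y'Y − (1/2)Y'JY = (1 + 9) − (1/2)(1 + 3)² = 10 − 8 = 2, the same answer.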

Analysis of variance table in matrix terms:

Source      DF     SS                        MS                 F
Regression  p − 1  SSR = b'X'Y − (1/n)Y'JY   MSR = SSR/(p − 1)  F = MSR/MSE
Error       n − p  SSE = Y'Y − b'X'Y         MSE = SSE/(n − p)
Total       n − 1  SSTO = Y'Y − (1/n)Y'JY

Model assumptions

Error term assumptions As always, the error terms ε_i are:
– independent
– normally distributed (with mean 0)
– with equal variances σ²
Now, how can we say the same thing using matrices and vectors?

Error terms as a random vector The n×1 random error term vector, denoted as ε, is:

ε = [ε_1, ε_2, …, ε_n]'

The mean (expectation) of the random error term vector The n×1 mean error term vector, denoted as E(ε), is by definition the vector of expected values, and by assumption it is the zero vector:

E(ε) = [E(ε_1), E(ε_2), …, E(ε_n)]' = [0, 0, …, 0]' = 0

The variance of the random error term vector The n×n variance matrix, denoted as σ²(ε), is defined as:

σ²(ε) = E[(ε − E(ε))(ε − E(ε))']

Diagonal elements are variances of the errors. Off-diagonal elements are covariances between errors.

The ASSUMED variance of the random error term vector BUT, we assume the error terms are independent (covariances are 0) and have equal variances (σ²), so every off-diagonal entry of σ²(ε) is 0 and every diagonal entry is σ².

Scalar by matrix multiplication Just multiply each element of the matrix by the scalar. For example:

2 × [1 2; 3 4] = [2 4; 6 8]

The ASSUMED variance of the random error term vector Combining the two assumptions with the scalar multiplication rule:

σ²(ε) = [σ² 0 … 0; 0 σ² … 0; …; 0 0 … σ²] = σ²I

The general linear regression model Putting the regression function and assumptions all together, we get:

Y = Xβ + ε

where:
– Y is an (n×1) vector of response values
– β is a (p×1) vector of unknown parameters
– X is an (n×p) matrix of predictor values
– ε is an (n×1) vector of independent, normal error terms with mean 0 and (equal) variance σ²I
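
To tie the pieces together, here is a minimal end-to-end simulation of this model (illustrative only; all numbers are made up), generating data from Y = Xβ + ε with ε ~ N(0, σ²I) and recovering the least squares estimates:

import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
beta = np.array([2.0, 0.5, -1.0])     # true (p x 1) parameter vector
sigma = 0.3

# Design matrix: a column of 1's plus p-1 predictor columns.
X = np.column_stack([np.ones(n), rng.uniform(0, 10, size=(n, p - 1))])
eps = rng.normal(0.0, sigma, size=n)  # independent N(0, sigma^2) errors
Y = X @ beta + eps                    # the general linear model

b = np.linalg.solve(X.T @ X, X.T @ Y)  # b = (X'X)^(-1) X'Y
print("true beta:", beta)
print("estimate b:", np.round(b, 3))   # close to beta for large n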