Stats 443.3 & 851.3 Linear Models.

Instructor: W.H. Laverty, Office: 235 McLean Hall, Phone: 966-6096. Lectures: M W F 9:30 am – 10:20 am, Geol 269. Lab: Tuesday 2:30 pm – 3:30 pm. Evaluation: Assignments and term tests – 40%; Final examination – 60%.

The lectures will be given in PowerPoint.

Course Outline
- Introduction
- Review of Linear Algebra and Matrix Analysis
- Review of Probability Theory and Statistical Theory
- Multivariate Normal distribution
- The General Linear Model: Theory and Application
- Special applications of the General Linear Model: Analysis of Variance models, Analysis of Covariance models

A chart illustrating Statistical Procedures

                          Independent variables
Dependent variables   Categorical                    Continuous                      Continuous & Categorical
Categorical           Multiway frequency analysis    Discriminant analysis           ??
                      (log-linear model)
Continuous            ANOVA (single dep. var.)       Multiple regression             ANACOVA (single dep. var.)
                      MANOVA (mult. dep. var.)       (single dep. var.)              MANACOVA (mult. dep. var.)
                                                     Multivariate regression
                                                     (mult. dep. var.)

A Review of Linear Algebra With some Additions

Matrix Algebra. Definition: An n × m matrix, A, is a rectangular array of elements with n rows and m columns; its dimensions are n × m.

Definition: A vector, v, of dimension n is an n × 1 matrix, i.e. a rectangular array of elements arranged in a single column. Vectors will be column vectors (they may also be row vectors).

A vector, v, of dimension n can be thought of as a point in n-dimensional space.

[Figure: a vector as a point in 3-dimensional space, with coordinate axes v1, v2, v3]

Matrix Operations. Addition: Let A = (aij) and B = (bij) denote two n × m matrices. Then the sum, A + B, is the n × m matrix (aij + bij). The dimensions of A and B are required to be the same, namely n × m.

Scalar Multiplication: Let A = (aij) denote an n × m matrix and let c be any scalar. Then cA is the matrix (caij).

Addition for vectors. [Figure: the parallelogram rule for adding two vectors in 3-dimensional space, axes v1, v2, v3]

Scalar Multiplication for vectors

Matrix multiplication: Let A = (aij) denote an n × m matrix and B = (bjl) denote an m × k matrix. Then the n × k matrix C = (cil), where cil = Σj aij bjl (the sum running over j = 1, …, m), is called the product of A and B and is denoted by A∙B.
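As an illustration (a numpy sketch with arbitrary example matrices, not part of the original slides), the element-wise definition cil = Σj aij bjl agrees with the built-in matrix product:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # an n x m example matrix
B = rng.standard_normal((4, 2))   # an m x k example matrix

# Element-wise definition: c_il = sum_j a_ij * b_jl
C = np.empty((3, 2))
for i in range(3):
    for l in range(2):
        C[i, l] = sum(A[i, j] * B[j, l] for j in range(4))

assert np.allclose(C, A @ B)      # agrees with numpy's product
assert C.shape == (3, 2)          # the product is n x k
```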

In the case that A = (aij) is an n × m matrix and B = v = (vj) is an m × 1 vector, then w = A∙v = (wi), where wi = Σj aij vj, is an n × 1 vector.

Definition: An n × n identity matrix, I, is the square matrix with ones on the diagonal and zeros elsewhere, i.e. I = (δij) with δij = 1 if i = j and δij = 0 otherwise. Note: AI = A and IA = A.

Definition (The inverse of an n × n matrix): Let A denote an n × n matrix and let B denote an n × n matrix such that AB = BA = I. If the matrix B exists then A is called invertible. Also B is called the inverse of A and is denoted by A-1.

Note: Let A and B be two matrices whose inverses exist, and let C = AB. Then the inverse of the matrix C exists and C-1 = B-1A-1. Proof: C[B-1A-1] = [AB][B-1A-1] = A[BB-1]A-1 = A[I]A-1 = AA-1 = I, and similarly [B-1A-1]C = I.
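This reversed-order rule is easy to check numerically (an illustrative numpy sketch with arbitrary invertible matrices, not from the slides):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])
C = A @ B

# (AB)^-1 = B^-1 A^-1 (note the reversed order)
Cinv = np.linalg.inv(B) @ np.linalg.inv(A)
assert np.allclose(Cinv, np.linalg.inv(C))
assert np.allclose(C @ Cinv, np.eye(2))
```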

The Woodbury Theorem: (A + BCD)-1 = A-1 - A-1B(C-1 + DA-1B)-1DA-1, where the inverses A-1 and C-1 exist.

Proof: Let H = A-1 - A-1B(C-1 + DA-1B)-1DA-1. Then all we need to show is that H(A + BCD) = (A + BCD)H = I.
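A numerical check of the Woodbury identity (a numpy sketch with arbitrary example matrices, not part of the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 5, 2
A = np.diag(rng.uniform(1.0, 2.0, n))   # an easy-to-invert diagonal A
B = rng.standard_normal((n, k))
C = np.eye(k)                           # C = I_k for simplicity
D = B.T

# left side: direct inverse of A + BCD
lhs = np.linalg.inv(A + B @ C @ D)
# right side: the Woodbury formula
Ainv = np.linalg.inv(A)
rhs = Ainv - Ainv @ B @ np.linalg.inv(np.linalg.inv(C) + D @ Ainv @ B) @ D @ Ainv
assert np.allclose(lhs, rhs)
```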

The Woodbury theorem can be used to find the inverse of some pattern matrices: Example: Find the inverse of the n × n matrix

where hence and

Thus Now using the Woodbury theorem

Thus

where

Note: for n = 2

Also

Now

This verifies that we have calculated the inverse.

Block Matrices: Let the n × m matrix A be partitioned into sub-matrices A11, A12, A21, A22: A = [A11 A12; A21 A22]. Similarly partition the m × k matrix B = [B11 B12; B21 B22].

Product of Blocked Matrices: Then AB = [A11B11 + A12B21, A11B12 + A12B22; A21B11 + A22B21, A21B12 + A22B22].

The Inverse of Blocked Matrices: Let the n × n matrix A be partitioned into sub-matrices A11, A12, A21, A22, and similarly partition the n × n matrix B into B11, B12, B21, B22. Suppose that B = A-1.

Then AB = I gives the four equations: (1) A11B11 + A12B21 = I, (2) A11B12 + A12B22 = 0, (3) A21B11 + A22B21 = 0, (4) A21B12 + A22B22 = I.

Hence, from (3), A21B11 + A22B21 = 0, so B21 = -A22-1A21B11; substituting into (1), A11B11 + A12B21 = I, gives (A11 - A12A22-1A21)B11 = I.

Hence B11 = (A11 - A12A22-1A21)-1 or, using the Woodbury Theorem, B11 = A11-1 + A11-1A12(A22 - A21A11-1A12)-1A21A11-1. Similarly B22 = (A22 - A21A11-1A12)-1.

From (2), A11B12 + A12B22 = 0, so B12 = -A11-1A12B22, and similarly B21 = -A22-1A21B11.

Summarizing: Let A = [A11 A12; A21 A22] and suppose that A-1 = B = [B11 B12; B21 B22]. Then B11 = (A11 - A12A22-1A21)-1, B22 = (A22 - A21A11-1A12)-1, B12 = -A11-1A12B22, and B21 = -A22-1A21B11.
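These block-inverse formulas can be verified numerically (a numpy sketch with an arbitrary well-conditioned example matrix, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # safely invertible example
A11, A12 = M[:2, :2], M[:2, 2:]
A21, A22 = M[2:, :2], M[2:, 2:]

A22inv = np.linalg.inv(A22)
B11 = np.linalg.inv(A11 - A12 @ A22inv @ A21)     # (A11 - A12 A22^-1 A21)^-1
B22 = np.linalg.inv(A22 - A21 @ np.linalg.inv(A11) @ A12)
B12 = -np.linalg.inv(A11) @ A12 @ B22
B21 = -A22inv @ A21 @ B11

B = np.block([[B11, B12], [B21, B22]])
assert np.allclose(B, np.linalg.inv(M))           # matches the direct inverse
```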

Example Let Find A-1 = B

The transpose of a matrix: Consider the n × m matrix A = (aij). Then the m × n matrix A′ = (aji) (also denoted by AT) is called the transpose of A.

Symmetric Matrices: An n × n matrix, A, is said to be symmetric if A′ = A. Note: (A′)′ = A, (AB)′ = B′A′, and (A-1)′ = (A′)-1.

The trace and the determinant of a square matrix: Let A denote the n × n matrix (aij). Then the trace of A is tr(A) = Σi aii, the sum of the diagonal elements of A.

Also the determinant |A| can be computed by cofactor expansion along any row i: |A| = Σj aij Aij, where Aij = (-1)i+j Mij and Mij is the determinant of the matrix obtained by deleting row i and column j of A.
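Both definitions are easy to verify numerically (an illustrative numpy sketch with an arbitrary example matrix, not part of the slides):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# trace = sum of diagonal elements
assert np.isclose(np.trace(A), 2 + 3 + 2)

# determinant by cofactor expansion along the first row
det = 2 * (3 * 2 - 1 * 1) - 1 * (1 * 2 - 1 * 0) + 0
assert np.isclose(np.linalg.det(A), det)   # det = 8
```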

Some properties: tr(A + B) = tr(A) + tr(B), tr(AB) = tr(BA), |A′| = |A|, |AB| = |A||B|, and |A-1| = 1/|A|.

Some additional Linear Algebra

Inner product of vectors: Let u and v denote two p × 1 vectors. Then the inner product of u and v is u′v = Σi uivi.

Note: Let u and v denote two p × 1 vectors. Then u′v = v′u, and v′v = Σi vi2 = ‖v‖2, the squared length of v.

Note: Let u and v denote two p × 1 vectors. Then u′v = ‖u‖‖v‖cos θ, where θ is the angle between u and v; in particular u and v are orthogonal (at right angles) when u′v = 0.

Special Types of Matrices. Orthogonal matrices: A matrix P is orthogonal if P′P = PP′ = I. In this case P-1 = P′. Also the rows (columns) of P have length 1 and are orthogonal to each other.

Suppose P is an orthogonal matrix and let u and v denote p × 1 vectors. Then (Pu)′(Pv) = u′P′Pv = u′v, so orthogonal transformations preserve lengths and angles – rotations about the origin and reflections.
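For example (a numpy sketch, not from the slides), a 2 × 2 rotation matrix is orthogonal and preserves lengths and inner products:

```python
import numpy as np

theta = 0.7
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation about the origin

assert np.allclose(P.T @ P, np.eye(2))            # P'P = I
assert np.allclose(np.linalg.inv(P), P.T)         # P^-1 = P'

u = np.array([3.0, 4.0])
v = np.array([-1.0, 2.0])
# lengths and inner products (hence angles) are preserved
assert np.isclose(np.linalg.norm(P @ u), np.linalg.norm(u))
assert np.isclose((P @ u) @ (P @ v), u @ v)
```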

Example The following matrix P is orthogonal

Special Types of Matrices (continued). Positive definite matrices: A symmetric matrix, A, is called positive definite if x′Ax > 0 for all x ≠ 0. A symmetric matrix, A, is called positive semi-definite if x′Ax ≥ 0 for all x.

If the matrix A is positive definite then

Theorem: The matrix A is positive definite if all of its leading principal minors are positive: a11 > 0, |a11 a12; a21 a22| > 0, …, |A| > 0.
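A numerical illustration of this criterion (a numpy sketch with an arbitrary example matrix, not from the slides), together with the equivalent eigenvalue test:

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# all leading principal minors positive => A is positive definite
minors = [np.linalg.det(A[:k, :k]) for k in (1, 2, 3)]
assert all(m > 0 for m in minors)

# equivalently, all eigenvalues of the symmetric matrix A are positive
assert np.all(np.linalg.eigvalsh(A) > 0)
```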

Example

Special Types of Matrices (continued). Idempotent matrices: A symmetric matrix, E, is called idempotent if E2 = EE = E. Idempotent matrices project vectors onto a linear subspace.
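The hat matrix of least squares is a standard example (a numpy sketch with an arbitrary design matrix X, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((6, 2))

# hat matrix: projects onto the column space of X
E = X @ np.linalg.inv(X.T @ X) @ X.T

assert np.allclose(E, E.T)        # symmetric
assert np.allclose(E @ E, E)      # idempotent: E^2 = E

# E leaves vectors already in the subspace unchanged
v = X @ np.array([1.0, -2.0])
assert np.allclose(E @ v, v)
```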

Example

Example (continued)

Vector subspaces of ℝn

Let n denote all n-dimensional vectors (n-dimensional Euclidean space). Let M denote any subset of n. Then M is a vector subspace of n if: M If M and M then M If M then M .

Example 1 of a vector subspace: Let M = {x : a′x = 0}, where a is any n-dimensional vector. Then M is a vector subspace of ℝn. Note: M is an (n - 1)-dimensional plane through the origin.

Proof: 0 ∈ M since a′0 = 0. If u, v ∈ M then a′(u + v) = a′u + a′v = 0, so u + v ∈ M; and a′(cv) = c a′v = 0, so cv ∈ M.

Projection onto M: Let x be any vector. The projection of x onto M is x - (a′x / a′a)a, which lies in M.

Example 2 of a vector subspace: Let M = {c1x1 + … + cpxp : c1, …, cp scalars}. Then M is a vector subspace of ℝn. M is called the vector space spanned by the p n-dimensional vectors x1, …, xp. M is the plane of smallest dimension through the origin that contains the vectors x1, …, xp.

Eigenvectors, Eigenvalues of a matrix

Definition: Let A be an n × n matrix. If Av = λv for some scalar λ and some nonzero vector v, then λ is called an eigenvalue of A and v is called an eigenvector of A.

Note: Av = λv is equivalent to (A - λI)v = 0, which has a nonzero solution v if and only if |A - λI| = 0.

|A - λI| is a polynomial of degree n in λ. Hence there are n possible eigenvalues λ1, …, λn.

Theorem: If the matrix A is symmetric then the eigenvalues of A, λ1, …, λn, are real. Theorem: If the matrix A is positive definite then the eigenvalues of A, λ1, …, λn, are positive. Proof: A is positive definite if x′Ax > 0 for all x ≠ 0. Let λ be an eigenvalue with corresponding eigenvector v ≠ 0. Then Av = λv, so 0 < v′Av = λv′v, and since v′v > 0 it follows that λ > 0.

Theorem: If the matrix A is symmetric with eigenvalues λ1, …, λn and corresponding eigenvectors v1, …, vn, and if λi ≠ λj, then vi′vj = 0. Proof: Note λi vj′vi = vj′Avi = (Avj)′vi = λj vj′vi, so (λi - λj)vj′vi = 0, and since λi ≠ λj we have vj′vi = 0.

Theorem: Suppose the matrix A is symmetric with distinct eigenvalues λ1, …, λn and corresponding eigenvectors v1, …, vn of unit length (vi′vi = 1). Let P = (v1, …, vn) and D = diag(λ1, …, λn). Then A = PDP′.

Proof: Note that P′P = I, since the columns of P are orthogonal and of unit length, and a matrix P with P′P = PP′ = I is called an orthogonal matrix.

Therefore AP = PD, and thus A = PDP-1 = PDP′.

Comment: The previous result is also true if the eigenvalues are not distinct. Namely, if the matrix A is symmetric with eigenvalues λ1, …, λn and corresponding eigenvectors of unit length v1, …, vn, then A = PDP′ = Σi λi vivi′.
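The decomposition A = PDP′ can be checked numerically (a numpy sketch with an arbitrary symmetric example matrix, not part of the slides):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])     # symmetric

lam, P = np.linalg.eigh(A)     # eigenvalues (ascending) and orthonormal eigenvectors
assert np.allclose(lam, [1.0, 3.0])
assert np.allclose(P.T @ P, np.eye(2))          # P is orthogonal
assert np.allclose(P @ np.diag(lam) @ P.T, A)   # A = P D P'
```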

An algorithm for computing eigenvectors and eigenvalues of positive definite matrices: Generally, to compute the eigenvalues of a matrix we first solve the equation |A - λI| = 0 (a polynomial of degree n in λ) for all values of λ, then solve the equation (A - λI)v = 0 for the eigenvector v.

Recall that if A is positive definite then A = PDP′ = Σi λi vivi′. It can be shown that Am = Σi λi^m vivi′, and that for large m, Am ≈ λ1^m v1v1′, where λ1 is the largest eigenvalue.

Thus for large values of m, Am ≈ λ1^m v1v1′. The algorithm: 1. Compute powers of A: A2, A4, A8, A16, … 2. Rescale (so that the largest element is 1, say). 3. Continue until there is no change; the resulting matrix will be proportional to v1v1′. 4. Find v1 (normalize a column of the limit) and λ1.

To find λ2 and v2, repeat the preceding steps with the matrix A - λ1v1v1′. Continue in the same way to find λ3, v3, and so on.
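The repeated-squaring steps above can be sketched as follows (a numpy illustration on a small positive definite matrix, not from the slides):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # positive definite; eigenvalues 1 and 3

# repeated squaring with rescaling, as described in the slides
M = A.copy()
for _ in range(6):               # M is proportional to A^(2^6) after rescaling
    M = M @ M
    M = M / np.abs(M).max()      # rescale so the largest element is 1

# M is now (approximately) proportional to v1 v1'
v1 = M[:, 0] / np.linalg.norm(M[:, 0])
lam1 = v1 @ A @ v1               # Rayleigh quotient recovers lambda_1
assert np.isclose(lam1, 3.0, atol=1e-6)
```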

Example A =

Differentiation with respect to a vector, matrix

Differentiation with respect to a vector: Let x = (x1, …, xp)′ denote a p × 1 vector and let f(x) denote a function of the components of x. Then ∂f/∂x denotes the p × 1 vector (∂f/∂x1, …, ∂f/∂xp)′.

Rules: 1. Suppose f(x) = a′x. Then ∂f/∂x = a.

2. Suppose f(x) = x′Ax, where A is symmetric. Then ∂f/∂x = 2Ax.

Example 1. Determine when is a maximum or minimum. solution

2. Determine when x′Ax is a maximum subject to x′x = 1. Assume A is a positive definite matrix. Solution: Form the Lagrangian x′Ax - λ(x′x - 1), where λ is the Lagrange multiplier; setting the derivative with respect to x to zero gives 2Ax - 2λx = 0, i.e. Ax = λx. This shows that x is an eigenvector of A, and since x′Ax = λx′x = λ, x is the eigenvector of A associated with the largest eigenvalue, λ.
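A numerical check (a numpy sketch on an arbitrary positive definite matrix, not from the slides) that the constrained maximum is attained at the top eigenvector:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# the maximizer of x'Ax subject to x'x = 1 is the top eigenvector
lam, V = np.linalg.eigh(A)
x_star = V[:, -1]                          # eigenvector of the largest eigenvalue
assert np.isclose(x_star @ A @ x_star, lam[-1])

# any other unit vector gives a value no larger than lambda_max
rng = np.random.default_rng(5)
for _ in range(100):
    x = rng.standard_normal(2)
    x = x / np.linalg.norm(x)
    assert x @ A @ x <= lam[-1] + 1e-9
```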

Differentiation with respect to a matrix: Let X denote a q × p matrix and let f(X) denote a function of the components xij of X. Then ∂f/∂X denotes the q × p matrix with (i,j)th element ∂f/∂xij.

Example: Let X denote a p × p matrix and let f(X) = ln |X|. Solution: Note ∂|X|/∂xij = Xij, where the Xij are cofactors, and Xij/|X| is the (j,i)th element of X-1. Hence ∂ ln |X|/∂X = (X-1)′.

Example: Let X and A denote p × p matrices and let f(X) = tr(AX). Solution: tr(AX) = Σi Σj aij xji, so ∂f/∂xij = aji, and hence ∂ tr(AX)/∂X = A′.

Differentiation of a matrix of functions: Let U = (uij) denote a q × p matrix of functions of x. Then dU/dx denotes the q × p matrix (duij/dx).

Rules: 1. d(U + V)/dx = dU/dx + dV/dx. 2. d(UV)/dx = (dU/dx)V + U(dV/dx). 3. d(U-1)/dx = -U-1(dU/dx)U-1.

Proof:

Proof:

Proof:

The Generalized Inverse of a matrix

Recall that B (denoted by A-1) is called the inverse of A if AB = BA = I. A-1 does not exist for all matrices A; A-1 exists only if A is a square matrix and |A| ≠ 0. If A-1 exists then the system of linear equations Ax = b has a unique solution, x = A-1b.

Definition: B (denoted by A-) is called the generalized inverse (Moore–Penrose inverse) of A if: 1. ABA = A; 2. BAB = B; 3. (AB)′ = AB; 4. (BA)′ = BA. Note: A- is unique. Proof: Let B1 and B2 both satisfy 1. ABiA = A; 2. BiABi = Bi; 3. (ABi)′ = ABi; 4. (BiA)′ = BiA.

Hence B1 = B1AB1 = B1AB2AB1 = B1(AB2)′(AB1)′ = B1B2′A′B1′A′ = B1B2′A′ = B1AB2 = B1AB2AB2 = (B1A)(B2A)B2 = (B1A)′(B2A)′B2 = A′B1′A′B2′B2 = A′B2′B2 = (B2A)′B2 = B2AB2 = B2. The general solution of a system of equations: the general solution of Ax = b (assuming a solution exists) is x = A-b + (I - A-A)z, where z is arbitrary.

Suppose a solution x0 exists, i.e. Ax0 = b. Let x = A-b + (I - A-A)z. Then Ax = AA-b + (A - AA-A)z = AA-b = AA-Ax0 = Ax0 = b, so x solves the system.
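These properties can be illustrated numerically; numpy's pinv computes the Moore–Penrose inverse (a sketch with an arbitrary rank-deficient but consistent system, not from the slides):

```python
import numpy as np

# a rank-deficient system A x = b that is nonetheless consistent
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank 1
b = np.array([3.0, 6.0])            # b lies in the column space of A

Am = np.linalg.pinv(A)              # Moore-Penrose generalized inverse

# the four Penrose conditions
assert np.allclose(A @ Am @ A, A)
assert np.allclose(Am @ A @ Am, Am)
assert np.allclose((A @ Am).T, A @ Am)
assert np.allclose((Am @ A).T, Am @ A)

# general solution: x = A- b + (I - A- A) z for arbitrary z
for z in (np.zeros(2), np.array([5.0, -1.0])):
    x = Am @ b + (np.eye(2) - Am @ A) @ z
    assert np.allclose(A @ x, b)
```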

Calculation of the Moore–Penrose g-inverse: Let A be a p × q matrix of rank q < p. Then A- = (A′A)-1A′. Proof: A(A′A)-1A′A = A and (A′A)-1A′A(A′A)-1A′ = (A′A)-1A′, so conditions 1 and 2 hold; also AA- = A(A′A)-1A′ and A-A = (A′A)-1A′A = I are symmetric, so conditions 3 and 4 hold.
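A numerical check of this formula (a numpy sketch with an arbitrary full-column-rank matrix, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 3))       # p x q with rank q < p (full column rank)

Am = np.linalg.inv(A.T @ A) @ A.T     # A- = (A'A)^-1 A'
assert np.allclose(Am, np.linalg.pinv(A))   # matches numpy's Moore-Penrose inverse
assert np.allclose(Am @ A, np.eye(3))       # A- is a left inverse of A
```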

Let B be a p × q matrix of rank p < q. Then B- = B′(BB′)-1. Proof: BB′(BB′)-1B = B and B′(BB′)-1BB′(BB′)-1 = B′(BB′)-1, so conditions 1 and 2 hold; also BB- = BB′(BB′)-1 = I and B-B = B′(BB′)-1B are symmetric, so conditions 3 and 4 hold.

Let C be a p × q matrix of rank k < min(p, q). Then C = AB, where A is a p × k matrix of rank k and B is a k × q matrix of rank k, and C- = B-A- = B′(BB′)-1(A′A)-1A′. Proof: CC- = ABB′(BB′)-1(A′A)-1A′ = A(A′A)-1A′ is symmetric, as well as C-C = B′(BB′)-1(A′A)-1A′AB = B′(BB′)-1B; also CC-C = A(A′A)-1A′AB = AB = C and C-CC- = B′(BB′)-1BB′(BB′)-1(A′A)-1A′ = B′(BB′)-1(A′A)-1A′ = C-.