Lecture XXVII

Orthonormal Bases and Projections

Suppose that a set of vectors {x_1, …, x_r} forms a basis for some space S in R^m such that r ≤ m. For mathematical simplicity, we may want to form an orthogonal basis for this space. One way to form such a basis is the Gram-Schmidt orthonormalization. In this procedure, we want to generate a new set of vectors {y_1, …, y_r} that are orthonormal.

The Gram-Schmidt process is:
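The transcript does not reproduce the slide's equation; a standard statement of the Gram-Schmidt recursion, in the notation above, is

\[
y_1 = x_1, \qquad
y_k = x_k - \sum_{j=1}^{k-1} \frac{y_j' x_k}{y_j' y_j}\, y_j, \quad k = 2, \ldots, r,
\]

with each y_k subsequently normalized to z_k = y_k / (y_k' y_k)^{1/2}.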

Example

The vectors can then be normalized to unit length. To test for orthogonality, one checks that the inner product of each pair of distinct vectors is zero: y_i' y_j = 0 for i ≠ j.
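A minimal NumPy sketch of the procedure; the basis vectors below are hypothetical, chosen only for illustration:

import numpy as np

def gram_schmidt(X):
    """Orthonormalize the columns of X (assumed linearly independent)."""
    m, r = X.shape
    Z = np.zeros((m, r))
    for k in range(r):
        # subtract the projections onto the vectors already constructed
        y = X[:, k] - Z[:, :k] @ (Z[:, :k].T @ X[:, k])
        Z[:, k] = y / np.linalg.norm(y)   # normalize to unit length
    return Z

# hypothetical basis for a two-dimensional subspace of R^3
X1 = np.array([[1.0, 1.0],
               [1.0, 0.0],
               [0.0, 1.0]])
Z1 = gram_schmidt(X1)
print(np.round(Z1.T @ Z1, 10))   # should print the 2 x 2 identity (orthonormality check)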

Theorem 2.13 Every r-dimensional vector space, except the zero-dimensional space {0}, has an orthonormal basis.

Theorem 2.14. Let {z_1, …, z_r} be an orthonormal basis for some vector subspace S of R^m. Then each x ∈ R^m can be expressed uniquely as x = u + v, where u ∈ S and v is a vector that is orthogonal to every vector in S.

Definition 2.10. Let S be a vector subspace of R^m. The orthogonal complement of S, denoted S⊥, is the collection of all vectors in R^m that are orthogonal to every vector in S; that is, S⊥ = {x : x ∈ R^m and x'y = 0 for all y ∈ S}.

Theorem. If S is a vector subspace of R^m, then its orthogonal complement S⊥ is also a vector subspace of R^m.

Projection Matrices

The orthogonal projection of an m x 1 vector x onto a vector space S can be expressed in matrix form. Let {z_1, …, z_r} be any orthonormal basis for S, while {z_1, …, z_m} is an orthonormal basis for R^m. Any vector x can be written as x = α_1 z_1 + ⋯ + α_m z_m = Zα, where Z = (z_1, …, z_m).

Aggregating α = (α_1', α_2')', where α_1 = (α_1, …, α_r)' and α_2 = (α_{r+1}, …, α_m)', and assuming a similar decomposition Z = [Z_1 Z_2], the vector x can be written as x = Z_1α_1 + Z_2α_2. Given orthogonality, we know that Z_1'Z_1 = I_r and Z_1'Z_2 = (0), and so Z_1'x = α_1 and the projection of x onto S is Z_1α_1 = Z_1Z_1'x.

Theorem 2.17. Suppose the columns of the m x r matrix Z_1 form an orthonormal basis for the vector space S, which is a subspace of R^m. If x ∈ R^m, the orthogonal projection of x onto S is given by Z_1Z_1'x.

Projection matrices allow the division of the space into a spanned space and a set of orthogonal deviations from the spanning set. One such separation involves the Gram-Schmidt system.
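A brief NumPy illustration of Theorem 2.17, using a hypothetical subspace of R^3:

import numpy as np

# hypothetical subspace S of R^3, spanned by two vectors
X1 = np.array([[1.0, 1.0],
               [1.0, 0.0],
               [0.0, 1.0]])
Z1, _ = np.linalg.qr(X1)       # columns of Z1: an orthonormal basis for S

x = np.array([2.0, -1.0, 3.0])
P = Z1 @ Z1.T                  # projection matrix of Theorem 2.17
u = P @ x                      # orthogonal projection of x onto S
v = x - u                      # deviation, orthogonal to S

print(np.round(Z1.T @ v, 10))  # ≈ 0: v is orthogonal to every vector in S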

In general, if we define the m x r matrix X_1 = (x_1, …, x_r) and denote by A the linear transformation of this matrix that produces an orthonormal basis, so that Z_1 = X_1A, we are left with the result that A'X_1'X_1A = Z_1'Z_1 = I_r.

Given that the matrix A is nonsingular, this implies AA' = (X_1'X_1)^{-1}, and the projection matrix that maps any vector x onto the spanning set then becomes Z_1Z_1' = X_1AA'X_1' = X_1(X_1'X_1)^{-1}X_1'.

Ordinary least squares is also a spanning decomposition. In the traditional linear model y = Xβ + ε, β is chosen to minimize the error between y and the estimated y, that is, to minimize (y - Xβ)'(y - Xβ).

This problem implies minimizing the distance between the observed y and the predicted plane Xβ, which implies orthogonality. If X has full column rank, the projection matrix becomes X(X'X)^{-1}X' and the projection then becomes Xβ = X(X'X)^{-1}X'y.

Premultiplying each side by X' yields X'Xβ = X'X(X'X)^{-1}X'y = X'y, so that β = (X'X)^{-1}X'y.
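A small NumPy sketch of this projection view of least squares; the data below are made up for illustration:

import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # made-up design matrix with intercept
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.1, size=n)

P = X @ np.linalg.inv(X.T @ X) @ X.T          # projection matrix onto the column space of X
y_hat = P @ y                                 # fitted values: orthogonal projection of y
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y   # solution of the normal equations

print(np.allclose(y_hat, X @ beta_hat))       # True: both give the same fitted values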

Idempotent matrices can be defined as any matrix A such that AA = A. Note that the sum of squared errors under the projection can be expressed in terms of the residual vector e = y - Xβ = (I_n - X(X'X)^{-1}X')y.

In general, the matrix I_n - X(X'X)^{-1}X' is referred to as an idempotent matrix, that is, one for which AA = A:

(I_n - X(X'X)^{-1}X')(I_n - X(X'X)^{-1}X') = I_n - 2X(X'X)^{-1}X' + X(X'X)^{-1}X'X(X'X)^{-1}X' = I_n - X(X'X)^{-1}X'.

Thus, the SSE can be expressed as e'e = y'(I_n - X(X'X)^{-1}X')y, which is the sum of squared orthogonal errors from the regression.
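A quick numerical check of these identities, again with made-up data:

import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(20), rng.normal(size=(20, 2))])   # made-up design matrix
y = rng.normal(size=20)

M = np.eye(20) - X @ np.linalg.inv(X.T @ X) @ X.T   # the residual maker I_n - X(X'X)^{-1}X'
e = M @ y                                           # regression residuals

print(np.allclose(M @ M, M))          # True: M is idempotent
print(np.allclose(e @ e, y @ M @ y))  # True: SSE = e'e = y'My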

Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors (or, more appropriately, latent roots and characteristic vectors) are defined by the solution of Ax = λx for a nonzero x. Mathematically, we can solve for the eigenvalue by rearranging the terms: (A - λI)x = 0.

Solving for λ then involves solving the characteristic equation that is implied by |A - λI| = 0. Again using the matrix in the previous example:

In general, there are m roots to the characteristic equation. Some of these roots may be the same. In the above case, the roots are complex. Turning to another example:

The eigenvectors are then determined by the linear dependence in the A - λI matrix. Taking the last example, the first and second rows are obviously linearly dependent. The reduced system then implies that, as long as x_1 = x_2 and x_3 = 0, the product (A - λI)x is zero.
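The transcript does not reproduce the example matrices, so the sketch below uses a hypothetical matrix simply to show the computation in NumPy:

import numpy as np

A = np.array([[2.0, 1.0, 0.0],     # hypothetical matrix, not the one from the slides
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

lam, H = np.linalg.eig(A)          # characteristic roots and characteristic vectors
print(np.round(lam, 10))           # roots of the characteristic equation |A - lambda*I| = 0

# each eigenpair satisfies (A - lambda*I) x = 0
for l, x in zip(lam, H.T):
    print(np.round((A - l * np.eye(3)) @ x, 10))   # ≈ 0 in every case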

Theorem. For any symmetric matrix A, there exists an orthogonal matrix H (that is, a square matrix satisfying H'H = I) such that H'AH = Λ, where Λ is a diagonal matrix. The diagonal elements of Λ are called the characteristic roots (or eigenvalues) of A. The i-th column of H is called the characteristic vector (or eigenvector) of A corresponding to the i-th characteristic root of A.

The proof follows directly from the definition of eigenvalues. Letting H be the matrix with the eigenvectors in its columns, it is obvious that AH = HΛ, and hence H'AH = H'HΛ = Λ.
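A quick NumPy check of the theorem on a hypothetical symmetric matrix:

import numpy as np

A = np.array([[4.0, 1.0, 2.0],     # hypothetical symmetric matrix
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])

lam, H = np.linalg.eigh(A)         # eigh is for symmetric matrices; H comes out orthogonal
Lam = np.diag(lam)

print(np.allclose(H.T @ H, np.eye(3)))   # True: H'H = I
print(np.allclose(H.T @ A @ H, Lam))     # True: H'AH = Lambda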

Kronecker Products

Two special matrix operations that you will encounter are the Kronecker product and the vec() operator. The Kronecker product A ⊗ B multiplies each element of the first matrix by the entire second matrix, producing the block matrix whose (i, j) block is a_ij B.
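A tiny NumPy illustration with made-up matrices:

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

print(np.kron(A, B))   # 4 x 4 block matrix: the (i, j) block is A[i, j] * B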

The vec(.) operator then involves stacking the columns of a matrix on top of one another.
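In NumPy, stacking the columns corresponds to flattening in column-major (Fortran) order; a minimal sketch:

import numpy as np

A = np.array([[1, 2],
              [3, 4]])

vec_A = A.flatten(order="F")   # stack the columns of A on top of one another
print(vec_A)                   # [1 3 2 4]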