Maths for Signals and Systems: Linear Algebra in Engineering
Lecture 6, Friday 21st October 2016
Dr Tania Stathaki, Reader (Associate Professor) in Signal Processing, Imperial College London

Mathematics for Signals and Systems
In this set of lectures we will talk about:
- Spaces other than vector spaces.
- Orthogonal subspaces.
- Projections onto spaces.
- How to solve the problem $Ax=b$ when $b$ does not belong to the column space of $A$.
- Least Squares Minimization (Linear Regression).

Mathematics for Signals and Systems: Generalization of the concept of space
- Consider a set of entities (objects) that are not necessarily 1-D vectors.
- Assume that multiplication by a scalar and addition can be defined for these entities.
- A new space can be defined from all linear combinations of such entities.
Example: Space of matrices
- Consider the space $M$ which contains all $3\times 3$ matrices. Addition and multiplication by a scalar can be applied to matrices, e.g., if $A$ and $B$ are matrices, then $A+B$ and $cA$ are matrices too.
- Examples of subspaces of $M$: all upper triangular matrices; all lower triangular matrices; all symmetric matrices; all diagonal matrices.

Mathematics for Signals and Systems: Space of Matrices
- Consider the space $M$ of all $3\times 3$ matrices. The dimension of this space is 9, i.e., $\dim M = 9$. The simplest basis of this space is:
$$\begin{pmatrix}1&0&0\\0&0&0\\0&0&0\end{pmatrix},\;\begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix},\;\begin{pmatrix}0&0&1\\0&0&0\\0&0&0\end{pmatrix},\;\dots,\;\begin{pmatrix}0&0&0\\0&0&0\\0&0&1\end{pmatrix}$$
Any $3\times 3$ matrix can be written as a linear combination of the above matrices.
- Consider the subspace $S$ of all $3\times 3$ symmetric matrices. (Keep in mind that symmetry applies to square matrices only.) The dimension of this subspace is 6, i.e., $\dim S = 6$. The simplest basis of this subspace is:
$$\begin{pmatrix}1&0&0\\0&0&0\\0&0&0\end{pmatrix},\;\begin{pmatrix}0&1&0\\1&0&0\\0&0&0\end{pmatrix},\;\begin{pmatrix}0&0&1\\0&0&0\\1&0&0\end{pmatrix},\;\begin{pmatrix}0&0&0\\0&1&0\\0&0&0\end{pmatrix},\;\begin{pmatrix}0&0&0\\0&0&1\\0&1&0\end{pmatrix},\;\begin{pmatrix}0&0&0\\0&0&0\\0&0&1\end{pmatrix}$$
Any $3\times 3$ symmetric matrix can be written as a linear combination of the above matrices.

Space of Matrices cont.
- Consider the subspace $U$ of all $3\times 3$ upper triangular matrices. The dimension of this subspace is 6, i.e., $\dim U = 6$. The simplest basis of this subspace is:
$$\begin{pmatrix}1&0&0\\0&0&0\\0&0&0\end{pmatrix},\;\begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix},\;\begin{pmatrix}0&0&1\\0&0&0\\0&0&0\end{pmatrix},\;\begin{pmatrix}0&0&0\\0&1&0\\0&0&0\end{pmatrix},\;\begin{pmatrix}0&0&0\\0&0&1\\0&0&0\end{pmatrix},\;\begin{pmatrix}0&0&0\\0&0&0\\0&0&1\end{pmatrix}$$
- Consider the intersection of the symmetric and the upper triangular matrices, $S\cap U$. This subspace consists of the diagonal matrices. Its dimension is 3, i.e., $\dim(S\cap U) = 3$. A basis is:
$$\begin{pmatrix}1&0&0\\0&0&0\\0&0&0\end{pmatrix},\;\begin{pmatrix}0&0&0\\0&1&0\\0&0&0\end{pmatrix},\;\begin{pmatrix}0&0&0\\0&0&0\\0&0&1\end{pmatrix}$$
- Note that the union $S\cup U$ is not a subspace: it is not closed under addition, since the sum of a symmetric matrix and an upper triangular matrix is in general neither.

Space of Matrices cont.
- As mentioned, the union $S\cup U$ is not a subspace. Let's consider the sum $S+U$ instead.
- Question: What matrices can I get from $S+U$, where $S$ and $U$ are the subspaces of symmetric and upper triangular $n\times n$ matrices? Answer: I can get any $n\times n$ matrix. Can you prove why?
- We notice that for $n=3$ we have: $\dim(S+U)=9$, $\dim S = 6$, $\dim U = 6$.
- In general we can prove that:
$$\dim S + \dim U = \dim(S+U) + \dim(S\cap U)$$
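A quick numerical check of this dimension count, as a NumPy sketch (the vectorised basis construction below is my own illustration, not part of the slides):

```python
import numpy as np

n = 3

# Vectorised basis of S (symmetric n x n matrices): E_ii and E_ij + E_ji.
sym_basis = []
for i in range(n):
    for j in range(i, n):
        E = np.zeros((n, n))
        E[i, j] = E[j, i] = 1.0
        sym_basis.append(E.ravel())

# Vectorised basis of U (upper triangular n x n matrices): E_ij for i <= j.
upp_basis = []
for i in range(n):
    for j in range(i, n):
        E = np.zeros((n, n))
        E[i, j] = 1.0
        upp_basis.append(E.ravel())

S = np.array(sym_basis)   # 6 basis vectors in R^9
U = np.array(upp_basis)   # 6 basis vectors in R^9

dim_S = np.linalg.matrix_rank(S)                    # 6
dim_U = np.linalg.matrix_rank(U)                    # 6
dim_sum = np.linalg.matrix_rank(np.vstack([S, U]))  # 9: S + U is all of M
dim_int = dim_S + dim_U - dim_sum                   # 3: the diagonal matrices
print(dim_S, dim_U, dim_sum, dim_int)               # 6 6 9 3
```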

Rank One Matrices
- An example of a $2\times 3$ matrix of rank equal to 1 is:
$$A = \begin{pmatrix}1&4&5\\2&8&10\end{pmatrix}$$
- A basis of the row space of the above matrix is the vector $\begin{pmatrix}1&4&5\end{pmatrix}$.
- A basis of the column space of the above matrix is the vector $\begin{pmatrix}1\\2\end{pmatrix}$.
- Note that the well-known property $\dim C(A) = r = \dim C(A^T)$ holds. In this example $r=1$.

Rank One Matrices cont.
- We can write the previous matrix $A$ as:
$$A = \begin{pmatrix}1&4&5\\2&8&10\end{pmatrix} = \begin{pmatrix}1\\2\end{pmatrix}\begin{pmatrix}1&4&5\end{pmatrix}$$
- Every rank one matrix can be decomposed into the product of a column vector and a row vector, $A = vw^T$, with $v$ and $w$ column vectors.
- Conversely, if $v$ and $w$ are nonzero column vectors, the matrix $vw^T$ is always of rank 1.
- All matrices can be written as combinations of rank one matrices. Rank one matrices are the building blocks for all matrices!
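A small NumPy illustration of both claims; the matrix $B$ and its SVD-based rank-one expansion are a hypothetical example of mine, not from the slides:

```python
import numpy as np

# The slide's rank-one matrix as an outer product v w^T.
v = np.array([[1.0], [2.0]])
w = np.array([[1.0], [4.0], [5.0]])
A = v @ w.T                              # [[1, 4, 5], [2, 8, 10]]
print(np.linalg.matrix_rank(A))          # 1

# Any matrix is a sum of rank-one matrices, e.g. via its SVD.
B = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])
U, s, Vt = np.linalg.svd(B, full_matrices=False)
B_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
print(np.allclose(B, B_rebuilt))         # True
```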

Orthogonal subspaces: definition and a couple of questions
- Suppose that a subspace $S$ is orthogonal to a subspace $T$. What does this mean? It means that every vector in $S$ is orthogonal to every vector in $T$.
- Assume we are in $\mathbb{R}^3$. Consider two 2-D planes which are perpendicular to each other (for example, a wall and the floor). Are these planes orthogonal subspaces? (The answer is NO: the line of their intersection belongs to both, and a nonzero vector cannot be orthogonal to itself.)
- Assume we are in $\mathbb{R}^2$:
  1. When is a line through the origin orthogonal to the entire plane?
  2. When is a line through the origin orthogonal to the zero subspace $\{\mathbf{0}\}$?
  3. When is a line through the origin orthogonal to another line through the origin?
(Answers: 1. never, 2. always, 3. when they form a $90^{\circ}$ angle.)

Orthogonal subspaces: more questions
- The row space and the null space of a matrix $A$ of size $m\times n$ are orthogonal. Why? Vectors which satisfy $Ax=\mathbf{0}$ are orthogonal to all rows of $A$.
- Assume we are in $\mathbb{R}^3$: consider two orthogonal lines. Could they be the row space and the null space of a matrix? (The answer is NO: we would need $\dim C(A^T) + \dim N(A) = r + (3-r) = 3$, but two lines give only $1+1=2$.)
- The row space and the null space are orthogonal and, furthermore, their dimensions add up to $n$, the number of columns (equivalently, the length of each row). This is the dimension of the largest space the rows could span.
- The row space and the null space are orthogonal complements in $\mathbb{R}^n$.
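A numerical sketch of this orthogonality using SciPy's null_space; the matrix is a hypothetical example of mine:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank r = 1, n = 3 columns
N = null_space(A)                 # orthonormal basis of N(A), shape (3, 2)

print(N.shape[1])                 # 2 = n - r: dimensions add up to n
print(np.allclose(A @ N, 0.0))    # True: every row of A is orthogonal to N(A)
```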

Solving a system when there is no exact solution
- "Solve" $Ax=b$ when there is no exact solution, i.e., $b\notin C(A)$.
- We will realize, in subsequent sections, that there is a matrix which plays an important role in the solution of this problem: the matrix $A^TA$. It has the following properties:
  - It is square, of size $n\times n$ ($A$ is of size $m\times n$).
  - It is symmetric, because $(A^TA)^T = A^T(A^T)^T = A^TA$.
  - It is invertible if $A$ has $n$ independent columns, i.e., if the null space of $A$ is zero (proof follows later).
- Example 1: $A = \begin{pmatrix}1&1\\1&2\\1&5\end{pmatrix}$. Show that $A^TA$ is invertible.
- Example 2: $A = \begin{pmatrix}1&3&1\\1&3&3\end{pmatrix}$. Show that $A^TA$ is NOT invertible.
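The two examples, as reconstructed above from the garbled transcript, can be checked numerically (a NumPy sketch):

```python
import numpy as np

# Example 1: independent columns, so A^T A is invertible.
A1 = np.array([[1.0, 1.0],
               [1.0, 2.0],
               [1.0, 5.0]])
G1 = A1.T @ A1                            # [[3, 8], [8, 30]]
print(np.linalg.matrix_rank(G1) == 2)     # True: full rank, invertible

# Example 2: three columns in R^2 are necessarily dependent.
A2 = np.array([[1.0, 3.0, 1.0],
               [1.0, 3.0, 3.0]])
G2 = A2.T @ A2                            # 3 x 3, but rank at most 2
print(np.linalg.matrix_rank(G2) == 3)     # False: singular
```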

Projection matrix
PROBLEM: Find the projection of a vector $b$ onto the line through a vector $a$. We need this to solve the problem $Ax=b$, $b\notin C(A)$. (All vectors are assumed to be column vectors.)
- The projection is a point $p$ on the straight line formed by the vector $a$, such that $p = \hat{x}a$, where $\hat{x}$ is a scalar.
- $p$ is defined so that the error vector $e = b - p$ is orthogonal to the line formed by the vector $a$.
- From the inner product: $a^Te = a^T(b-p) = a^T(b-\hat{x}a) = 0$, so $\hat{x} = \dfrac{a^Tb}{a^Ta}$.
- Based on the above:
$$p = \frac{a^Tb}{a^Ta}\,a = \frac{aa^T}{a^Ta}\,b = Pb \quad \text{with} \quad P = \frac{aa^T}{a^Ta} \;\;\text{(observe the form of } P\text{)}.$$
- Note that $a^Ta$ is a scalar and $aa^T$ is a square matrix. $P$ is called the projection matrix.

Projection matrix properties
- What is the column space of $P$? It is the line formed by the vector $a$. The reason is that if $a = \begin{pmatrix}a_1\\a_2\end{pmatrix}$, then
$$P = \frac{1}{a_1^2 + a_2^2}\begin{pmatrix}a_1^2 & a_1a_2\\ a_1a_2 & a_2^2\end{pmatrix}$$
and therefore every vector in $C(P)$ has the form
$$c_1\begin{pmatrix}a_1^2\\a_1a_2\end{pmatrix} + c_2\begin{pmatrix}a_1a_2\\a_2^2\end{pmatrix} = (c_1a_1 + c_2a_2)\begin{pmatrix}a_1\\a_2\end{pmatrix} = c\begin{pmatrix}a_1\\a_2\end{pmatrix}.$$
- What is the rank of $P$? Obviously 1.
- What happens if I do the projection twice? Nothing should happen. Therefore, the condition $P^2 = P$ must hold. Verify this using the above $2\times 2$ matrix.
- If we double $b$, what happens to the projection? The projection is doubled too.
- If we double $a$, what happens to the projection? Nothing happens to the projection in that case.
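A brief NumPy sketch of these properties, for a hypothetical direction $a = (1, 2)^T$:

```python
import numpy as np

a = np.array([[1.0], [2.0]])          # hypothetical direction of the line
P = (a @ a.T) / (a.T @ a)             # P = a a^T / (a^T a)

print(np.linalg.matrix_rank(P))       # 1: C(P) is the line through a
print(np.allclose(P @ P, P))          # True: projecting twice changes nothing

b = np.array([[3.0], [1.0]])
print(np.allclose(P @ (2 * b), 2 * (P @ b)))   # True: doubling b doubles p

a2 = 2 * a                            # doubling a leaves P unchanged
P2 = (a2 @ a2.T) / (a2.T @ a2)
print(np.allclose(P, P2))             # True
```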

Solving Ax=b using projections
- Consider again the scenario where the system $Ax=b$ has no solution.
- Goal: Solve $A\hat{x} = p$ instead, where $p$ is the projection of $b$ onto the column space of $A$.
- As already mentioned, the projection $p$ of $b$ onto the column space of $A$ is found by forcing the error $e = b - p$ to be perpendicular to the column space of $A$.
- Picture this in $\mathbb{R}^3$: $b$ is projected onto the plane spanned by the columns of $A = \begin{pmatrix}a_1 & a_2\end{pmatrix}$, where all quantities are column vectors.

Projection error, solution and projection matrix
- The error $e$ is perpendicular to $a_1$ and $a_2$:
$$a_1^T(b - A\hat{x}) = 0 \quad \text{and} \quad a_2^T(b - A\hat{x}) = 0$$
$$\begin{pmatrix}a_1^T\\a_2^T\end{pmatrix}(b - A\hat{x}) = \mathbf{0} \;\Rightarrow\; A^T(b - A\hat{x}) = \mathbf{0}$$
- Question: Consider $A^T(b - A\hat{x}) = \mathbf{0}$. In which subspace does $e = b - A\hat{x}$ belong? Answer: Since $A^T(b - A\hat{x}) = \mathbf{0}$, we observe that $e\in N(A^T)$ (the left null space). Therefore $e$ is perpendicular to the column space of $A$.
- Solution of the "new" system: $A^TA\hat{x} = A^Tb$.
- If $A^TA$ is invertible, then: $\hat{x} = (A^TA)^{-1}A^Tb$.
- Projection: $p = A\hat{x} = A(A^TA)^{-1}A^Tb = Pb$.
- Projection matrix: $P = A(A^TA)^{-1}A^T$.
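A NumPy sketch of this recipe; the matrix $A$ and vector $b$ below are a hypothetical example of mine, not from the slides:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])                 # b is not in C(A)

x_hat = np.linalg.solve(A.T @ A, A.T @ b)     # normal equations A^T A x = A^T b
p = A @ x_hat                                 # projection of b onto C(A)
e = b - p                                     # error vector

print(np.allclose(A.T @ e, 0.0))              # True: e is in N(A^T)
P = A @ np.linalg.inv(A.T @ A) @ A.T          # projection matrix
print(np.allclose(P @ b, p))                  # True: p = P b
```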

Projection matrix
- Projection matrix: $P = A(A^TA)^{-1}A^T$.
- In general, $A$ is not square (it is rectangular) and therefore we cannot use the property $(A^TA)^{-1} = A^{-1}(A^T)^{-1}$.
- Question: If $A$ were a square and invertible matrix of size $n\times n$, what would $P$ be? Answer: In that case the column space of $A$ would be the entire $\mathbb{R}^n$, and therefore the vector $b$ would belong to $C(A)$. Hence the projection of any vector $b$ onto $C(A)$ would be the vector itself. This can also be verified by:
$$P = A(A^TA)^{-1}A^T = AA^{-1}(A^T)^{-1}A^T = I\cdot I = I$$
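A quick numerical confirmation, for a hypothetical square invertible $A$ of my choosing:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                     # square and invertible
P = A @ np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(P, np.eye(2)))               # True: C(A) = R^2, so P = I
```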

Projection matrix: Properties
Properties of $P$:
- Symmetric: $A^TA$ is symmetric and therefore $(A^TA)^{-1}$ is symmetric [to prove this we use the property $(A^{-1})^T = (A^T)^{-1}$]. Hence:
$$P^T = \big(A(A^TA)^{-1}A^T\big)^T = (A^T)^T\big((A^TA)^{-1}\big)^T A^T = A(A^TA)^{-1}A^T = P$$
- Idempotent, $P^2 = P$:
$$P^2 = A(A^TA)^{-1}A^T A(A^TA)^{-1}A^T = A(A^TA)^{-1}\big[(A^TA)(A^TA)^{-1}\big]A^T = A(A^TA)^{-1}A^T = P$$
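Both properties can be verified numerically for a rectangular $A$ (a hypothetical example of mine):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                     # rectangular, independent columns
P = A @ np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(P, P.T))                     # True: P is symmetric
print(np.allclose(P @ P, P))                   # True: P^2 = P
```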

Least Squares Minimization
Problem: Show that the proposed approach $A\hat{x} = p$ yields the same solution as looking for the $\hat{x}$ that minimizes the function $\|A\hat{x} - b\|^2 = \|e\|^2$.
Solution: We want to minimize
$$\|A\hat{x} - b\|^2 = \|e\|^2 = (A\hat{x} - b)^T(A\hat{x} - b).$$
This is a function of $\hat{x}$ only and therefore we can formulate the problem:
$$\min f(\hat{x}) = \min\big[(A\hat{x} - b)^T(A\hat{x} - b)\big]$$
A function is minimized at points where its first derivative is zero. Therefore, we look for the $\hat{x}$ that satisfies
$$\frac{\partial f(\hat{x})}{\partial \hat{x}} = \mathbf{0} \;\Rightarrow\; \frac{\partial}{\partial \hat{x}}\big[(A\hat{x} - b)^T(A\hat{x} - b)\big] = \mathbf{0}$$
Expanding:
$$(A\hat{x} - b)^T(A\hat{x} - b) = (\hat{x}^TA^T - b^T)(A\hat{x} - b) = \hat{x}^TA^TA\hat{x} - \hat{x}^TA^Tb - b^TA\hat{x} + b^Tb$$
$$\frac{\partial}{\partial \hat{x}}\big(\hat{x}^TA^TA\hat{x} - \hat{x}^TA^Tb - b^TA\hat{x} + b^Tb\big) = 2A^TA\hat{x} - A^Tb - A^Tb + \mathbf{0} = 2A^TA\hat{x} - 2A^Tb$$
Therefore, the derivative is zero if $A^TA\hat{x} = A^Tb$. This is the same equation as the previous solution (the normal equations). This approach is called Least Squares Minimization.
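Finally, a sketch comparing the normal-equations solution with NumPy's least-squares solver, using the same hypothetical data as in the earlier sketch:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

x_ne = np.linalg.solve(A.T @ A, A.T @ b)       # normal equations
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)   # library least-squares solver
print(np.allclose(x_ne, x_ls))                 # True: the same minimiser

f = lambda x: np.sum((A @ x - b) ** 2)         # the squared error ||Ax - b||^2
print(f(x_ne) <= f(x_ne + np.array([0.1, -0.05])))  # True: x_ne is a minimum
```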