Boot Camp in Linear Algebra TIM 209 Prof. Ram Akella
Matrices
A matrix is a rectangular array of numbers (also called scalars), written between square brackets, as in the 2 × 3 matrix
  A = [ 1  2  3
        4  5  6 ]
Vectors
A vector is defined as a matrix with only one row or one column:
  Row vector: x = [x1 x2 ... xn]
  Column vector (or simply "vector"): x = [x1 x2 ... xn]^T
Zero and identity matrices
The zero matrix (of size m × n) is the matrix with all entries equal to zero. An identity matrix is always square; its diagonal entries are all equal to one and all other entries are zero. Identity matrices are denoted by the letter I.
Vector Operations
The inner product (a.k.a. dot product or scalar product) of two vectors x and y is defined by
  x^T y = x1 y1 + x2 y2 + ... + xn yn
The magnitude (Euclidean norm) of a vector is
  |x| = sqrt(x^T x)
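The two definitions above can be checked numerically; this is a minimal NumPy sketch (NumPy and the example vectors are my additions, not part of the slides):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([3.0, 0.0, 4.0])

inner = x @ y              # x^T y = 1*3 + 2*0 + 2*4 = 11
mag_x = np.sqrt(x @ x)     # |x| = sqrt(1 + 4 + 4) = 3
```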
Vector Operations
The projection of vector y onto vector x is
  proj_x(y) = (x^T y / |x|^2) x = (x^T y / |x|) u_x
where vector u_x = x / |x| has unit magnitude and the same direction as x.
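As a quick sanity check of the projection formula (the vectors here are illustrative, chosen so the result is easy to verify by hand):

```python
import numpy as np

x = np.array([3.0, 0.0])
y = np.array([2.0, 5.0])

u_x = x / np.linalg.norm(x)        # unit vector along x
proj = (x @ y) / (x @ x) * x       # projection of y onto x: keeps only the
                                   # component of y along x, here (2, 0)
```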
Vector Operations
The angle between vectors x and y is given by
  cos θ = x^T y / (|x| |y|)
Two vectors x and y are said to be
  orthogonal if x^T y = 0
  orthonormal if x^T y = 0 and |x| = |y| = 1
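The angle formula and the orthogonality test can be sketched as follows (example vectors are my own, chosen to be perpendicular):

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([0.0, 2.0])

cos_theta = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
theta = np.arccos(cos_theta)          # angle in radians; pi/2 here

orthogonal = bool(np.isclose(x @ y, 0.0))   # x^T y = 0
```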
Vector Operations
A set of vectors x1, x2, ..., xn is said to be linearly dependent if there exists a set of coefficients a1, a2, ..., an (at least one nonzero) such that
  a1 x1 + a2 x2 + ... + an xn = 0
A set of vectors x1, x2, ..., xn is said to be linearly independent if
  a1 x1 + a2 x2 + ... + an xn = 0  implies  a1 = a2 = ... = an = 0
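In practice, linear independence is usually tested via the matrix rank (a concept defined on a later slide): n vectors are independent iff the matrix whose rows are those vectors has rank n. A small illustrative sketch, with a deliberately dependent third vector:

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = v1 + 2 * v2                   # dependent: a1=1, a2=2, a3=-1 gives zero

rank = np.linalg.matrix_rank(np.vstack([v1, v2, v3]))
independent = (rank == 3)          # False: rank is only 2
```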
Matrix Operations
Matrix transpose: if A is an m × n matrix, its transpose, denoted A^T, is the n × m matrix given by (A^T)ij = Aji. For example,
  [1 2; 3 4; 5 6]^T = [1 3 5; 2 4 6]
Matrix Operations
Matrix addition: two matrices of the same size can be added together, to form another matrix of the same size, by adding the corresponding entries:
  (A + B)ij = Aij + Bij
Matrix Operations
Scalar multiplication: a matrix is multiplied by a scalar (i.e., a number) by multiplying every entry of the matrix by the scalar:
  (αA)ij = α Aij
Matrix Operations
Matrix multiplication: you can multiply two matrices A and B provided their dimensions are compatible, which means the number of columns of A equals the number of rows of B. Suppose that A has size m × p and B has size p × n. The product matrix C = AB, which has size m × n, is defined by
  Cij = Σ_{k=1..p} Aik Bkj
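The entry-wise definition above can be written out explicitly and compared against NumPy's built-in product (the matrices are illustrative):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # m x p = 2 x 3
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])             # p x n = 3 x 2

m, p = A.shape
_, n = B.shape

# C[i, j] = sum over k of A[i, k] * B[k, j]
C = np.zeros((m, n), dtype=int)
for i in range(m):
    for j in range(n):
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(p))
```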
Matrix Operations
The trace of a square matrix A (d × d) is the sum of its diagonal elements:
  tr(A) = Σ_{i=1..d} Aii
The rank of a matrix is the number of linearly independent rows (or columns).
A square matrix is said to be non-singular if and only if its rank equals the number of rows (or columns).
A non-singular matrix has a non-zero determinant.
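A short sketch tying these quantities together for one hypothetical matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

trace = np.trace(A)                 # 2 + 3 = 5
rank = np.linalg.matrix_rank(A)     # 2: full rank, so A is non-singular
det = np.linalg.det(A)              # 2*3 - 1*1 = 5, non-zero as expected
```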
Matrix Operations
A square matrix A is said to be orthonormal if AA^T = A^T A = I.
For a square matrix A:
  if x^T A x > 0 for all x ≠ 0, then A is said to be positive-definite;
  if x^T A x ≥ 0 for all x ≠ 0, then A is said to be positive-semidefinite (e.g., a covariance matrix).
Matrix inverse
If A is square, and there is a matrix F such that FA = I, then we say that A is invertible or non-singular. We call F the inverse of A, and denote it A^-1. We can then also define A^-k = (A^-1)^k. If a matrix is not invertible, we say it is singular or non-invertible.
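A minimal check of the defining property FA = I for an invertible example matrix (the matrix is my own illustration):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])          # det = 24 - 14 = 10, so A is invertible

A_inv = np.linalg.inv(A)
# A_inv plays the role of F: multiplying back recovers the identity
```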
Matrix Operations
The pseudo-inverse matrix A† is typically used whenever A^-1 does not exist (because A is not square, or A is singular). When A has linearly independent columns it is given by
  A† = (A^T A)^-1 A^T
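A sketch for a tall (non-square) matrix with full column rank, where the formula above applies; NumPy's `pinv` computes the pseudo-inverse in general via the SVD:

```python
import numpy as np

# A is 3 x 2, so A^-1 does not exist, but the pseudo-inverse does.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

A_pinv = np.linalg.pinv(A)
# For full column rank, pinv coincides with the left inverse (A^T A)^-1 A^T,
# and A_pinv @ A recovers the identity.
```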
Matrix Operations
The n-dimensional space in which all n-dimensional vectors reside is called a vector space.
A set of vectors {u1, u2, ..., un} is said to form a basis for a vector space if any arbitrary vector x can be represented as a linear combination of the {ui}:
  x = a1 u1 + a2 u2 + ... + an un
Matrix Operations
The coefficients {a1, a2, ..., an} are called the components of vector x with respect to the basis {ui}.
In order to form a basis, it is necessary and sufficient that the {ui} be linearly independent.
Matrix Operations
A basis {ui} is said to be orthogonal if
  ui^T uj = 0 for all i ≠ j
A basis {ui} is said to be orthonormal if, in addition, every vector has unit magnitude:
  ui^T uj = 1 if i = j, and 0 otherwise
Linear Transformations
A linear transformation is a mapping from a vector space X^N to a vector space Y^M, and is represented by an M × N matrix A.
Given a vector x ∈ X^N, the corresponding vector y ∈ Y^M is computed as
  y = Ax
A linear transformation represented by a square matrix A is said to be orthonormal when AA^T = A^T A = I.
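The mapping y = Ax can be sketched with a hypothetical 2 × 3 matrix, taking a vector in R^3 to a vector in R^2:

```python
import numpy as np

# A maps R^3 (N = 3) to R^2 (M = 2)
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
x = np.array([1.0, 2.0, 3.0])

y = A @ x                          # y = Ax, a vector in R^2
```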
Eigenvectors and Eigenvalues
Let A be any square matrix. A scalar λ is called an eigenvalue of A if there exists a non-zero vector v such that
  Av = λv
Any vector v satisfying this relation is called an eigenvector of A belonging to the eigenvalue λ.
How to compute the Eigenvalues and the Eigenvectors
1. Find the characteristic polynomial Δ(t) of A.
2. Find the roots of Δ(t) to obtain the eigenvalues of A.
3. Repeat (a) and (b) for each eigenvalue λ of A:
   a. Form the matrix M = A − λI by subtracting λ down the diagonal of A.
   b. Find a basis for the solution space of the homogeneous system MX = 0. (These basis vectors are linearly independent eigenvectors of A belonging to λ.)
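The steps above can be sketched numerically. The matrix here is my own illustration, chosen so its eigenvalues are 5 and −2, matching the worked example that follows:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [3.0, -1.0]])

# Steps 1-2: the eigenvalues are the roots of the characteristic polynomial.
coeffs = np.poly(A)                # coefficients of det(tI - A): t^2 - 3t - 10
eigvals = np.roots(coeffs)         # roots: 5 and -2

# Step 3: for eigenvalue lam, eigenvectors solve (A - lam*I) v = 0.
lam = 5.0
M = A - lam * np.eye(2)
v = np.array([2.0, 1.0])           # a non-zero solution of M v = 0
```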
Example
We have a 2 × 2 matrix A. The characteristic polynomial Δ(t) of A is computed; we have
  Δ(t) = t^2 − 3t − 10 = (t − 5)(t + 2)
Example
Set Δ(t) = (t − 5)(t + 2) = 0. The roots λ₁ = 5 and λ₂ = −2 are the eigenvalues of A.
We find an eigenvector v₁ of A belonging to the eigenvalue λ₁ = 5 by solving the homogeneous system (A − 5I)v = 0.
Example
We find the eigenvector v₂ of A belonging to the eigenvalue λ₂ = −2 by solving (A + 2I)v = 0. The system has only one independent solution, so v₂ = (−1, 3).
Properties of the Eigenvalues
The product of the eigenvalues is equal to the determinant of A.
The sum of the eigenvalues is equal to the trace of A.
If the eigenvalues of A are λi, and A is invertible, then the eigenvalues of A^-1 are simply λi^-1.
If the eigenvalues of A are λi, then the eigenvalues of f(A) are simply f(λi), for any holomorphic function f.
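The first three properties can be verified on a small hypothetical matrix (upper triangular, so its eigenvalues are easy to read off the diagonal):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])         # eigenvalues 2 and 3

eigvals = np.linalg.eigvals(A)

prod_ok = np.isclose(np.prod(eigvals), np.linalg.det(A))   # product = det = 6
sum_ok = np.isclose(np.sum(eigvals), np.trace(A))          # sum = trace = 5
inv_eigs = np.linalg.eigvals(np.linalg.inv(A))             # 1/2 and 1/3
```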
Properties of the eigenvectors
The eigenvectors of A^-1 are the same as the eigenvectors of A.
The eigenvectors of f(A) are the same as the eigenvectors of A.
If A is (real) symmetric, then N_v = N (A has N linearly independent eigenvectors); the eigenvectors are real, mutually orthogonal, and provide a basis for R^n.
Properties of the eigendecomposition
A can be eigendecomposed if and only if N_v = N, i.e., A has N linearly independent eigenvectors.
If p(λ) has no repeated roots, i.e., N_λ = N, then A can be eigendecomposed.
The statement "A can be eigendecomposed" does not imply that A has an inverse.
The statement "A has an inverse" does not imply that A can be eigendecomposed.
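The last point has a classic counterexample (my own illustration, not from the slides): an invertible matrix that cannot be eigendecomposed because its repeated eigenvalue has too small an eigenspace.

```python
import numpy as np

# A = [[1, 1], [0, 1]] has det = 1, so it is invertible, but its only
# eigenvalue (1) is repeated and its eigenspace is one-dimensional,
# so N_v = 1 < N = 2 and A has no eigendecomposition.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

det = np.linalg.det(A)
eigvals = np.linalg.eigvals(A)

# Eigenspace dimension for eigenvalue 1 is N minus the rank of (A - 1*I).
eigenspace_dim = 2 - np.linalg.matrix_rank(A - np.eye(2))
```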