Boot Camp in Linear Algebra TIM 209 Prof. Ram Akella

Matrices  A matrix is a rectangular array of numbers (also called scalars), written between square brackets, as in

Vectors  A vector is defined as a matrix with only one column or row Row vector Column vector or vector

Zero and identity matrices
- The zero matrix (of size m x n) is the matrix with all entries equal to zero.
- An identity matrix is always square; its diagonal entries are all equal to one and its off-diagonal entries are zero. Identity matrices are denoted by the letter I.
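A minimal NumPy sketch of these objects (the specific sizes and entries are illustrative choices, not from the slides):

import numpy as np

A = np.array([[0, 1, -2],
              [1, 3,  4]])          # a 2 x 3 matrix
row = np.array([[1, 2, 3]])         # 1 x 3 row vector
col = np.array([[1], [2], [3]])     # 3 x 1 column vector
Z = np.zeros((2, 3))                # zero matrix of size 2 x 3
I = np.eye(3)                       # 3 x 3 identity matrix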

Vector Operations  The inner product (a.k.a. dot product or scalar product) of two vectors is defined by  The magnitude of a vector is

Vector Operations  The projection of vector y onto vector x is where vector ux has unit magnitude and the same direction as x

Vector Operations  The angle between vectors x and y is  Two vectors x and y are said to be orthogonal if x T y=0 orthonormal if x T y=0 and |x|=|y|=1

Vector Operations  A set of vectors x 1, x 2, …, x n are said to be linearly dependent if there exists a set of coefficients a1, a2, …, an (at least one different than zero) such that  A set of vectors x 1, x 2, …, x n are said to be linearly independent if

Matrix Operations
Matrix transpose
- If A is an m x n matrix, its transpose, denoted A^T, is the n x m matrix given by (A^T)_ij = A_ji. For example,

    A = [ 0  4 ]        A^T = [ 0  7  3 ]
        [ 7  0 ]              [ 4  0  1 ]
        [ 3  1 ]

Matrix Operations
Matrix addition
- Two matrices of the same size can be added together, to form another matrix of the same size, by adding the corresponding entries.

Matrix Operations
Scalar multiplication
- The multiplication of a matrix by a scalar (i.e., a number) is done by multiplying every entry of the matrix by the scalar.

Matrix Operations
Matrix multiplication
- You can multiply two matrices A and B provided their dimensions are compatible, which means the number of columns of A equals the number of rows of B. Suppose that A has size m x p and B has size p x n. The product matrix C = AB, which has size m x n, is defined by
  C_ij = A_i1 B_1j + A_i2 B_2j + ... + A_ip B_pj.
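A shape-compatibility check in NumPy (the matrix contents are arbitrary):

import numpy as np

A = np.arange(6).reshape(2, 3)    # 2 x 3
B = np.arange(12).reshape(3, 4)   # 3 x 4: columns of A (3) match rows of B (3)
C = A @ B                         # product C = AB
print(C.shape)                    # (2, 4), i.e. m x n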

Matrix Operations  The trace of a square matrix A d×d is the sum of its diagonal elements  The rank of a matrix is the number of linearly independent rows (or columns) ‏  A square matrix is said to be non-singular if and only if its rank equals the number of rows  (or columns) ‏ A non-singular matrix has a non-zero determinant

Matrix Operations  A square matrix is said to be orthonormal if AA T =A T A=I  For a square matrix A if x T Ax>0 for all x≠0, then A is said to be positive-definite (i.e., the covariance matrix) ‏ if x T Ax≥0 for all x≠0, then A is said to be positive-semidefinite

Matrix inverse
- If A is square, and there is a matrix F such that FA = I, then we say that A is invertible or non-singular.
- We call F the inverse of A, and denote it A^-1. We can then also define A^-k = (A^-1)^k. If a matrix is not invertible, we say it is singular or non-invertible.

Matrix Operations  The pseudo-inverse matrix A† is typically used whenever A-1 does not exist (because A is not square or A is singular):

Matrix Operations  The n-dimensional space in which all the n- dimensional vectors reside is called a vector space  A set of vectors {u1, u2,... un} is said to form a basis for a vector space if any arbitrary vector x can be represented by a linear combination of the {ui}

Matrix Operations  The coefficients {a1, a2,... an} are called the components of vector x with respect to the basis {ui}  In order to form a basis, it is necessary and sufficient that the {ui} vectors are linearly independent

Matrix Operations  A basis {ui} is said to be orthogonal if  A basis {ui} is said to be orthonormal if

Linear Transformations
- A linear transformation is a mapping from a vector space X^N onto a vector space Y^M, and is represented by a matrix.
- Given a vector x in X^N, the corresponding vector y in Y^M is computed as y = A x.
- A linear transformation represented by a square matrix A is said to be orthonormal when A A^T = A^T A = I.
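A one-line illustration of y = A x in NumPy (the matrix and vector are arbitrary):

import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])   # maps X^3 to Y^2 (A is M x N = 2 x 3)
x = np.array([1.0, 2.0, 3.0])
y = A @ x                         # y = A x
print(y)                          # [7. 5.]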

Eigenvectors and Eigenvalues
- Let A be any square matrix. A scalar λ is called an eigenvalue of A if there exists a non-zero vector v such that: Av = λv.
- Any vector v satisfying this relation is called an eigenvector of A belonging to the eigenvalue λ.

How to compute the Eigenvalues and the Eigenvectors
1. Find the characteristic polynomial Δ(t) of A.
2. Find the roots of Δ(t) to obtain the eigenvalues of A.
3. Repeat (a) and (b) for each eigenvalue λ of A:
   a. Form the matrix M = A - λI by subtracting λ down the diagonal of A.
   b. Find a basis for the solution space of the homogeneous system MX = 0. (These basis vectors are linearly independent eigenvectors of A belonging to λ.)
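In practice the whole procedure is a couple of library calls; a NumPy sketch, applied to the 2 x 2 matrix used in the worked example that follows:

import numpy as np

A = np.array([[4.0, 2.0],
              [3.0, -1.0]])
coeffs = np.poly(A)                        # characteristic polynomial coefficients
print(coeffs)                              # [ 1. -3. -10.]  i.e.  t^2 - 3t - 10
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                         # [ 5. -2.]  (order not guaranteed)
print(eigenvectors)                        # columns are unit-norm eigenvectors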

Example  We have a matrix  The characteristic polynomial (t) of A is computed. We have

Example  Set (t)=(t-5)(t+2)=0. The roots 1 =5 and 2 =-2 are the eigenvalues of A.  We find an eigenvector v 1 of A belonging to the eigenvalue 1 =5

Example  We find the eigenvector v 2 of A belonging to the eigenvalue 2 =-2  The system has only one independent solution then v 2 =(-1,3) ‏

Properties of the Eigenvalues
- The product of the eigenvalues is equal to the determinant of A.
- The sum of the eigenvalues is equal to the trace of A.
- If the eigenvalues of A are λ_i, and A is invertible, then the eigenvalues of A^-1 are simply λ_i^-1.
- If the eigenvalues of A are λ_i, then the eigenvalues of f(A) are simply f(λ_i), for any holomorphic function f.
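The first three properties are easy to verify numerically (reusing the example matrix from above):

import numpy as np

A = np.array([[4.0, 2.0],
              [3.0, -1.0]])
lam = np.linalg.eigvals(A)                            # [ 5., -2.] (order not guaranteed)
print(np.isclose(np.prod(lam), np.linalg.det(A)))     # True: product = det(A) = -10
print(np.isclose(np.sum(lam), np.trace(A)))           # True: sum = trace(A) = 3
inv_lam = np.linalg.eigvals(np.linalg.inv(A))
print(np.allclose(np.sort(inv_lam), np.sort(1/lam)))  # True: eigenvalues of A^-1 are 1/λ_i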

Properties of the eigenvectors
- The eigenvectors of A^-1 are the same as the eigenvectors of A.
- The eigenvectors of f(A) are the same as the eigenvectors of A.
- If A is (real) symmetric, then N_v = N (the number of linearly independent eigenvectors equals the matrix dimension); the eigenvectors are real, mutually orthogonal, and provide a basis for R^N.

Properties of the eigendecomposition
- A can be eigendecomposed if and only if N_v = N.
- If p(λ) has no repeated roots, i.e. N_λ = N (the number of distinct eigenvalues equals N), then A can be eigendecomposed.
- The statement "A can be eigendecomposed" does not imply that A has an inverse.
- The statement "A has an inverse" does not imply that A can be eigendecomposed.