Introduction to Matrices


Introduction to Matrices Douglas N. Greve greve@nmr.mgh.harvard.edu

Why Matrices?
Simplifies notation. Simplifies concepts. Grounding for the general linear model (GLM). Simplifies implementation (e.g., matlab, octave). Optimization.

What is the matrix?
A matrix is a table of numbers, like a spreadsheet. Each entry is called an "element", and there is a value in each element (no empty cells). Elements can be whole numbers, positive or negative, fractions, real, imaginary, or symbolic variables. Example: M = [2 4 y; p 3.3i -7.4].

Parts of Matrices
Size/Dimensions; Elements; Groupings of Elements (Diagonals, Triangles, Vectors).

Matrix Size/Dimension
Number of Rows by Number of Columns (Rows x Cols). Examples: [2 4 y; p 3.3i -7.4] is 2x3; [2 4 y] is 1x3; [2; p] is 2x1; [p] is 1x1.

Scalar versus Matrix
A scalar is just a number or value: no brackets, no dimension, no size. g = -1 is a scalar, not a 1x1. By contrast, g = [-1] is a 1x1 matrix.

Matrix Element
An element is indicated by its row and column number and holds a scalar value; write M(row,col), M(i,j), or Mij. For the 2x3 matrix M = [2 4 y; p 3.3i -7.4], the positions are (1,1) (1,2) (1,3); (2,1) (2,2) (2,3), so M(2,3) = -7.4.

Diagonals and Off-Diagonals
In the 3x3 matrix [2 4 6; 1 3 5; 7 8 9], each diagonal is the set of elements with Row = Column + Offset:
Row = Column - 2: second upper diagonal (6)
Row = Column - 1: first upper diagonal (4, 5)
Row = Column + 0: main diagonal (2, 3, 9)
Row = Column + 1: first lower diagonal (1, 8)
Row = Column + 2: second lower diagonal (7)
Everything except the main diagonal is an off-diagonal.

Triangles
In the 3x3 matrix [2 4 6; 1 3 5; 7 8 9], the upper triangle is the part on and above the main diagonal (2 4 6; 3 5; 9) and the lower triangle is the part on and below it (2; 1 3; 7 8 9).

Vector
A special type of matrix in which the number of rows or columns equals 1; vectors are often given lower-case variable names. [2 4 y] is a 1x3 row vector; [2; p] is a 2x1 column vector.

A matrix consists of vectors
The 2x3 matrix [2 4 y; p 3.3i -7.4] can be viewed either as two 1x3 row vectors or as three 2x1 column vectors.

Special Matrices Square Diagonal Identity Triangular Symmetric Toeplitz

Square Matrix
Rows = Columns. Examples: [p] is 1x1; [2 4; p 3.3i] is 2x2.

Diagonal Matrix
All off-diagonal elements are 0: M(i,j) = 0 if i != j. Example (3x3): [2 0 0; 0 3 0; 0 0 9].

Identity Matrix
A square matrix whose main diagonal is all 1s and whose off-diagonal elements are all 0; its symbol is I. Example (3x3): I = [1 0 0; 0 1 0; 0 0 1]. Any matrix multiplied by the identity is itself.

Triangular Matrices
Upper triangular: all values below the main diagonal are 0, e.g., [2 4 6; 0 3 5; 0 0 9] (3x3). Lower triangular: all values above the main diagonal are 0, e.g., [2 0 0; 1 3 0; 7 8 9] (3x3).

Symmetric Matrix
M(row,col) = M(col,row): the matrix reflects across its main diagonal, so it must be square. Example (3x3): [2 4 6; 4 3 5; 6 5 9].

Toeplitz Matrix
All elements on a given diagonal are the same. Examples: [2 4 6; 1 2 4; 7 1 2] (3x3) and [2 4; 1 2; 7 1] (3x2). A causal, time-invariant linear system is represented by an upper triangular Toeplitz matrix.

Ones Matrix
All elements are 1; any number of rows or columns. Examples: [1 1 1] (1x3), [1 1 1; 1 1 1] (2x3), [1; 1] (2x1), [1] (1x1).

Zeros Matrix
All elements are 0; any number of rows or columns. Examples: [0 0 0] (1x3), [0 0 0; 0 0 0] (2x3), [0; 0] (2x1), [0] (1x1).

Matrix Operations Addition/Subtraction Vector Multiplication Matrix Multiplication Multiplication by scalar Transpose Trace Inverse Pseudo Inverse

Matrix Addition/Subtraction
Element-by-element addition or subtraction; the sizes of the two matrices must be the same: C(row,col) = A(row,col) +/- B(row,col), or C = A + B (your first matrix equation!). Example: A = [2 4 6; 1 3 5] and B = [0 7 9; 1 -2 3.3] give C = A + B = [2 11 15; 2 1 8.3] (all 2x3).
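The slides point at matlab/octave for implementation; as a minimal Python sketch (the helper name `mat_add` is mine, not from the slides), element-by-element addition of the example above looks like:

```python
def mat_add(A, B):
    # Sizes must match: C(row,col) = A(row,col) + B(row,col)
    assert len(A) == len(B) and len(A[0]) == len(B[0])
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[2, 4, 6], [1, 3, 5]]
B = [[0, 7, 9], [1, -2, 3.3]]
C = mat_add(A, B)  # [2 11 15; 2 1 8.3]
```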

Vector Multiplication
One row vector times one column vector of the same length; the result is 1x1. Also called the inner product, dot product, or "scalar" product. Like a correlation: "multiply and accumulate", z = x.y = x*y = S xi*yi. Example: x = [2 4 6] (1x3) and y = [0; 1; 7] (3x1) give z = 2*0 + 4*1 + 6*7 = 0 + 4 + 42 = 46.
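A one-line Python sketch of the same "multiply and accumulate" (the helper name `dot` is mine):

```python
def dot(x, y):
    # Inner product of two equal-length vectors: z = sum(x_i * y_i)
    return sum(xi * yi for xi, yi in zip(x, y))

z = dot([2, 4, 6], [0, 1, 7])  # 2*0 + 4*1 + 6*7 = 46
```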

Matrix Multiplication
A series of vector multiplications: each element of C = A*B is the inner product of a row of A with a column of B, cij = (row i of A)*(col j of B). The "inner dimensions" must be the same: (2x3)*(3x4) = (2x4). Example, with A = [2 4 6; 1 3 5] (2x3) and B = [0 4 2 8; 1 5 6 2; 3 7 9 3] (3x4):
c11 = 2*0 + 4*1 + 6*3 = 22
c12 = 2*4 + 4*5 + 6*7 = 70
c13 = 2*2 + 4*6 + 6*9 = 82
c14 = 2*8 + 4*2 + 6*3 = 42
c21 = 1*0 + 3*1 + 5*3 = 18
c22 = 1*4 + 3*5 + 5*7 = 54
Continuing the same way, c23 = 65 and c24 = 29, so C = A*B = [22 70 82 42; 18 54 65 29] (2x4).
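The whole row-times-column recipe can be sketched in Python (the helper name `mat_mul` is mine); it reproduces the C worked out above:

```python
def mat_mul(A, B):
    # Inner dimensions must agree: (2x3)*(3x4) -> (2x4)
    assert len(A[0]) == len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 4, 6], [1, 3, 5]]
B = [[0, 4, 2, 8], [1, 5, 6, 2], [3, 7, 9, 3]]
C = mat_mul(A, B)  # [22 70 82 42; 18 54 65 29]
```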

Outer Product (Vector Multiplication)
One column vector times one row vector of the same length N; the result is an NxN matrix (a special case of matrix multiplication): z = y*x. Example: y = [0; 1; 7] (3x1) and x = [2 4 6] (1x3) give z = [0 0 0; 2 4 6; 14 28 42] (3x3).
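A Python sketch of the outer product, reusing the x and y vectors from the inner-product slide (the helper name `outer` is mine):

```python
def outer(y, x):
    # NxN result: z[i][j] = y[i] * x[j]
    return [[yi * xj for xj in x] for yi in y]

z = outer([0, 1, 7], [2, 4, 6])
```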

Multiplication by a Scalar
Multiply each element by the scalar value. The scalar has no brackets and no dimension (it is not a 1x1). Example: g = -1 and A = [2 4 6; 1 3 5] (2x3) give gA = [-2 -4 -6; -1 -3 -5] (2x3).

Matrix Transpose
"Reflect" across the main diagonal: B(row,col) = A(col,row), written B = AT = At = A'. Example: A = [2 4 6; 1 3 5] (2x3) gives A' = [2 1; 4 3; 6 5] (3x2). If A = A', then A is symmetric (it must be square).
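A Python sketch of the reflection B(row,col) = A(col,row) (the helper name `transpose` is mine):

```python
def transpose(A):
    # B(row,col) = A(col,row): zip(*A) groups the columns of A
    return [list(col) for col in zip(*A)]

A = [[2, 4, 6], [1, 3, 5]]
At = transpose(A)  # 3x2
```

Transposing twice returns the original matrix, as the reflection picture suggests.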

Trace
The sum of the diagonal elements: Trace = S M(i,i). Example: M = [2 4 6; 1 3 5; 7 8 9] has trace 2 + 3 + 9 = 14.

Determinant
A property of a square matrix. Complicated in general, but simple for a 2x2: for C = [c11 c12; c21 c22], D = c11*c22 - c12*c21.

Matrix Inverse
If C*A = I, then A = C-1 and C = A-1. The matrix must be square. Complicated in general, but simple for a 2x2: for C = [c11 c12; c21 c22], C-1 = (1/D)*[c22 -c12; -c21 c11], where D = c11*c22 - c12*c21.
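The 2x2 formula translates directly to Python (the helper name `inv2x2` and the example matrix are mine):

```python
def inv2x2(C):
    # C^-1 = (1/D) * [d -b; -c a], D = a*d - b*c (must be nonzero)
    (a, b), (c, d) = C
    D = a * d - b * c
    return [[d / D, -b / D], [-c / D, a / D]]

Cinv = inv2x2([[1.0, 2.0], [3.0, 4.0]])  # D = -2
```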

Invertibility
IMPORTANT!!! Not all matrices are invertible: if D = 0 the matrix is "singular". Example: C = [1.0 2.0; 0.5 1.0] (2x2) has D = 1.0*1.0 - 2.0*0.5 = 1 - 1 = 0, so C-1 does not exist.

Singularity and "Ill-Conditioned"
In the 2x2 matrix [1.0 2.0; 0.5 1.0], column 2 equals twice column 1: linear dependence, which is exactly why D = 1.0*1.0 - 2.0*0.5 = 0. "Ill-conditioned" means D is "close" to 0; this relates to the efficiency of a GLM.

Pseudo-Inverse
Can be used for a non-square X; the same rules on invertibility apply. It gives the least-mean-square (LMS) solution for an over-determined system.
If X has more rows than columns: X+ = (X'*X)-1*X'. Check: X+*X = (X'*X)-1*(X'*X) = I.
If X has more columns than rows: X+ = X'*(X*X')-1. Check: X*X+ = (X*X')*(X*X')-1 = I.
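A Python sketch of X+ = (X'*X)-1*X' for a tall X; the 3x2 example matrix and helper names are mine, and the check confirms X+*X comes out as the identity:

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2x2(C):
    (a, b), (c, d) = C
    D = a * d - b * c
    return [[d / D, -b / D], [-c / D, a / D]]

X = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]   # more rows than columns
Xt = transpose(X)
Xplus = mat_mul(inv2x2(mat_mul(Xt, X)), Xt)  # (X'X)^-1 X'
I = mat_mul(Xplus, X)                        # should be the 2x2 identity
```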

Matrix Applications

Sum/Length/Average of a List
Let y = [1; 2; 3] (3x1), and let x = [1; 1; 1] (3x1) be a "ones vector", so x' = [1 1 1] (1x3).
sum(y) = x'*y = 1*1 + 1*2 + 1*3 = 6
length(y) = x'*x = 1*1 + 1*1 + 1*1 = 3
Average m = (x'*x)-1*(x'*y) = (1/3)*6 = 2 [Note: GLM]
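The same ones-vector trick in a few lines of Python (variable names are mine):

```python
y = [1, 2, 3]
x = [1, 1, 1]                                  # "ones vector"
total = sum(xi * yi for xi, yi in zip(x, y))   # x'*y = 6
count = sum(xi * xi for xi in x)               # x'*x = 3
m = total / count                              # (x'*x)^-1 * (x'*y) = 2.0
```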

Selective Average
Same y = [1; 2; 3], but now x' = [0 1 1] (a selection vector rather than a ones vector), which picks out only the last two entries:
sum = x'*y = 0*1 + 1*2 + 1*3 = 5
count = x'*x = 0*0 + 1*1 + 1*1 = 2
Average m = (x'*x)-1*(x'*y) = (1/2)*5 = 2.5

Sum of the Squares of a List
For y = [1; 2; 3] (3x1), y' = [1 2 3] (1x3), so SS = S(y2) = y'*y = 1*1 + 2*2 + 3*3 = 14.

Variance of a List
For y = [1; 2; 3] (3x1) and ones vector x = [1; 1; 1] (3x1): s2 = S(yi-m)2 / (N-1). With the residual e = y - m*x, this is s2 = (e'*e)/DOF, where DOF = x'*x - 1 = #RowsX - #ColsX.
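The residual-based variance formula, sketched in Python for the y of the previous slides (variable names are mine):

```python
y = [1, 2, 3]
n = len(y)
m = sum(y) / n                        # mean = 2.0
e = [yi - m for yi in y]              # residual e = y - m*x
dof = n - 1                           # DOF = x'*x - 1
s2 = sum(ei * ei for ei in e) / dof   # (e'*e)/DOF
```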

Orthogonality
A property of two vectors: their vector product is 0, z = x*y = S xi*yi = 0. Example: x = [1 -1 5] (1x3) and y = [2; 3; .2] (3x1) give 1*2 + -1*3 + 5*.2 = 2 - 3 + 1 = [0] (1x1). Uncorrelatedness/independence is the related statistical property.

Fitting a Line
[Figure: scatter plot of a dependent variable, the measurement (e.g., thickness, HRF amplitude, IQ, height, weight), against an independent variable (e.g., age), with Subject 1 at (x1, y1) and Subject 2 at (x2, y2).]

Linear Model
A system of linear equations with intercept ("offset") b and slope m:
y1 = 1*b + x1*m
y2 = 1*b + x2*m
Two equations, two unknowns: solve the 1st equation for b, substitute into the 2nd, solve for m, then substitute m back into the equation for b:
m = (y2-y1)/(x2-x1)
b = (x2*y1 - x1*y2)/(x2-x1)

Matrix Formulation/GLM
The two equations y1 = 1*b + x1*m and y2 = 1*b + x2*m become
[y1; y2] = [1 x1; 1 x2] * [b; m]
(2x1) = (2x2) * (2x1), i.e., y = X*b, where b is the parameter vector and X is the design matrix.

Matrix Formulation/GLM (continued)
In y = X*b, the number of columns of X equals the number of parameters: each parameter has a column, and each column means something. The parameters take their meaning from the columns of X = [1 x1; 1 x2]: the ones column carries the intercept b, and the [x1; x2] column carries the slope m.

GLM Solution
Start from y = X*b and multiply both sides by X-1: b = X-1*y. For X = [1 x1; 1 x2], X-1 = (1/D)*[x2 -x1; -1 1], where D = x2 - x1. Non-invertible if x1 = x2; ill-conditioned if x1 is near x2 (sensitive to noise).
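A Python sketch of b = X-1*y for two points; the points (1, 2) and (3, 6) are an assumed example (they lie on y = 2*x, so the fit should return intercept 0 and slope 2):

```python
x1, x2 = 1.0, 3.0
y1, y2 = 2.0, 6.0
D = x2 - x1                                   # determinant of X = [1 x1; 1 x2]
Xinv = [[x2 / D, -x1 / D], [-1 / D, 1 / D]]   # X^-1 = (1/D)*[x2 -x1; -1 1]
b = [Xinv[0][0] * y1 + Xinv[0][1] * y2,       # intercept
     Xinv[1][0] * y1 + Xinv[1][1] * y2]       # slope
```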

Non-invertibility
Suppose x1 = x2 = 0.1, so X = [1 .1; 1 .1] and y = [1.2; 1.2]. We want to find b in y = X*b, but both b = [1; 2] and b = [.2; 10] satisfy the equations: b is not unique!

More than Two Points
y1 = 1*b + x1*m
y2 = 1*b + x2*m
y3 = 1*b + x3*m
Three equations, two unknowns: the system is over-determined. X still has two columns with the same meaning, but it is no longer square, so use the pseudo-inverse X+ (the LMS solution): multiply both sides of y = X*b by X+ to get b = X+*y = (X'X)-1X'y. The same invertibility concerns apply, and now we can measure noise (DOF != 0).
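A Python sketch of the LMS fit for three points via the normal equations (X'X)*b = X'y; the points (1,2), (2,4), (3,6) are an assumed example lying exactly on y = 2*x:

```python
X = [[1, 1], [1, 2], [1, 3]]   # ones column + x column
y = [2, 4, 6]
# Form X'X (2x2) and X'y (2x1), then solve the 2x2 system directly.
XtX = [[sum(r[i] * r[j] for r in X) for j in range(2)] for i in range(2)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(2)]
D = XtX[0][0] * XtX[1][1] - XtX[0][1] * XtX[1][0]
b = [(XtX[1][1] * Xty[0] - XtX[0][1] * Xty[1]) / D,   # intercept
     (XtX[0][0] * Xty[1] - XtX[1][0] * Xty[0]) / D]   # slope
```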

Matrix Decomposition/Factorization
Break up a single matrix into several matrices that are multiplied together:
Eigen system
Singular Value Decomposition (SVD) = Principal Component Analysis (PCA)
Cholesky decomposition - the "square root" of a matrix
Independent Component Analysis (ICA)

Eigen Decomposition
A*v = λ*v and A = Q*Λ*Q-1, where A is a square matrix, v an eigenvector, λ an eigenvalue, Q the matrix of eigenvectors, and Λ the diagonal matrix of eigenvalues.
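For a 2x2 symmetric matrix the eigenvalues can be found by hand from the characteristic polynomial λ^2 - trace*λ + det = 0; a Python sketch (the matrix A and the names here are an assumed example, not from the slides):

```python
import math

A = [[2.0, 1.0], [1.0, 2.0]]                  # symmetric 2x2
tr = A[0][0] + A[1][1]                        # trace = sum of eigenvalues
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # det = product of eigenvalues
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
```

As a check of A*v = λ*v: v = [1; 1] is an eigenvector of this A, since A*v = [3; 3] = 3*v.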