Introduction to Linear Algebra Mark Goldman Emily Mackevicius

Outline
1. Matrix arithmetic
2. Matrix properties
3. Eigenvectors & eigenvalues
-BREAK-
4. Examples (on blackboard)
5. Recap, additional matrix properties, SVD

Part 1: Matrix Arithmetic (w/applications to neural networks)

Matrix addition

Scalar times vector
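A minimal MATLAB sketch of these two operations (the values are hypothetical):

  % Matrix addition is element-wise; scalar multiplication scales every entry
  A = [1 2; 3 4];
  B = [10 20; 30 40];
  A + B         % = [11 22; 33 44]
  3 * [1; 2]    % = [3; 6]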

Product of 2 Vectors. Three ways to multiply: element-by-element, inner product, outer product.

Element-by-element product (Hadamard product): element-wise multiplication (.* in MATLAB).
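A quick sketch in MATLAB (hypothetical values):

  % Hadamard (element-by-element) product
  a = [1 2 3];
  b = [4 5 6];
  a .* b        % = [4 10 18]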

Multiplication: Dot product (inner product)

(1 × N) times (N × 1) gives (1 × 1). The inner dimensions must agree (MATLAB: 'inner matrix dimensions must agree'); the outer dimensions give the size of the resulting matrix.
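A sketch of the dimension rule in MATLAB (hypothetical vectors):

  a = [2 1];     % 1 x 2 row vector
  b = [1; 3];    % 2 x 1 column vector
  a * b          % (1 x 2)(2 x 1) -> 1 x 1 scalar, here 5
  % dot(a, b) returns the same value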

Dot product geometric intuition: the "overlap" of 2 vectors, a·b = |a| |b| cos θ, where θ is the angle between them.

Example: linear feed-forward network. Input neurons fire at rates r_1, r_2, …, r_n; each input connects to the output neuron with a synaptic weight, and the output neuron's firing rate is the weighted sum (dot product) of the input rates.

Example: linear feed-forward network. Insight: for a given input (L2) magnitude, the response is maximized when the input is parallel to the weight vector. Receptive fields can also be thought of this way.
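A minimal sketch of this network in MATLAB (the weights and rates are hypothetical):

  w = [0.5 1.0 2.0];   % synaptic weights onto the output neuron
  r = [3 1 2];         % input firing rates
  y = dot(w, r)        % output rate = inner product of weights and rates
  % Among inputs with the same norm(r), y is largest when r is parallel to w.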

Multiplication: Outer product. (N × 1) times (1 × M) gives (N × M).

Note: in an outer product, each column is a multiple of the other columns, and each row is a multiple of the other rows.
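A sketch in MATLAB (hypothetical vectors):

  u = [1; 2; 3];   % 3 x 1
  v = [4 5];       % 1 x 2
  u * v            % 3 x 2 outer product; column 2 is 5/4 times column 1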

Matrix times vector

(M × N) times (N × 1) gives (M × 1).

Matrix times vector: inner product interpretation. Rule: the i-th element of y is the dot product of the i-th row of W with x.

Matrix times vector: outer product interpretation. The product is a weighted sum of the columns of W, weighted by the entries of x. (See the sketch below.)
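A sketch contrasting the two interpretations, with hypothetical numbers:

  W = [1 2; 3 4; 5 6];            % 3 x 2
  x = [10; 1];                    % 2 x 1
  y = W * x                       % = [12; 34; 56]
  % inner product view: y(i) = dot(W(i,:), x)
  % outer product view: weighted sum of the columns of W
  x(1)*W(:,1) + x(2)*W(:,2)       % same result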

Example of the outer product method. Take M with columns (0,2) and (3,1) (read off the slide figures), applied to x = (2,1): Mx = 2·(0,2) + 1·(3,1) = (0,4) + (3,1) = (3,5). Note: different combinations of the columns of M can give you any vector in the plane (we say the columns of M "span" the plane).
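The same computation in MATLAB (the matrix and vector are inferred from the slide figures):

  M = [0 3; 2 1];              % columns (0,2) and (3,1)
  x = [2; 1];
  M * x                        % = (3,5)
  x(1)*M(:,1) + x(2)*M(:,2)    % same: 2*(0,2) + 1*(3,1)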

Rank of a Matrix Are there special matrices whose columns don’t span the full plane?

Rank of a Matrix. Yes: take the matrix with columns (1,2) and (-2,-4). You can only get output vectors along the (1,2) direction (i.e. outputs live in 1 dimension), so we call the matrix rank 1.
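A quick check in MATLAB:

  M = [1 -2; 2 -4];   % columns (1,2) and (-2,-4)
  rank(M)             % = 1: all outputs lie along the (1,2) direction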

Example: 2-layer linear network. W_ij is the connection strength (weight) onto neuron y_i from neuron x_j.

Example: 2-layer linear network: inner product point of view. What is the response of cell y_i of the second layer? The response is the dot product of the i-th row of W with the vector x.

Example: 2-layer linear network: outer product point of view. How does cell x_j contribute to the pattern of firing of layer 2? The contribution of x_j to the network output is x_j times the j-th column of W.
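A minimal sketch of both points of view (the weights and rates are hypothetical):

  W = [0.1 0.4; 0.2 0.5; 0.3 0.6];   % W(i,j): weight onto y_i from x_j
  x = [2; 1];                        % layer-1 firing rates
  y = W * x                          % layer-2 firing rates
  dot(W(2,:), x)                     % response of y_2 (inner product view)
  x(1) * W(:,1)                      % contribution of x_1 (outer product view)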

Product of 2 Matrices. (N × P) times (P × M) gives (N × M); the inner dimensions must agree (MATLAB: 'inner matrix dimensions must agree'). Note: matrix multiplication doesn't (generally) commute: AB ≠ BA.

Matrix times Matrix: by inner products. C_ij is the inner product of the i-th row of A with the j-th column of B.

Matrix times Matrix: by outer products

C is a sum of outer products of the columns of A with the rows of B
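A sketch verifying both views of the product (hypothetical matrices):

  A = [1 2; 3 4];
  B = [5 6; 7 8];
  C = A * B                        % = [19 22; 43 50]
  A(:,1)*B(1,:) + A(:,2)*B(2,:)    % same: sum of outer products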

Part 2: Matrix Properties
- (A few) special matrices
- Matrix transformations & the determinant
- Matrices & systems of algebraic equations

Special matrices: diagonal matrix. This acts like scalar multiplication: it scales the i-th coordinate by the i-th diagonal entry.

Special matrices: identity matrix I, which satisfies Ix = x for all x.

Special matrices: inverse matrix M^-1, which satisfies M^-1 M = M M^-1 = I. Does the inverse always exist?
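A sketch in MATLAB (the second matrix is the slides' running example):

  D = diag([2 3]);   % diagonal matrix: scales axis 1 by 2, axis 2 by 3
  D * [1; 1]         % = [2; 3]
  M = [0 3; 2 1];
  inv(M) * M         % = 2 x 2 identity (M is invertible)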

How does a matrix transform a square? Apply M to the unit square with corners (1,0) and (0,1): the corners map to M(1,0) = (0,2) and M(0,1) = (3,1), so the square maps to a parallelogram.

Geometric definition of the determinant: the factor by which a matrix scales the area of the unit square (signed, with the sign recording whether orientation is flipped).
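A numeric check (matrix read off the slide figures):

  M = [0 3; 2 1];   % maps (1,0) -> (0,2) and (0,1) -> (3,1)
  det(M)            % = -6: the unit square becomes a parallelogram of
                    % area 6, and the orientation is reversed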

Example: solve the algebraic equation Mx = b for x. If M is invertible, the solution is x = M^-1 b.
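A sketch in MATLAB (b is hypothetical; M is the running example):

  M = [0 3; 2 1];
  b = [3; 5];
  x = M \ b     % backslash solves M*x = b; here x = (2,1)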

Example of an underdetermined system: some non-zero vectors are sent to 0. The set of all x with Mx = 0 is called the "nullspace" of M. This happens because det(M) = 0, so M is not invertible. (If det(M) isn't 0, the only solution of Mx = 0 is x = 0.)
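A quick check on the rank-1 matrix from the rank example:

  M = [1 -2; 2 -4];
  det(M)            % = 0, so M is not invertible
  null(M)           % unit vector spanning the nullspace (the (2,1) direction)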

Part 3: Eigenvectors & eigenvalues

What do matrices do to vectors? Apply M (columns (0,2) and (3,1)) to the vector (2,1).

Recall: Mx is a weighted sum of the columns of M, so M(2,1) = 2·(0,2) + 1·(3,1) = (3,5).

What do matrices do to vectors? The new vector (3,5) is: 1) rotated and 2) scaled relative to the input (2,1).

Are there any special vectors that only get scaled?

Try (1,1): M(1,1) = 1·(0,2) + 1·(3,1) = (3,3) = 3·(1,1).

For this special vector, multiplying by M is like multiplying by a scalar. (1,1) is called an eigenvector of M, and 3 (the scaling factor) is called the eigenvalue associated with this eigenvector.

Are there any other eigenvectors? Yes! The easiest way to find them is with MATLAB's eig command. Exercise: verify that (-1.5, 1) is also an eigenvector of M, with eigenvalue λ_2 = -2. Note: eigenvectors are only defined up to a scale factor; conventions are either to make them unit vectors or to make one of the elements 1.
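A sketch, using the matrix M = [0 3; 2 1] inferred from the slides' figures:

  M = [0 3; 2 1];
  [E, L] = eig(M)    % columns of E: eigenvectors; diagonal of L: eigenvalues
  M * [-1.5; 1]      % = (3,-2) = -2 * (-1.5,1), confirming the exercise
  % eig returns unit-length eigenvectors, so the (1,1) and (-1.5,1)
  % directions appear rescaled in E.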

Step back: eigenvectors obey the equation Mx = λx, i.e. (M - λI)x = 0.

A non-zero solution x exists only when det(M - λI) = 0. This is called the characteristic equation for λ. In general, for an N × N matrix, there are N eigenvalues (with N associated eigenvectors).

BREAK

Part 4: Examples (on blackboard)
- Principal Components Analysis (PCA)
- Single, linear differential equation
- Coupled differential equations

Part 5: Recap & Additional useful stuff
- Matrix diagonalization recap: transforming between original & eigenvector coordinates
- More special matrices & matrix properties
- Singular Value Decomposition (SVD)

Coupled differential equations dx/dt = Mx. Calculate the eigenvectors and eigenvalues of M. Each eigenvalue λ_i contributes a solution of typical form e^(λ_i t): the corresponding eigenvector component has dynamics c_i(t) = c_i(0) e^(λ_i t).

Practical program for approaching equations coupled through a term Mx:
Step 1: Find the eigenvalues and eigenvectors of M (eig(M) in MATLAB).
Step 2: Decompose x into its eigenvector components.
Step 3: Stretch/scale each eigenvalue component.
Step 4: (Solve for c and) transform back to original coordinates.
(See the sketch below.)
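A sketch of the four steps in MATLAB, using the running example matrix and a hypothetical initial condition:

  M = [0 3; 2 1];
  x0 = [1; 0];                  % initial condition
  [E, L] = eig(M);              % Step 1
  c0 = E \ x0;                  % Step 2: eigencoordinates of x0
  t = 0.5;
  ct = exp(diag(L) * t) .* c0;  % Step 3: scale each component by e^(lambda_i t)
  xt = E * ct                   % Step 4: back to original coordinates
  expm(M * t) * x0              % check: matches the matrix exponential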

Putting it all together… Where (step 1): [E, L] = eig(M) in MATLAB gives the eigenvectors (columns of E) and the eigenvalues (diagonal of L).

Putting it all together… Step 2: transform into eigencoordinates: c = E^-1 x. Step 3: scale by λ_i along the i-th eigencoordinate. Step 4: transform back to the original coordinate system: x = Ec.

Left eigenvectors. The rows of E^-1 are called the left eigenvectors because they satisfy E^-1 M = Λ E^-1. Together with the eigenvalues, they determine how x is decomposed into each of its eigenvector components.

Putting it all together… Original matrix: M = E Λ E^-1. Matrix in the eigencoordinate system: Λ = E^-1 M E.

Note: M and Lambda look very different. Q: Are there any properties that are preserved between them? A: Yes, two very important ones: 1. Trace: tr(M) = λ_1 + λ_2 + … + λ_N. 2. Determinant: det(M) = λ_1 · λ_2 · … · λ_N.
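A numeric check with the running example matrix:

  M = [0 3; 2 1];
  lams = eig(M);
  [trace(M), sum(lams)]   % both equal 1
  [det(M), prod(lams)]    % both equal -6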

Special Matrices: Normal matrix. A normal matrix is one whose eigenvectors are all orthogonal. One can then transform to eigencoordinates ("change basis") with a simple rotation of the coordinate axes: E is a rotation (unitary or orthogonal) matrix, defined by E^T E = I (equivalently E^-1 = E^T), so that if c = E^T x then x = Ec.

Special Matrices: Normal matrix. Eigenvector decomposition in this case: M = E Λ E^-1 = E Λ E^T. Left and right eigenvectors are identical!

Special Matrices: Symmetric matrix (M = M^T), e.g. covariance matrices, Hopfield network. Properties:
– Eigenvalues are real
– Eigenvectors are orthogonal (i.e. it's a normal matrix)
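A sketch with a small symmetric matrix (hypothetical values):

  S = [2 1; 1 2];     % symmetric: S equals its transpose
  [E, L] = eig(S)     % real eigenvalues 1 and 3; E'*E = identity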

SVD: Decomposes matrix into outer products (e.g. of a neural/spatial mode and a temporal mode). [Figure: an N × T data matrix, with rows n = 1, …, N (neurons) and columns t = 1, …, T (time points).]

SVD: Decomposes matrix into outer products (e.g. of a neural/spatial mode and a temporal mode) Rows of V T are eigenvectors of M T M Columns of U are eigenvectors of MM T Note: the eigenvalues are the same for M T M and MM T

SVD: Decomposes matrix into outer products (e.g. of a neural/spatial mode and a temporal mode): rows of V^T are eigenvectors of M^T M, columns of U are eigenvectors of M M^T. Thus, SVD pairs "spatial" patterns with associated "temporal" profiles through the outer product.
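A sketch of this outer-product reading, on a small hypothetical neurons-by-time matrix:

  M = [1 2 3 4; 2 4 6 8; 1 1 1 1];   % 3 "neurons" x 4 time points
  [U, S, V] = svd(M);
  % M is the sum of singular-value-weighted outer products
  % (spatial mode U(:,k) paired with temporal profile V(:,k)'):
  S(1,1)*U(:,1)*V(:,1)' + S(2,2)*U(:,2)*V(:,2)' + S(3,3)*U(:,3)*V(:,3)'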

The End