CSci 6971: Image Registration
Lecture 2: Vectors and Matrices
January 16, 2004
Prof. Chuck Stewart, RPI
Dr. Luis Ibanez, Kitware


Image Registration Lecture 2

Lecture Overview
- Vectors
- Matrices
  - Basics
  - Orthogonal matrices
  - Singular Value Decomposition (SVD)

Preliminary Comments
- Some of this should be review; all of it might be review
- This is really only background, and not a main focus of the course
- All of the material is covered in standard linear algebra texts
- I use Gilbert Strang's Linear Algebra and Its Applications

Vectors: Definition
- Formally, a vector is an element of a vector space
- Informally (and somewhat incorrectly), we will use vectors to represent both point locations and directions
- Algebraically, we write v = (v1, v2, ..., vn)^T
- Note that we will usually treat vectors as column vectors and use the transpose notation to make the writing more compact

Vectors: Example
[Figure: the points (-4, 6, 5) and (0, 0, -1) plotted on x, y, z axes]

Vectors: Addition
- Vectors of the same dimension are added component-wise: p + q = (p1 + q1, ..., pn + qn)^T
[Figure: geometric view of p, q, and p + q]
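To make the component-wise rule concrete, here is a minimal pure-Python sketch (not from the original slides; `vec_add` is a hypothetical helper name):

```python
def vec_add(p, q):
    """Add two vectors of equal dimension component-wise."""
    assert len(p) == len(q), "vectors must have the same dimension"
    return [a + b for a, b in zip(p, q)]

p = [2.0, -1.0, 3.0]
q = [1.0, 4.0, -2.0]
print(vec_add(p, q))  # [3.0, 3.0, 1.0]
```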

Vectors: Scalar Multiplication
- Simplest form of multiplication involving vectors
- In particular: c p = (c p1, c p2, ..., c pn)^T
[Figure: p and the scaled vector c p]

Vectors: Lengths, Magnitudes, Distances
- The length or magnitude of a vector is ||p|| = sqrt(p1^2 + p2^2 + ... + pn^2)
- The distance between two vectors is ||p - q||
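These two formulas can be sketched in a few lines of pure Python (the function names `length` and `distance` are my own, not from the slides):

```python
import math

def length(v):
    """Euclidean length (magnitude) of a vector: sqrt(sum of squares)."""
    return math.sqrt(sum(x * x for x in v))

def distance(p, q):
    """Distance between two vectors = length of their difference."""
    return length([a - b for a, b in zip(p, q)])

print(length([3.0, 4.0]))                # 5.0
print(distance([1.0, 1.0], [4.0, 5.0]))  # 5.0
```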

Vectors: Dot (Scalar/Inner) Product
- Second means of multiplication involving vectors
- In particular, p . q = p1 q1 + p2 q2 + ... + pn qn
- We'll see a different notation for writing the scalar product using matrix multiplication soon
- Note that p . p = ||p||^2
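A short pure-Python sketch of the dot product, including the ||p||^2 identity noted above (a hypothetical `dot` helper, not from the slides):

```python
def dot(p, q):
    """Scalar (dot/inner) product of two equal-length vectors."""
    return sum(a * b for a, b in zip(p, q))

p = [1.0, 2.0, 3.0]
print(dot(p, [4.0, -5.0, 6.0]))  # 4 - 10 + 18 = 12.0
print(dot(p, p))                 # 14.0, which equals ||p||^2
```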

Unit Vectors
- A unit (direction) vector is a vector whose magnitude is 1
- Typically, we will use a "hat" to denote a unit vector

Angle Between Vectors
- We can compute the angle between two vectors using the scalar product: cos θ = (p . q) / (||p|| ||q||)
- Two non-zero vectors are orthogonal if and only if p . q = 0
[Figure: vectors p and q with angle θ between them]
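The angle formula translates directly to code; this is a minimal sketch (the clamp guards against floating-point round-off pushing the cosine slightly outside [-1, 1]):

```python
import math

def dot(p, q):
    return sum(a * b for a, b in zip(p, q))

def angle(p, q):
    """Angle in radians between two non-zero vectors, via the dot product."""
    cos_theta = dot(p, q) / (math.sqrt(dot(p, p)) * math.sqrt(dot(q, q)))
    return math.acos(max(-1.0, min(1.0, cos_theta)))

# Orthogonal vectors: dot product is 0, so the angle is pi/2
print(angle([1.0, 0.0], [0.0, 2.0]))
```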

Cross (Outer) Product of Vectors
- Given two 3-vectors, p and q, the cross product is a vector perpendicular to both
- In component form, p × q = (p2 q3 - p3 q2, p3 q1 - p1 q3, p1 q2 - p2 q1)^T
- Finally, ||p × q|| = ||p|| ||q|| sin θ

Looking Ahead a Bit to Transformations
- Be aware that lengths and angles are preserved by only very special transformations
- Therefore, in general:
  - Unit vectors will no longer be unit vectors after applying a transformation
  - Orthogonal vectors will no longer be orthogonal after applying a transformation

Matrices - Definition
- Matrices are rectangular arrays of numbers arranged in m rows and n columns, with each number subscripted by two indices: a_ij is the entry in row i and column j
- A short-hand notation for this is A = [a_ij]

Special Matrices: The Identity
- The identity matrix, denoted I, I_n, or I_{n×n}, is a square matrix with n rows and columns having 1's on the main diagonal and 0's everywhere else

Diagonal Matrices
- A diagonal matrix is a square matrix that has 0's everywhere except on the main diagonal
- Notational short-hand: diag(d1, ..., dn) denotes the n×n diagonal matrix with entries d1, ..., dn on the main diagonal

Matrix Transpose and Symmetry
- The transpose of a matrix is one where the rows and columns are interchanged: (A^T)_ij = a_ji
- If A = A^T then the matrix is symmetric
- Only square matrices (m = n) can be symmetric

Examples
[Two example matrices were shown: one that is not symmetric and one that is symmetric]

Matrix Addition
- Two matrices can be added if and only if (iff) they have the same number of rows and the same number of columns
- Matrices are added component-wise: (A + B)_ij = a_ij + b_ij

Matrix Scalar Multiplication
- Any matrix can be multiplied by a scalar, component-wise: (cA)_ij = c a_ij

Matrix Multiplication
- The product of an m×n matrix and an n×p matrix is an m×p matrix
- Entry i,j of the result matrix is the dot product of row i of A and column j of B: c_ij = a_i1 b_1j + a_i2 b_2j + ... + a_in b_nj
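The row-times-column rule, sketched in pure Python with matrices stored as lists of rows (`matmul` is a hypothetical helper name, not from the slides):

```python
def matmul(A, B):
    """Multiply an m×n matrix by an n×p matrix (lists of rows).
    Entry (i, j) is the dot product of row i of A and column j of B."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```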

Vectors as Matrices
- Vectors, which we usually write as column vectors, can be thought of as n×1 matrices
- The transpose of a vector is a 1×n matrix - a row vector
- These allow us to write the scalar product as a matrix multiplication: p . q = p^T q

Notation
- We will tend to write matrices using boldface capital letters
- We will tend to write vectors as boldface small letters

Square Matrices
- Much of the remaining discussion will focus only on square matrices:
  - Trace
  - Determinant
  - Inverse
  - Eigenvalues
  - Orthogonal / orthonormal matrices
- When we discuss the singular value decomposition we will be back to non-square matrices

Trace of a Matrix
- The sum of the terms on the main diagonal of a square matrix: tr(A) = a_11 + a_22 + ... + a_nn
- The trace equals the sum of the eigenvalues of the matrix

Determinant
- Notation: det(A) or |A|
- Recursive definition:
  - When n = 1, det(A) = a_11
  - When n = 2, det(A) = a_11 a_22 - a_12 a_21

Determinant (continued)
- For n > 2, choose any row i of A, and define M_ij to be the (n-1)×(n-1) matrix formed by deleting row i and column j of A; then det(A) = sum over j of (-1)^(i+j) a_ij det(M_ij)
- We get the same formula by choosing any column j of A and summing over the rows
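The recursive definition maps directly onto a recursive function. A minimal sketch, expanding along the first row (fine for teaching, though far too slow for large matrices):

```python
def det(A):
    """Determinant by cofactor expansion along the first row (row i = 0)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M_0j: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))   # 24
```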

Some Properties of the Determinant
- If any two rows or any two columns are equal, the determinant is 0
- Interchanging two rows or interchanging two columns reverses the sign of the determinant
- The determinant of A equals the product of the eigenvalues of A
- For square matrices A and B, det(AB) = det(A) det(B)

Matrix Inverse
- The inverse of a square matrix A is the unique matrix A^{-1} such that A A^{-1} = A^{-1} A = I
- Matrices that do not have an inverse are said to be non-invertible or singular
- A matrix is invertible if and only if its determinant is non-zero
- We will not worry about the mechanism of calculating inverses, except using the singular value decomposition

Eigenvalues and Eigenvectors
- A scalar λ and a vector v are, respectively, an eigenvalue and an associated (unit) eigenvector of square matrix A if A v = λ v
- For example, if we think of A as a transformation and if λ = 1, then A v = v implies v is a "fixed point" of the transformation
- Eigenvalues are found by solving the equation det(A - λI) = 0
- Once the eigenvalues are known, eigenvectors are found by finding the nullspace (we will not discuss this) of A - λI
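For a 2×2 symmetric matrix, det(A - λI) = 0 is just a quadratic, so the eigenvalues can be computed in closed form. A small sketch under that restriction (general n×n eigensolvers need a numerical library):

```python
import math

def eig2x2_symmetric(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]:
    roots of lam^2 - (a + c)*lam + (a*c - b*b) = 0."""
    tr = a + c            # trace = sum of eigenvalues
    d = a * c - b * b     # determinant = product of eigenvalues
    disc = math.sqrt(tr * tr - 4 * d)  # non-negative for symmetric A
    return (tr + disc) / 2, (tr - disc) / 2

lams = eig2x2_symmetric(2.0, 1.0, 2.0)
print(lams)  # (3.0, 1.0) for the matrix [[2, 1], [1, 2]]
```

Note the trace (4) and determinant (3) of the example match the sum and product of the eigenvalues, as stated on the earlier slides.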

Eigenvalues of Symmetric Matrices
- They are all real (as opposed to imaginary), which can be seen by remembering properties of vector magnitudes
- We can also show that eigenvectors associated with distinct eigenvalues of a symmetric matrix are orthogonal
- We can therefore write a symmetric matrix (I don't expect you to derive this) as A = V Λ V^T, where the columns of V are the orthonormal eigenvectors and Λ is the diagonal matrix of eigenvalues

Orthonormal Matrices
- A square matrix is orthonormal (sometimes called orthogonal) iff A^T A = I
- In other words, A^T is the inverse of A
- Based on properties of inverses this immediately implies A A^T = I
- This means the vectors formed by any two rows or any two columns satisfy r_i . r_j = δ_ij, the Kronecker delta, which is 1 if i = j and 0 otherwise

Orthonormal Matrices - Properties
- The determinant of an orthonormal matrix is either 1 or -1, because det(A)^2 = det(A^T) det(A) = det(A^T A) = det(I) = 1
- Multiplying a vector by an orthonormal matrix does not change the vector's length: ||A v|| = ||v||
- An orthonormal matrix whose determinant is 1 (-1) is called a rotation (reflection)
- Of course, as discussed on the previous slide, A^{-1} = A^T
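A 2D rotation matrix is the simplest orthonormal example; this sketch checks both properties above, length preservation and det = 1 (the angle 0.7 is arbitrary):

```python
import math

theta = 0.7  # any angle; R is a 2D rotation, hence orthonormal
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

v = [3.0, 4.0]
Rv = [sum(R[i][j] * v[j] for j in range(2)) for i in range(2)]

def length(u):
    return math.sqrt(sum(x * x for x in u))

print(length(v), length(Rv))  # both 5.0 (up to rounding): length preserved

# det(R) = cos^2 + sin^2 = 1, so R is a rotation rather than a reflection
detR = R[0][0] * R[1][1] - R[0][1] * R[1][0]
print(detR)
```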

Singular Value Decomposition (SVD)
- Consider an m×n matrix A, and assume m ≥ n
- A can be "decomposed" into the product of 3 matrices: A = U W V^T
- Where:
  - U is m×n with orthonormal columns
  - W is an n×n diagonal matrix of "singular values", and
  - V is an n×n orthonormal matrix
- If m = n then U is an orthonormal matrix

Properties of the Singular Values
- The singular values are real, non-negative, and conventionally ordered: w_1 ≥ w_2 ≥ ... ≥ w_n ≥ 0
- The number of non-zero singular values is equal to the rank of A

SVD and Matrix Inversion
- For a non-singular, square matrix with A = U W V^T, the inverse of A is A^{-1} = V W^{-1} U^T, where W^{-1} = diag(1/w_1, ..., 1/w_n)
- You should confirm this for yourself!
- Note, however, this isn't always the best way to compute the inverse

SVD and Solving Linear Systems
- Many times problems reduce to finding the vector x that minimizes ||A x - b||^2
- Taking the derivative (I don't necessarily expect that you can do this, but it isn't hard) with respect to x, setting the result to 0 and solving implies x = (A^T A)^{-1} A^T b
- Computing the SVD of A (assuming it is full-rank) results in x = V W^{-1} U^T b
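The SVD route needs a numerical library, so as a small stand-in this sketch solves the normal equations A^T A x = A^T b directly for a hypothetical line-fitting example (fitting y = c0 + c1*x to three points); the data and variable names are my own, not from the slides:

```python
# Fit y = c0 + c1*x to three points by minimizing ||A x - b||^2,
# using the normal equations A^T A x = A^T b from the slide.
xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 5.0]       # these happen to lie exactly on y = 1 + 2x

A = [[1.0, x] for x in xs]  # design matrix: one row [1, x] per data point
b = ys

# Form the 2x2 matrix A^T A and the 2-vector A^T b
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]

# Solve the 2x2 system by Cramer's rule
d = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
c0 = (Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / d
c1 = (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / d
print(c0, c1)  # 1.0 2.0: recovers the line exactly
```

In practice the SVD-based solution x = V W^{-1} U^T b is preferred because forming A^T A squares the condition number of the problem.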

Summary
- Vectors: definition, addition, dot (scalar / inner) product, length, etc.
- Matrices: definition, addition, multiplication
- Square matrices: trace, determinant, inverse, eigenvalues
- Orthonormal matrices
- SVD

Looking Ahead to Lecture 3
- Images and image coordinate systems
- Transformations
  - Similarity
  - Affine
  - Projective

Practice Problems
- A handout will be given with Lecture 3