Computer Graphics Recitation 5.


Motivation – Shape Matching What is the best transformation that aligns the dinosaur with the dog?

Motivation – Shape Matching Regard the shapes as sets of points and try to “match” these sets using a linear transformation. The above is not a good match…

Motivation – Shape Matching Regard the shapes as sets of points and try to “match” these sets using a linear transformation. To find the best transformation we need to know SVD…

Finding underlying dimensionality Given a set of points, find the best line that approximates it – reduce the dimension of the data set…

Finding underlying dimensionality Given a set of points, find the best line that approximates it – reduce the dimension of the data set…

Finding underlying dimensionality When we learn PCA (Principal Component Analysis), we’ll know how to find these axes that minimize the sum of squared distances.
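As a hedged preview (an addition, not part of the slides): for a centered point set, the axis minimizing the sum of squared distances is the top eigenvector of the covariance matrix – exactly what PCA computes. A minimal numpy sketch with illustrative data:

```python
import numpy as np

# Illustrative 2D point set (not from the slides).
points = np.array([[0.0, 0.1], [1.0, 0.9], [2.0, 2.2], [3.0, 2.8], [4.0, 4.1]])

centered = points - points.mean(axis=0)      # move the centroid to the origin
cov = centered.T @ centered / len(points)    # 2x2 covariance matrix

eigvals, eigvecs = np.linalg.eigh(cov)       # covariance is symmetric => eigh
direction = eigvecs[:, np.argmax(eigvals)]   # axis of the best-fit line
print("best-fit line direction:", direction)
```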

PCA and SVD PCA and SVD are important tools not only in graphics but also in statistics, computer vision and more. To learn about them, we first need to get familiar with eigenvalue decomposition. So, today we’ll start with a reminder of linear algebra basics.

The plan today Basic linear algebra review. Eigenvalues and eigenvectors.

Vector space Informal definition: V ≠ ∅ (a non-empty set of vectors); v, w ∈ V ⇒ v + w ∈ V (closed under addition); v ∈ V, α is a scalar ⇒ αv ∈ V (closed under multiplication by a scalar). The formal definition includes axioms about associativity and distributivity of the + and · operators.

Subspace - example Let l be a 2D line through the origin O. L = {p − O | p ∈ l} is a linear subspace of R².

Subspace - example Let Π be a plane through the origin in 3D. V = {p − O | p ∈ Π} is a linear subspace of R³.

Linear independence The vectors {v1, v2, …, vk} are a linearly independent set if: α1v1 + α2v2 + … + αkvk = 0 ⇒ αi = 0 ∀ i. It means that none of the vectors can be obtained as a linear combination of the others.

Linear independence - example Parallel vectors are always dependent: v = 2.4w ⇒ v + (−2.4)w = 0. Orthogonal (non-zero) vectors are always independent.

V = {1v1 + 2v2 + … + nvn | i is scalar} Basis {v1, v2, …, vn} are linearly independent {v1, v2, …, vn} span the whole vector space V: V = {1v1 + 2v2 + … + nvn | i is scalar} Any vector in V is a unique linear combination of the basis. The number of basis vectors is called the dimension of V.

Basis - example The standard basis of R³ – three unit orthogonal vectors x, y, z (sometimes called i, j, k or e1, e2, e3).

Matrix representation Let {v1, v2, …, vn} be a basis of V. Every v ∈ V has a unique representation v = α1v1 + α2v2 + … + αnvn. Denote v by the column vector of its coefficients, (α1, α2, …, αn)ᵀ. The basis vectors themselves are therefore denoted by the standard unit column vectors e1 = (1, 0, …, 0)ᵀ, …, en = (0, …, 0, 1)ᵀ.

Linear operators A : V → W is called a linear operator if: A(v + w) = A(v) + A(w) and A(αv) = αA(v). In particular, A(0) = 0. Linear operators we know: scaling; rotation, reflection. Translation is not linear – it moves the origin.

Linear operators - illustration Rotation is a linear operator: rotating v, w and v + w by the same angle gives R(v), R(w) and R(v + w), and the figure shows that R(v + w) = R(v) + R(w).

Matrix representation of linear operators Look at A(v1), …, A(vn), where {v1, v2, …, vn} is a basis. For all other vectors: v = α1v1 + α2v2 + … + αnvn, so A(v) = α1A(v1) + α2A(v2) + … + αnA(vn). So, knowing what A does to the basis is enough. The matrix representing A has A(v1), …, A(vn) as its columns.
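An illustrative sketch (an addition, not from the slides): building the matrix of a linear operator from its action on the standard basis – here a 90° rotation in 2D, whose columns are A(e1) and A(e2) – and checking linearity:

```python
import numpy as np

# Action of the operator on the standard basis (90-degree rotation, illustrative).
A_e1 = np.array([0.0, 1.0])          # A(e1)
A_e2 = np.array([-1.0, 0.0])         # A(e2)
A = np.column_stack([A_e1, A_e2])    # the matrix representing A

# Linearity: A(v) = alpha1*A(e1) + alpha2*A(e2) for v = (alpha1, alpha2).
v = np.array([3.0, 2.0])
print(A @ v, 3.0 * A_e1 + 2.0 * A_e2)   # both give (-2, 3)
```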


Matrix operations Addition, subtraction, scalar multiplication – simple… Matrix times column vector: (Av)i = Σj aij vj, i.e. each entry of Av is the inner product of the i-th row of A with v; equivalently, Av is a linear combination of the columns of A with the entries of v as coefficients.

Matrix operations Transposition: make the rows into columns. (AB)ᵀ = BᵀAᵀ.

Matrix operations The inner product can be written in matrix form: ⟨v, w⟩ = vᵀw.
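A short numpy check (an addition) of the two identities above, (AB)ᵀ = BᵀAᵀ and ⟨v, w⟩ = vᵀw, on random matrices and vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
v, w = rng.standard_normal(3), rng.standard_normal(3)

print(np.allclose((A @ B).T, B.T @ A.T))   # (AB)^T = B^T A^T
print(np.allclose(np.dot(v, w), v.T @ w))  # inner product as a matrix product
```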

Matrix properties Matrix A (n×n) is non-singular if ∃B such that AB = BA = I. B = A⁻¹ is called the inverse of A. A is non-singular ⇔ det A ≠ 0. If A is non-singular then the equation Ax = b has a unique solution for each b. A is non-singular ⇔ the rows of A are a linearly independent set of vectors.
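A hedged numpy sketch (an addition) of these properties for an illustrative non-singular matrix: nonzero determinant, a unique solution of Ax = b, and an inverse:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # illustrative non-singular matrix
b = np.array([1.0, 2.0])

print(np.linalg.det(A))           # != 0, so A is non-singular
x = np.linalg.solve(A, b)         # the unique solution of Ax = b
print(np.allclose(A @ x, b))      # True
print(np.allclose(np.linalg.inv(A) @ A, np.eye(2)))  # A^-1 A = I
```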

Vector product v × w is a new vector: ||v × w|| = ||v|| ||w|| |sin ∠(v, w)|; v × w ⊥ v and v × w ⊥ w; v, w, v × w form a right-handed system. In matrix form in 3D: v × w = [v]× w, where [v]× is the skew-symmetric matrix with rows (0, −v3, v2), (v3, 0, −v1), (−v2, v1, 0).
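A sketch (an addition) of the cross product written as a matrix product with the skew-symmetric matrix of v, compared against numpy's built-in np.cross, plus the orthogonality property:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x such that skew(v) @ w == v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

v, w = np.array([1.0, 2.0, 3.0]), np.array([0.0, 1.0, -1.0])
print(np.allclose(skew(v) @ w, np.cross(v, w)))              # True
# v x w is orthogonal to both v and w:
print(np.dot(np.cross(v, w), v), np.dot(np.cross(v, w), w))  # 0.0 0.0
```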

Orthogonal matrices Matrix A (n×n) is orthogonal if A⁻¹ = Aᵀ. It follows that AAᵀ = AᵀA = I. The rows (and columns) of A are orthonormal vectors! Proof: the (i, j) entry of AᵀA is viᵀvj, where v1, …, vn are the columns of A; AᵀA = I therefore means viᵀvj = δij, i.e. ⟨vi, vi⟩ = 1 ⇒ ||vi|| = 1 and ⟨vi, vj⟩ = 0 for i ≠ j. The same argument applied to AAᵀ = I gives the orthonormality of the rows.

Orthogonal operators A is an orthogonal matrix ⇒ A represents a linear operator that preserves the inner product (i.e., preserves lengths and angles): ⟨Av, Aw⟩ = (Av)ᵀ(Aw) = vᵀ(AᵀA)w = vᵀw = ⟨v, w⟩. Therefore, ||Av|| = ||v|| and the angle between Av and Aw equals the angle between v and w.

Orthogonal operators - example Rotation by θ around the z-axis in R³: Rz(θ) has rows (cos θ, −sin θ, 0), (sin θ, cos θ, 0), (0, 0, 1). In fact, any orthogonal 3×3 matrix represents a rotation around some axis and/or a reflection: det A = +1 – rotation only; det A = −1 – with a reflection.
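A numpy sketch (an addition) of the rotation about the z-axis: it is orthogonal (RᵀR = I), has determinant +1, and preserves lengths:

```python
import numpy as np

theta = np.pi / 6
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])            # rotation by theta around the z-axis

print(np.allclose(R.T @ R, np.eye(3)))     # orthogonal: R^-1 = R^T
print(np.linalg.det(R))                    # +1 => pure rotation, no reflection

v = np.array([1.0, 2.0, 3.0])
print(np.linalg.norm(R @ v), np.linalg.norm(v))   # lengths are preserved
```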

Eigenvectors and eigenvalues Let A be a square n×n matrix. v is an eigenvector of A if: Av = λv (λ is a scalar) and v ≠ 0. The scalar λ is called an eigenvalue. Av = λv ⇒ A(αv) = λ(αv) ⇒ αv is also an eigenvector. Av = λv, Aw = λw ⇒ A(v + w) = λ(v + w). Therefore, the eigenvectors of the same λ form a linear subspace.

Eigenvectors and eigenvalues An eigenvector spans an axis that is invariant under A – it remains the same under the transformation. Example – the axis of rotation is an eigenvector of the rotation transformation (with eigenvalue 1).

Finding eigenvalues For which λ is there a non-zero solution to Ax = λx? Ax = λx ⇔ Ax − λx = 0 ⇔ Ax − λIx = 0 ⇔ (A − λI)x = 0. So, a non-trivial solution exists ⇔ det(A − λI) = 0. ΔA(λ) = det(A − λI) is a polynomial of degree n. It is called the characteristic polynomial of A. The roots of ΔA are the eigenvalues of A. Therefore, eigenvalues always exist – at least complex ones; if n is odd, there is at least one real eigenvalue.
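A small numpy sketch (an addition): the eigenvalues are the roots of the characteristic polynomial, so the roots of np.poly(A) agree with np.linalg.eigvals(A) (possibly in a different order). The matrix is illustrative:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])     # illustrative 2x2 matrix

coeffs = np.poly(A)            # coefficients of the characteristic polynomial
print(np.roots(coeffs))        # its roots: 3 and 1
print(np.linalg.eigvals(A))    # the eigenvalues directly: 3 and 1
```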

Example of computing ΔA(λ) (worked on the slide): the resulting characteristic polynomial cannot be factorized over R; over C it factors and gives complex eigenvalues.

Computing eigenvectors Solve the equation (A − λI)x = 0 for each eigenvalue λ. We’ll get a subspace of solutions.
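A numpy sketch (an addition) of computing eigenvectors for an illustrative matrix: np.linalg.eig returns them directly, and each one indeed solves (A − λI)x = 0:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)     # eigvecs[:, i] belongs to eigvals[i]

for lam, x in zip(eigvals, eigvecs.T):
    print(np.allclose((A - lam * np.eye(2)) @ x, 0))   # (A - lambda I) x = 0
    print(np.allclose(A @ x, lam * x))                  # A x = lambda x
```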

Spectra and diagonalization The set of all the eigenvalues of A is called the spectrum of A. A is diagonalizable if A has n independent eigenvectors. Then AV = VD, where the columns of V are the eigenvectors v1, v2, …, vn and D = diag(λ1, λ2, …, λn).

Spectra and diagonalization Therefore, A = VDV⁻¹, where D is diagonal. A represents a scaling along the eigenvector axes: V⁻¹ changes coordinates into the eigenvector basis, D scales each axis by its eigenvalue, and V changes back.
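A numpy sketch (an addition) of the diagonalization A = VDV⁻¹ for an illustrative matrix: reconstructing A from its eigenvectors and eigenvalues, and applying A as "change to the eigenbasis, scale, change back":

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])                         # diagonalizable, illustrative

eigvals, V = np.linalg.eig(A)                      # columns of V are eigenvectors
D = np.diag(eigvals)
print(np.allclose(A, V @ D @ np.linalg.inv(V)))    # A = V D V^-1

# Applying A = scaling by the eigenvalues in the eigenvector coordinates:
v = np.array([1.0, 1.0])
coords = np.linalg.solve(V, v)                     # coordinates of v in the eigenbasis
print(np.allclose(A @ v, V @ (eigvals * coords)))  # same result
```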

Spectra and diagonalization A is called normal if AAᵀ = AᵀA. Important example: symmetric matrices, A = Aᵀ. It can be proved that normal n×n matrices have exactly n linearly independent eigenvectors (over C). If A is symmetric, all eigenvalues of A are real and it has n independent real orthonormal eigenvectors.
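A sketch (an addition) for the symmetric case: np.linalg.eigh returns real eigenvalues and orthonormal eigenvectors, so V is orthogonal and A = VDVᵀ. The matrix is illustrative:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])                      # symmetric, illustrative

eigvals, V = np.linalg.eigh(A)                       # real eigenvalues, orthonormal eigenvectors
print(eigvals)                                       # all real
print(np.allclose(V.T @ V, np.eye(3)))               # V is orthogonal
print(np.allclose(A, V @ np.diag(eigvals) @ V.T))    # A = V D V^T
```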

Spectra and diagonalization (figures: for a symmetric matrix A = VDVᵀ, the orthogonal matrix V maps one orthonormal basis to another – a rotation, possibly combined with a reflection – so applying A amounts to rotating into the orthonormal eigenvector basis, scaling along its axes, and rotating back.)

Why SVD… A diagonalizable matrix is essentially a scaling (in its eigenvector basis). Most matrices are not diagonalizable – they do other things along with scaling (such as rotation). So, to understand how general matrices behave, eigenvalues alone are not enough. SVD tells us how general linear transformations behave, and more…

See you next time