Computer Graphics Recitation 5
Motivation – Shape Matching
What is the best transformation that aligns the dinosaur with the dog?
Motivation – Shape Matching
Regard the shapes as sets of points and try to “match” these sets using a linear transformation. The above is not a good matching…
Motivation – Shape Matching
Regard the shapes as sets of points and try to “match” these sets using a linear transformation. To find the best transformation, we need to know SVD…
Finding underlying dimensionality
Given a set of points, find the best line that approximates it – reduce the dimension of the data set…
[figure: a point set with the original axis $x$ and the fitted axis $x'$]
Finding underlying dimensionality
When we learn PCA (Principal Component Analysis), we will know how to find these axes – the ones that minimize the sum of squared distances.
PCA and SVD
PCA and SVD are important tools not only in graphics but also in statistics, computer vision and more. To learn about them, we first need to get familiar with eigenvalue decomposition. So, today we’ll start with a reminder of linear algebra basics.
The plan today
Basic linear algebra review.
Eigenvalues and eigenvectors.
Vector space
Informal definition: $V$ is a non-empty set of vectors such that:
$v, w \in V \;\Rightarrow\; v + w \in V$ (closed under addition)
$v \in V$, $\alpha$ a scalar $\;\Rightarrow\; \alpha v \in V$ (closed under multiplication by a scalar)
The formal definition includes axioms about associativity and distributivity of the $+$ and $\cdot$ operators.
Subspace – example
Let $l$ be a 2D line through the origin $O$.
$L = \{\, p - O \mid p \in l \,\}$ is a linear subspace of $\mathbb{R}^2$.
Subspace – example
Let $\Pi$ be a plane through the origin in 3D.
$V = \{\, p - O \mid p \in \Pi \,\}$ is a linear subspace of $\mathbb{R}^3$.
Linear independence
The vectors $\{v_1, v_2, \dots, v_k\}$ are a linearly independent set if:
$\alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_k v_k = 0 \;\Rightarrow\; \alpha_i = 0 \;\; \forall i$
It means that none of the vectors can be obtained as a linear combination of the others.
Linear independence – example
Parallel vectors are always dependent: if $v = 2.4\,w$, then $v + (-2.4)w = 0$.
Orthogonal vectors are always independent.
Basis
$\{v_1, v_2, \dots, v_n\}$ form a basis of $V$ if they are linearly independent and they span the whole vector space $V$:
$V = \{\, \alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_n v_n \mid \alpha_i \text{ is a scalar} \,\}$
Any vector in $V$ is a unique linear combination of the basis vectors. The number of basis vectors is called the dimension of $V$.
Basis – example
The standard basis of $\mathbb{R}^3$ – three orthogonal unit vectors $x$, $y$, $z$ (sometimes called $i$, $j$, $k$ or $e_1$, $e_2$, $e_3$).
Matrix representation
Let $\{v_1, v_2, \dots, v_n\}$ be a basis of $V$. Every $v \in V$ has a unique representation
$v = \alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_n v_n$
Denote $v$ by the column vector $(\alpha_1, \alpha_2, \dots, \alpha_n)^T$. The basis vectors are therefore denoted
$v_1 = (1, 0, \dots, 0)^T,\;\; v_2 = (0, 1, \dots, 0)^T,\;\; \dots,\;\; v_n = (0, 0, \dots, 1)^T$
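A minimal numpy sketch (not part of the original slides): the coordinates of a vector in a given basis are found by solving a linear system, and they are unique because the basis vectors are independent.

```python
import numpy as np

# A (non-standard) basis of R^2, one basis vector per column.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([3.0, 2.0])

# Coordinates alpha satisfy B @ alpha = v; the solution is unique
# because the columns of B are linearly independent (B non-singular).
alpha = np.linalg.solve(B, v)
print(alpha)                      # [1. 2.]  ->  v = 1*b1 + 2*b2
assert np.allclose(B @ alpha, v)  # reconstructs the original vector
```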
Linear operators
$A : V \to W$ is called a linear operator if:
$A(v + w) = A(v) + A(w)$
$A(\alpha v) = \alpha A(v)$
In particular, $A(0) = 0$.
Linear operators we know: scaling, rotation, reflection. Translation is not linear – it moves the origin.
Linear operators – illustration
Rotation is a linear operator:
[figure: vectors $v$, $w$, $v + w$ and their rotated images $R(v)$, $R(w)$, $R(v + w)$]
$R(v + w) = R(v) + R(w)$
Matrix representation of linear operators
Look at $A(v_1), \dots, A(v_n)$, where $\{v_1, v_2, \dots, v_n\}$ is a basis. For all other vectors:
$v = \alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_n v_n$
$A(v) = \alpha_1 A(v_1) + \alpha_2 A(v_2) + \dots + \alpha_n A(v_n)$
So, knowing what $A$ does to the basis is enough. The matrix representing $A$ has $A(v_1), \dots, A(v_n)$ as its columns:
$M_A = \big[\, A(v_1) \;\big|\; A(v_2) \;\big|\; \cdots \;\big|\; A(v_n) \,\big]$
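An added illustration (mine, not the slides'): the matrix of an operator can be assembled column by column from its action on the standard basis, here for a 90° rotation in the plane.

```python
import numpy as np

def rotate90(v):
    """The linear operator: rotation by 90 degrees in the plane."""
    x, y = v
    return np.array([-y, x])

# Columns of the matrix are the images of the standard basis vectors.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
M = np.column_stack([rotate90(e1), rotate90(e2)])
print(M)    # [[ 0. -1.]
            #  [ 1.  0.]]

# Knowing A on the basis determines A everywhere: M @ v == rotate90(v).
v = np.array([2.0, 3.0])
assert np.allclose(M @ v, rotate90(v))
```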
Matrix operations
Addition, subtraction, scalar multiplication – simple… Matrix by column vector:
$(Av)_i = \sum_j a_{ij} v_j$, i.e., $Av$ is a linear combination of the columns of $A$.
Matrix operations
Transposition: make the rows into columns.
$(AB)^T = B^T A^T$
Matrix operations
The inner product can be written in matrix form:
$\langle v, w \rangle = v^T w$
Matrix properties
A matrix $A$ ($n \times n$) is non-singular if $\exists B$ such that $AB = BA = I$.
$B = A^{-1}$ is called the inverse of $A$.
$A$ is non-singular $\;\Leftrightarrow\;$ $\det A \neq 0$.
If $A$ is non-singular, then the equation $Ax = b$ has a unique solution for each $b$.
$A$ is non-singular $\;\Leftrightarrow\;$ the rows of $A$ are a linearly independent set of vectors.
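A small added numpy sketch of these facts – checking $\det A \neq 0$ and solving $Ax = b$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# Non-singular <=> det A != 0, so Ax = b has a unique solution.
assert not np.isclose(np.linalg.det(A), 0.0)
x = np.linalg.solve(A, b)        # preferred over forming A^{-1}
assert np.allclose(A @ x, b)
assert np.allclose(np.linalg.inv(A) @ A, np.eye(2))
```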
Vector product
$v \times w$ is a new vector:
$\|v \times w\| = \|v\|\,\|w\|\,|\sin \angle(v, w)|$
$v \times w \perp v$, $\;v \times w \perp w$, and $(v,\, w,\, v \times w)$ is a right-hand system.
In matrix form in 3D:
$v \times w = \begin{pmatrix} 0 & -v_3 & v_2 \\ v_3 & 0 & -v_1 \\ -v_2 & v_1 & 0 \end{pmatrix} w$
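The skew-symmetric "cross-product matrix" above can be checked in numpy (an added sketch, not from the slides):

```python
import numpy as np

def skew(v):
    """The 3x3 matrix [v]_x such that [v]_x @ w == np.cross(v, w)."""
    return np.array([[    0, -v[2],  v[1]],
                     [ v[2],     0, -v[0]],
                     [-v[1],  v[0],     0]])

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
assert np.allclose(skew(v) @ w, np.cross(v, w))

# The result is orthogonal to both inputs.
assert np.isclose(np.dot(np.cross(v, w), v), 0.0)
assert np.isclose(np.dot(np.cross(v, w), w), 0.0)
```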
Orthogonal matrices
A matrix $A$ ($n \times n$) is orthogonal if $A^{-1} = A^T$.
It follows that $AA^T = A^T A = I$.
The rows of $A$ are orthonormal vectors! Proof: let $v_i$ denote the rows of $A$; then $(AA^T)_{ij} = \langle v_i, v_j \rangle$, and $AA^T = I$ gives $\langle v_i, v_j \rangle = \delta_{ij}$:
$\langle v_i, v_i \rangle = 1 \Rightarrow \|v_i\| = 1$; $\;\langle v_i, v_j \rangle = 0$ for $i \neq j$.
(The same argument with $A^T A$ shows the columns are orthonormal too.)
Orthogonal operators
$A$ is an orthogonal matrix $\;\Leftrightarrow\;$ $A$ represents a linear operator that preserves inner products (i.e., preserves lengths and angles):
$\langle Av, Aw \rangle = (Av)^T (Aw) = v^T A^T A\, w = v^T w = \langle v, w \rangle$
Therefore, $\|Av\| = \|v\|$ and $\angle(Av, Aw) = \angle(v, w)$.
Orthogonal operators – example
Rotation by $\alpha$ around the $z$-axis in $\mathbb{R}^3$:
$R_z(\alpha) = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix}$
In fact, any orthogonal $3 \times 3$ matrix represents a rotation around some axis and/or a reflection:
$\det A = +1$ – rotation only; $\;\det A = -1$ – with reflection.
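An added numpy sketch verifying orthogonality, length preservation, and the determinant criterion:

```python
import numpy as np

def Rz(a):
    """Rotation by angle a around the z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c,  -s,  0.0],
                     [s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

A = Rz(0.7)
# Orthogonal: A^T A = I, and lengths are preserved.
assert np.allclose(A.T @ A, np.eye(3))
v = np.array([1.0, 2.0, 3.0])
assert np.isclose(np.linalg.norm(A @ v), np.linalg.norm(v))

print(np.linalg.det(A))               # +1.0 -> rotation only
M = A @ np.diag([1.0, 1.0, -1.0])     # compose with a reflection
print(np.linalg.det(M))               # -1.0 -> involves a reflection
```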
Eigenvectors and eigenvalues
Let $A$ be a square $n \times n$ matrix. $v$ is an eigenvector of $A$ if:
$Av = \lambda v$ ($\lambda$ is a scalar), $\;v \neq 0$
The scalar $\lambda$ is called an eigenvalue.
$Av = \lambda v \;\Rightarrow\; A(\mu v) = \lambda(\mu v)$, so $\mu v$ is also an eigenvector.
$Av = \lambda v,\; Aw = \lambda w \;\Rightarrow\; A(v + w) = \lambda(v + w)$
Therefore, the eigenvectors belonging to the same $\lambda$ form a linear subspace.
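In practice eigenpairs are computed numerically; a short numpy illustration (not from the slides) of the definition:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lams, V = np.linalg.eig(A)    # eigenvalues, eigenvectors (as columns)

for lam, v in zip(lams, V.T):
    assert np.allclose(A @ v, lam * v)          # A v = lambda v
    # Any scalar multiple of v is also an eigenvector:
    assert np.allclose(A @ (5 * v), lam * (5 * v))

print(lams)   # 3 and 1 (order may vary)
```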
Eigenvectors and eigenvalues
An eigenvector spans an axis that is invariant under $A$ – it remains the same under the transformation. Example – the axis of rotation:
[figure: a rotation about an axis through $O$; the axis is an eigenvector of the rotation transformation]
Finding eigenvalues
For which $\lambda$ is there a non-zero solution to $Ax = \lambda x$?
$Ax = \lambda x \;\Leftrightarrow\; Ax - \lambda x = 0 \;\Leftrightarrow\; Ax - \lambda I x = 0 \;\Leftrightarrow\; (A - \lambda I)x = 0$
So, a non-trivial solution exists $\;\Leftrightarrow\; \det(A - \lambda I) = 0$.
$\Delta_A(\lambda) = \det(A - \lambda I)$ is a polynomial of degree $n$. It is called the characteristic polynomial of $A$. The roots of $\Delta_A$ are the eigenvalues of $A$. Therefore, there always exist $n$ complex eigenvalues (counted with multiplicity). If $n$ is odd, there is at least one real eigenvalue.
Example of computing $\Delta_A$
[worked example: the matrix and its characteristic polynomial did not survive extraction]
The polynomial cannot be factorized over $\mathbb{R}$. Over $\mathbb{C}$ it splits into linear factors.
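Since the slide's worked example was lost, here is a representative one (my choice of matrix, not necessarily the slide's): a 90° rotation, whose characteristic polynomial has no real roots.

```python
import numpy as np

# Assumed example: 2D rotation by 90 degrees (not the slide's matrix).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Characteristic polynomial: det(A - lambda*I) = lambda^2 + 1.
# No real roots, but over C it factors as (l - i)(l + i).
coeffs = np.poly(A)          # ~[1., 0., 1.]  ->  l^2 + 0*l + 1
print(np.roots(coeffs))      # [0.+1.j, 0.-1.j]
print(np.linalg.eigvals(A))  # the same complex eigenvalues, +-i
```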
Computing eigenvectors
To find the eigenvectors belonging to $\lambda$, solve the equation $(A - \lambda I)x = 0$. We get a subspace of solutions – the eigenspace of $\lambda$.
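One numerical way to do this (an added sketch; numpy has no direct null-space routine, so SVD is used to find it – a preview of what is coming):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam = 3.0                      # an eigenvalue of A

# Solve (A - lam*I) x = 0: the null space is spanned by the right
# singular vectors whose singular values are (numerically) zero.
M = A - lam * np.eye(2)
_, s, Vt = np.linalg.svd(M)
eigvecs = Vt[s < 1e-10]        # rows span the eigenspace of lam
print(eigvecs)                 # ~ +-[0.707, 0.707]

for v in eigvecs:
    assert np.allclose(A @ v, lam * v)
```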
Spectra and diagonalization
The set of all the eigenvalues of $A$ is called the spectrum of $A$. $A$ is diagonalizable if $A$ has $n$ independent eigenvectors. Then:
$A \begin{pmatrix} | & & | \\ v_1 & \cdots & v_n \\ | & & | \end{pmatrix} = \begin{pmatrix} | & & | \\ v_1 & \cdots & v_n \\ | & & | \end{pmatrix} \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix}$
i.e., $AV = VD$.
Spectra and diagonalization
Therefore, $A = VDV^{-1}$, where $D$ is diagonal: $A$ represents a scaling along the eigenvector axes!
$A = \begin{pmatrix} | & & | \\ v_1 & \cdots & v_n \\ | & & | \end{pmatrix} \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix} \begin{pmatrix} | & & | \\ v_1 & \cdots & v_n \\ | & & | \end{pmatrix}^{-1}$
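An added numpy sketch of diagonalization, checking $AV = VD$ and $A = VDV^{-1}$:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lams, V = np.linalg.eig(A)     # eigenvalues 5 and 2: diagonalizable
D = np.diag(lams)

assert np.allclose(A @ V, V @ D)            # A V = V D
# Hence A = V D V^{-1}: a scaling along the eigenvector axes.
assert np.allclose(A, V @ D @ np.linalg.inv(V))
```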
Spectra and diagonalization
$A$ is called normal if $AA^T = A^T A$. Important example: symmetric matrices, $A = A^T$. It can be proved that normal $n \times n$ matrices have exactly $n$ linearly independent eigenvectors (over $\mathbb{C}$). If $A$ is symmetric, all eigenvalues of $A$ are real, and it has $n$ independent real orthonormal eigenvectors.
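For symmetric matrices numpy provides a dedicated routine; a short added check of these claims:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.allclose(A, A.T)               # symmetric, hence normal

lams, V = np.linalg.eigh(A)              # specialized for symmetric A
print(lams)                              # all eigenvalues are real
assert np.allclose(V.T @ V, np.eye(3))   # eigenvectors are orthonormal
assert np.allclose(A, V @ np.diag(lams) @ V.T)   # A = V D V^T
```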
Spectra and diagonalization
[figure: the symmetric case $A = VDV^T$ – $V$ is an orthonormal basis change (rotation or reflection), $D$ scales along its axes, then the basis is changed back]
Why SVD…
A diagonalizable matrix is essentially a scaling. Most matrices are not diagonalizable – they do other things along with scaling (such as rotation). So, to understand how general matrices behave, eigenvalues alone are not enough. SVD tells us how general linear transformations behave, and other things…
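A closing sketch (added, not from the slides): a shear is not a pure scaling in any eigenbasis, yet SVD still factors it into rotation, scaling, and rotation.

```python
import numpy as np

# A shear: eigenvalue 1 is repeated but has only one eigenvector
# direction, so the matrix is not diagonalizable.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

U, s, Vt = np.linalg.svd(A)    # A = U @ diag(s) @ Vt
assert np.allclose(A, U @ np.diag(s) @ Vt)
# U and Vt are orthogonal (rotations/reflections); s >= 0 scales.
assert np.allclose(U.T @ U, np.eye(2))
assert np.allclose(Vt @ Vt.T, np.eye(2))
print(s)   # singular values of the shear, ~[1.618, 0.618]
```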
See you next time