Advanced Computer Graphics Spring 2014 K. H. Ko School of Mechatronics Gwangju Institute of Science and Technology
2 Today’s Topics Linear Algebra SVD Affine Algebra
3 Singular Value Decomposition The SVD is a highlight of linear algebra. Typical applications of SVD: solving a system of linear equations; compression of a signal, an image, etc. The SVD approach can give an optimal low-rank approximation of a given matrix A. Ex) Replace a 256 by 512 pixel matrix by a matrix of rank one: a column times a row.
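As a rough sketch (not from the slides), the low-rank idea can be tried with NumPy; the 256 by 512 matrix below is random data standing in for real pixel values.

```python
import numpy as np

# Random data standing in for a 256-by-512 grayscale image.
rng = np.random.default_rng(0)
A = rng.random((256, 512))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rank-one approximation: sigma_1 times (one column of U) times (one row of V^T).
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])

# More generally, keeping the k largest singular values gives the best rank-k approximation.
k = 20
Ak = (U[:, :k] * s[:k]) @ Vt[:k, :]

print(np.linalg.norm(A - A1), np.linalg.norm(A - Ak))  # the error shrinks as k grows
```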
4 Singular Value Decomposition Overview of SVD A is any m by n matrix, square or rectangular. We will diagonalize it. Its row and column spaces are r-dimensional. We choose special orthonormal bases v_1, …, v_r for the row space and u_1, …, u_r for the column space. For those bases, we want each Av_i to be in the direction of u_i. In matrix form, these equations Av_i = σ_i u_i become AV = UΣ, or A = UΣV^T. This is the SVD.
5 Singular Value Decomposition The Bases and the SVD Start with a 2 by 2 matrix of rank 2. This matrix A is invertible, and its row space is the plane R^2. We want v_1 and v_2 to be perpendicular unit vectors, an orthonormal basis. We also want Av_1 and Av_2 to be perpendicular. Then the unit vectors u_1 and u_2 in the directions of Av_1 and Av_2 are orthonormal.
6 Singular Value Decomposition The Bases and the SVD We are aiming for orthonormal bases that diagonalize A. When the inputs are v_1 and v_2, the outputs are Av_1 and Av_2. We want those to line up with u_1 and u_2, respectively. The basis vectors have to give Av_1 = σ_1 u_1 and also Av_2 = σ_2 u_2. The singular values σ_1 and σ_2 are the lengths |Av_1| and |Av_2|.
7 Singular Value Decomposition The Bases and the SVD With v_1 and v_2 as the columns of V and u_1 and u_2 as the columns of U, the equations Av_i = σ_i u_i become, in matrix notation, AV = UΣ, or U^-1 AV = Σ, or U^T AV = Σ. Σ contains the singular values, which are different from the eigenvalues.
8 Singular Value Decomposition In SVD, U and V must be orthogonal matrices: their columns are orthonormal bases, so V^T V = I, V^T = V^-1, and U^T = U^-1. This is the new factorization of A: orthogonal times diagonal times orthogonal.
9 Singular Value Decomposition There is a way to remove U and see V by itself: multiply A^T times A. A^T A = (UΣV^T)^T (UΣV^T) = VΣ^T U^T UΣV^T = VΣ^T ΣV^T, since U^T U = I. This becomes an ordinary diagonalization of the crucial symmetric matrix A^T A, whose eigenvalues are σ_1^2, σ_2^2. The columns of V are the eigenvectors of A^T A. <- This is how we find V.
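A short NumPy check of this connection, using an arbitrary illustrative 2 by 2 matrix (any invertible matrix would do):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])                 # illustrative 2-by-2 matrix

# Diagonalize the symmetric matrix A^T A.
evals, evecs = np.linalg.eigh(A.T @ A)     # eigenvalues in ascending order
order = np.argsort(evals)[::-1]            # reorder descending, matching the SVD convention
sigmas = np.sqrt(evals[order])             # singular values sigma_i = sqrt(eigenvalues)
V = evecs[:, order]                        # columns of V = eigenvectors of A^T A

U2, s2, Vt2 = np.linalg.svd(A)
print(sigmas, s2)                          # the same singular values
print(V, Vt2.T)                            # the same V, up to the sign of each column
```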
10 Singular Value Decomposition Working Example
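The working example on the original slide is an image and is not reproduced in this text; the following stand-in computes the SVD of a small made-up matrix and multiplies the factors back together.

```python
import numpy as np

A = np.array([[ 2.0, 2.0],
              [-1.0, 1.0]])                      # illustrative matrix only

U, s, Vt = np.linalg.svd(A)
print("U =\n", U)
print("singular values:", s)                     # here 2*sqrt(2) and sqrt(2)
print("V^T =\n", Vt)
print("U Sigma V^T =\n", U @ np.diag(s) @ Vt)    # reproduces A
```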
11 Singular Value Decomposition In many cases where Gaussian elimination and LU decomposition fail to give satisfactory results, SVD can diagnose for you precisely what the problem is. In some cases, SVD also gives you a useful numerical answer, although it is not necessarily "THE" answer.
12 Singular Value Decomposition Any M × N matrix A can be written as the product of an M × N column-orthogonal matrix U, an N × N diagonal matrix W with positive or zero elements (the singular values), and the transpose of an N × N orthogonal matrix V: A = U W V^T. U and V are each orthogonal in the sense that their columns are orthonormal.
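These shapes correspond to NumPy's "thin" SVD; a sketch (not the Numerical Recipes routine itself), with an arbitrary random matrix:

```python
import numpy as np

M, N = 6, 3
A = np.random.default_rng(1).random((M, N))

# full_matrices=False returns the shapes described on the slide:
# U is M-by-N with orthonormal columns, s holds the N diagonal entries of W,
# and Vt is the transpose of the N-by-N orthogonal matrix V.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(U.shape, s.shape, Vt.shape)           # (6, 3) (3,) (3, 3)
print(np.allclose(U.T @ U, np.eye(N)))      # columns of U are orthonormal
print(np.allclose(U @ np.diag(s) @ Vt, A))  # A is recovered
```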
13 Singular Value Decomposition The decomposition can always be done, no matter how singular the matrix is. In "Numerical Recipes in C", there is a routine called "svdcmp" that performs SVD on an arbitrary matrix A, replacing it by U and giving back W and V separately.
14 Singular Value Decomposition If the matrix A is square, U, V and W are all square matrices of the same size, and their inverses are trivial to compute. U and V are orthogonal, so their inverses are equal to their transposes. W is diagonal, so its inverse is the diagonal matrix whose elements are the reciprocals of the elements σ_j.
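A sketch of this inverse, A^-1 = V [diag(1/σ_j)] U^T, on an illustrative nonsingular matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                  # illustrative nonsingular matrix

U, s, Vt = np.linalg.svd(A)
A_inv = Vt.T @ np.diag(1.0 / s) @ U.T       # A^-1 = V diag(1/sigma_j) U^T
print(np.allclose(A_inv, np.linalg.inv(A))) # True
```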
15 Singular Value Decomposition The only thing that can go wrong with this inverse computation is for one of the σ_j's to be zero, or to be so small that its value is dominated by roundoff error and is therefore unknowable. If more than one of the singular values has such a problem, then the matrix is even more singular. So, SVD gives you a clear diagnosis of the situation.
16 Singular Value Decomposition Condition number of a matrix: the ratio of the largest (in magnitude) of the σ_j's to the smallest of the σ_j's. A matrix is singular if its condition number is infinite. A matrix is ill-conditioned if its condition number is too large, i.e., if its reciprocal approaches the machine's floating-point precision.
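A small sketch of that definition; the nearly singular matrix below is made up for illustration:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-12]])          # made-up nearly singular matrix

s = np.linalg.svd(A, compute_uv=False)      # singular values only
cond = s.max() / s.min()                    # ratio of largest to smallest sigma_j
print(cond, np.linalg.cond(A))              # the same number, computed two ways
print(1.0 / cond, np.finfo(float).eps)      # ill-conditioned when these become comparable
```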
17 Singular Value Decomposition For singular matrices, the concepts of nullspace and range are important. Consider A·x = b. If A is singular, there is some subspace of x, called the nullspace, that is mapped to zero, i.e. A·x = 0. The dimension of the nullspace (the number of linearly independent vectors x satisfying Ax = 0) is called the nullity of A.
18 Singular Value Decomposition For singular matrices, the concepts of nullspace and range are important. Consider A·x = b. There is also some subspace of b that can be reached by A, i.e. for each such b there exists some x that is mapped there. This subspace of b is called the range of A. The dimension of the range is called the rank of A.
19 Singular Value Decomposition If A is nonsingular, then its range is the whole vector space, so its rank is N. If A is singular, then the rank is less than N. In either case, for an N × N matrix, the rank plus the nullity of the matrix equals N. What has this to do with SVD?
20 Singular Value Decomposition SVD explicitly constructs orthonormal bases for the nullspace and range of a matrix. The columns of U whose same-numbered elements σ_j are nonzero are an orthonormal set of basis vectors that span the range. The columns of V whose same-numbered elements σ_j are zero are an orthonormal basis for the nullspace.
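A sketch of extracting both bases from the factors, using a deliberately rank-deficient example matrix:

```python
import numpy as np

# Rank-deficient example: the third column is the sum of the first two.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

U, s, Vt = np.linalg.svd(A)
tol = np.finfo(float).eps * max(A.shape) * s.max()   # treat sigma_j below this as zero

range_basis = U[:, s > tol]        # columns of U paired with nonzero sigma_j
null_basis = Vt[s <= tol, :].T     # columns of V paired with zero sigma_j
print(range_basis.shape[1], null_basis.shape[1])     # rank 2, nullity 1
print(np.allclose(A @ null_basis, 0))                # A maps the nullspace basis to zero
```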
21 Singular Value Decomposition Consider solving a set of simultaneous linear equations Ax = b with a singular matrix A. SVD immediately solves the set of homogeneous equations, i.e. b = 0: any column of V whose corresponding σ_j is zero is a solution. When the vector b on the right-hand side is not zero, the most important question is whether it lies in the range of A or not. If it does, the singular set of equations does have a solution x -> in fact, more than one solution.
22 Singular Value Decomposition When the vector b on the right-hand side is not zero, the most important question is whether it lies in the range of A or not. If it does, the singular set of equations does have a solution x -> in fact, more than one solution. If we want to single out one particular member of this solution set, we may want to pick the one with the smallest length |x|^2. Simply replace 1/σ_j by zero if σ_j = 0, then compute x = V [diag(1/σ_j)] (U^T b).
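A sketch of that recipe; the singular matrix and right-hand side below are illustrative, with b chosen to lie in the range of A:

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])              # singular: column 3 = column 1 + column 2
b = np.array([1.0, 1.0, 2.0])                # = A @ [1, 1, 0], so b lies in the range of A

U, s, Vt = np.linalg.svd(A)
tol = np.finfo(float).eps * max(A.shape) * s.max()
s_inv = np.where(s > tol, 1.0 / s, 0.0)      # replace 1/sigma_j by zero when sigma_j is zero

x = Vt.T @ (s_inv * (U.T @ b))               # x = V diag(1/sigma_j) U^T b
print(x)                                     # the solution of smallest |x|^2
print(np.allclose(A @ x, b))                 # and it does satisfy A x = b
```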
23 Singular Value Decomposition If b is not in the range of the singular matrix A, then Ax = b has no solution. But the same formula can still be used to construct a "solution" vector x: namely, it gives the closest possible solution in the least-squares sense.
24 Singular Value Decomposition Numerically, the far more common situation is that some of the singular values are very small but nonzero, so the matrix is ill-conditioned. The direct solution methods of LU decomposition or Gaussian elimination may actually give a formal solution to the problem -> no zero pivot is encountered. But algebraic cancellation due to the limited precision may make it a very poor approximation to the true solution.
25 Singular Value Decomposition In such cases, the solution vector x obtained by zeroing the small singular values and then using x = V [diag(1/σ_j)] (U^T b) is very often better than the direct-method solution. Zeroing a singular value corresponds to throwing away one linear combination of the equations, but its contribution is small, so discarding it leads to a good approximation. SVD cannot be applied blindly: you should decide what threshold is appropriate.
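A sketch of the ill-conditioned case; the matrix below is constructed, purely for illustration, to have one tiny but nonzero singular value, and the cutoff value is an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
# Build a 4-by-4 matrix whose smallest singular value is tiny but nonzero.
Q1, _ = np.linalg.qr(rng.random((4, 4)))
Q2, _ = np.linalg.qr(rng.random((4, 4)))
A = Q1 @ np.diag([3.0, 2.0, 1.0, 1e-14]) @ Q2.T
b = rng.random(4)

x_direct = np.linalg.solve(A, b)             # a formal solution: no zero pivot appears

U, s, Vt = np.linalg.svd(A)
threshold = 1e-10 * s.max()                  # the threshold is a judgment call
s_inv = np.where(s > threshold, 1.0 / s, 0.0)
x_svd = Vt.T @ (s_inv * (U.T @ b))           # the tiny singular value has been zeroed

print(np.linalg.norm(x_direct))              # huge: dominated by the 1/1e-14 direction
print(np.linalg.norm(x_svd))                 # modest: that direction was discarded
```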
26 Singular Value Decomposition Even when you do not need to zero any singular values for computational reasons, you should at least take note of any that are unusually small: their corresponding columns in V are linear combinations of the x's which are insensitive to your data.
27 Factorization Based on Elimination Triangular factorization without row exchange: A = LDU. L: lower triangular. U: upper triangular. D: diagonal; its elements are nonzero and come directly from elimination. The factorization succeeds only if the pivots are not zero.
28 Factorization Based on Elimination Triangular factorization with row exchange: PA = LDU. P: permutation matrix that reorders the rows to achieve nonzero pivots. L: lower triangular. U: upper triangular. D: diagonal with nonzero elements.
29 Factorization Based on Elimination Reduction to echelon form: PA = LU. Every rectangular matrix A can be changed by row operations into a matrix U that has zeros below its pivots; the last m-r rows of U are entirely zero. L: a square matrix. D and U are combined into a single rectangular matrix with the same shape as A.
30 Factorization Based on Eigenvalues Diagonalization of A: A = SΛS^-1, if A has a full set of n linearly independent eigenvectors. They are the columns of S, and S^-1 AS is the diagonal matrix of eigenvalues. When A^T A = AA^T, the eigenvectors can be chosen orthonormal and S becomes Q, namely A = QΛQ^-1.
31 Factorization Based on Eigenvalues Jordan form (Jordan decomposition): A = MJM^-1. Every square matrix A is similar to a Jordan matrix J, with the eigenvalues J_ii = λ_i on the diagonal. J has one diagonal Jordan block (λ_i on its diagonal and 1's just above the diagonal) for each independent eigenvector.
32 Factorization Based on A^T A Orthogonalization of the columns of A: A = QR. A must have independent columns a_1, …, a_n. Q has orthonormal columns q_1, …, q_n, and R is upper triangular.
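A minimal sketch of the QR factorization via NumPy, on an arbitrary illustrative matrix with independent columns:

```python
import numpy as np

A = np.random.default_rng(3).random((5, 3))  # random columns are (almost surely) independent
Q, R = np.linalg.qr(A)                       # Q: 5-by-3 orthonormal columns, R: 3-by-3 upper triangular
print(np.allclose(Q.T @ Q, np.eye(3)))       # True
print(np.allclose(Q @ R, A))                 # True
```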
33 Factorization Based on A^T A Singular value decomposition: A = UΣV^T. Polar decomposition: A = QB. A is split into an orthogonal matrix Q and a symmetric positive definite matrix B. B is the positive definite square root of A^T A, and Q = AB^-1.
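The polar factors can be read off from the SVD: Q = UV^T and B = VΣV^T. A sketch, on an illustrative invertible matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                   # illustrative invertible matrix

U, s, Vt = np.linalg.svd(A)
Q = U @ Vt                                   # orthogonal factor
B = Vt.T @ np.diag(s) @ Vt                   # symmetric positive definite factor

print(np.allclose(Q @ B, A))                 # A = Q B
print(np.allclose(Q.T @ Q, np.eye(2)))       # Q is orthogonal
print(np.allclose(B @ B, A.T @ A))           # B is the square root of A^T A
```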
34 Factorization Based on A^T A
35 Affine Algebra Linear algebra is the study of vectors and vector spaces. A vector was treated as a quantity with direction and magnitude: two vectors with the same direction and magnitude are the same irrespective of their positions. What if the location of the vector, namely where its initial point is, is very important? In physics applications, the location sometimes matters.
36 Affine Algebra There needs to be a distinction between points and vectors -> the essence of affine algebra. Let V be a vector space of dimension n. Let A be a set of elements that are called points.
37 Affine Algebra Let V be a vector space of dimension n. Let A be a set of elements that are called points. Then A is referred to as an n-dimensional affine space whenever the following conditions are met: For each ordered pair of points P, Q ∈ A, there is a unique vector in V called the difference vector and denoted by Δ(P,Q). For each point P ∈ A and v ∈ V, there is a unique point Q ∈ A such that v = Δ(P,Q). For any three points P, Q, R ∈ A, it must be that Δ(P,Q) + Δ(Q,R) = Δ(P,R).
38 Affine Algebra From the formal definition for an affine space, we have: Δ(P,P) = 0. Δ(P,Q) = -Δ(Q,P). If Δ(P_1,Q_1) = Δ(P_2,Q_2), then Δ(P_1,P_2) = Δ(Q_1,Q_2).
39 Affine Algebra Coordinate Systems Let A be an n-dimensional affine space. Let a fixed point O ∈ A be labeled as the origin, and let {v_1,…,v_n} be a basis for V. Then the set {O; v_1,…,v_n} is called an affine coordinate system. Each point P ∈ A can be written uniquely as Δ(O,P) = a_1 v_1 + … + a_n v_n; the numbers (a_1,…,a_n) are called the affine coordinates of P relative to the specified coordinate system.
40 Affine Algebra Coordinate Systems To change coordinates from one system to another, consider two coordinate systems {O_1; u_1,…,u_n} and {O_2; v_1,…,v_n} for A. A point P ∈ A has affine coordinates (a_1,…,a_n) in the first system and (b_1,…,b_n) in the second. The origin O_2 has affine coordinates (c_1,…,c_n) in the first coordinate system.
41 Affine Algebra Coordinate Systems The two bases are related by a change of basis transformation u_i = Σ_j m_ji v_j.
42 Affine Algebra Subspaces Let A be an affine space. An affine subspace of A is a set A_1 ⊆ A such that V_1 = {Δ(P,Q) ∈ V : P, Q ∈ A_1} is a subspace of V.
43 Affine Algebra Transformation Definition of affine transformations for affine spaces: Let A be an affine space with vector space V and vector difference operator Δ_A. Let B be an affine space with vector space W and vector difference operator Δ_B. An affine transformation is a function T: A -> B such that Δ_A(P_1,Q_1) = Δ_A(P_2,Q_2) implies that Δ_B(T(P_1),T(Q_1)) = Δ_B(T(P_2),T(Q_2)). The function L: V -> W defined by L(Δ_A(P,Q)) = Δ_B(T(P),T(Q)) is a linear transformation.
44 Affine Algebra Transformation If O_A is selected as the origin for A and if O_B = T(O_A) is selected as the origin for B, then the affine transformation is of the form T(O_A + x) = T(O_A) + L(x) = O_B + L(x). If B = A and W = V, let b = O_B - O_A; then T(O_A + x) = O_A + b + L(x). For a fixed origin O_A and for a specific matrix representation M of the linear transformation L, we have y = Mx + b. Rigid motion: M is an orthogonal matrix -> a rotation.
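A small sketch of such a transformation y = Mx + b; the rotation angle and translation below are chosen only for illustration:

```python
import numpy as np

theta = np.pi / 6                                   # illustrative rotation angle
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])     # orthogonal matrix: a rotation
b = np.array([2.0, -1.0])                           # translation, b = O_B - O_A

def transform(x):
    """Rigid motion y = M x + b applied to affine point coordinates x."""
    return M @ x + b

P = np.array([1.0, 0.0])
Q = np.array([0.0, 1.0])
# The induced linear map acts on difference vectors; distances between points
# are preserved because M is orthogonal.
print(np.linalg.norm(transform(P) - transform(Q)), np.linalg.norm(P - Q))
```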
45 Affine Algebra Barycentric Coordinates An operation on two points: a weighted average of two points, R = (1-t)P + tQ. R is said to be a barycentric combination of P and Q with barycentric coordinates 1-t and t. The sum of the barycentric coordinates is one <- a necessity for a pair of numbers to be barycentric coordinates. For 0 ≤ t ≤ 1, R is a point on the line segment connecting P and Q.
46 Affine Algebra Barycentric Coordinates Triangles Given three noncollinear points P, Q and R, P + su + tv = P + s(Q - P) + t(R - P) is a point. Then B = (1-s-t)P + sQ + tR is a barycentric combination of P, Q and R with barycentric coordinates c_1 = 1-s-t, c_2 = s, c_3 = t. The coordinates cannot all be simultaneously negative, since the sum of three negative numbers cannot be 1. Barycentric coordinates are a useful tool for describing the location of a point in a triangle.
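A sketch of recovering (c_1, c_2, c_3) for a point in a triangle, using made-up vertices:

```python
import numpy as np

# Illustrative triangle with noncollinear vertices P, Q, R.
P = np.array([0.0, 0.0])
Q = np.array([4.0, 0.0])
R = np.array([0.0, 3.0])

def barycentric(B):
    """Solve B = P + s(Q - P) + t(R - P) for (s, t) and return (1-s-t, s, t)."""
    s, t = np.linalg.solve(np.column_stack((Q - P, R - P)), B - P)
    return 1.0 - s - t, s, t

c1, c2, c3 = barycentric(np.array([1.0, 1.0]))
print(c1, c2, c3, c1 + c2 + c3)                        # the coordinates sum to one
print(np.allclose(c1 * P + c2 * Q + c3 * R, [1.0, 1.0]))
```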
47 Affine Algebra Barycentric Coordinates Tetrahedra Given four noncoplanar points P_i (0 ≤ i ≤ 3), a barycentric combination of the points is B = (1-c_1-c_2-c_3)P_0 + c_1 P_1 + c_2 P_2 + c_3 P_3. Simplices A simplex is formed by n+1 points P_i, 0 ≤ i ≤ n, such that the set of vectors {P_i - P_0}, 1 ≤ i ≤ n, is linearly independent. A barycentric combination of the points is B = (1 - c_1 - … - c_n)P_0 + c_1 P_1 + … + c_n P_n.
48 Q & A?