Advanced Computer Graphics, Spring 2014
K. H. Ko, School of Mechatronics, Gwangju Institute of Science and Technology
Today's Topics: Linear Algebra (Vector Spaces, Determinants, Eigenvalues and Eigenvectors, SVD)
Linear Combinations, Spans and Subspaces
A linear combination of vectors x_1, ..., x_n ∈ V with scalars c_1, ..., c_n ∈ R is c_1 x_1 + ... + c_n x_n.
Let V be a vector space and let A ⊂ V. The span of A, Span[A], is the set of all finite linear combinations of vectors in A.
Linear Combinations, Spans and Subspaces
If a nonempty subset S of a vector space V is itself a vector space, S is said to be a subspace of V.
Linear Independence and Bases
Let (V, +, ·) be a vector space and let x_1, ..., x_n ∈ V. The vectors are linearly dependent if c_1 x_1 + ... + c_n x_n = 0 for some set of scalars c_1, ..., c_n that are not all zero. If there is no such set of scalars, the vectors are linearly independent.
Linear Independence and Bases
Let V be a vector space with subset A ⊂ V. The set A is a basis for V if A is linearly independent and Span[A] = V; equivalently, the cardinality |A| is the smallest possible for a spanning set.
Suppose V has a basis B with |B| = n. If |A| > n, then A is linearly dependent. If Span[A] = V, then |A| ≥ n. Every basis of V has cardinality n, and V is said to have dimension dim(V) = n.
Inner Products, Length, Orthogonality and Projection
Inner product: ⟨x, y⟩ = x^T y = x_1 y_1 + ... + x_n y_n = |x||y| cos θ.
The length of x: |x| = ⟨x, x⟩^(1/2).
Orthogonality: two vectors x and y are orthogonal if and only if ⟨x, y⟩ = 0.
Projection is closely related to the inner product: the projection of v onto u is proj(v, u) = (⟨v, u⟩ / ⟨u, u⟩) u.
Inner Products, Length, Orthogonality and Projection
Projection: if d is not a unit vector, the projection is still well defined, because the formula proj(v, d) = (⟨v, d⟩ / ⟨d, d⟩) d divides by ⟨d, d⟩ = |d|^2.
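As a quick check of the projection formula, here is a minimal NumPy sketch (NumPy is assumed tooling, not part of the slides; the vectors are arbitrary illustrations). It projects v onto the direction d and verifies that the residual v - p is orthogonal to d.

    import numpy as np

    def project(v, d):
        # Projection of v onto the line spanned by d; d need not be a unit vector,
        # since the formula divides by <d, d> = |d|^2.
        return (np.dot(v, d) / np.dot(d, d)) * d

    v = np.array([3.0, 4.0])
    d = np.array([2.0, 0.0])
    p = project(v, d)                      # [3.0, 0.0]
    print(p, np.dot(v - p, d))             # residual is orthogonal to d: dot product is 0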
Inner Products, Length, Orthogonality and Projection
A set of nonzero, mutually orthogonal vectors (each pair is orthogonal) must be linearly independent. If all the vectors in the set also have unit length, the set is said to be an orthonormal set of vectors. It is always possible to construct an orthonormal set from any linearly independent set of vectors via Gram-Schmidt orthonormalization.
Inner Products, Length, Orthogonality and Projection
Gram-Schmidt orthonormalization is an iterative process: each new vector has its projections onto the previously constructed directions subtracted off and is then normalized to unit length.
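A minimal sketch of that iterative process (NumPy assumed for illustration; the slide's own formulas are not reproduced here): each vector has its components along the previously accepted directions removed and is then normalized.

    import numpy as np

    def gram_schmidt(vectors):
        # Orthonormalize a linearly independent list of vectors.
        basis = []
        for v in vectors:
            w = np.array(v, dtype=float)
            for q in basis:
                w = w - np.dot(w, q) * q   # remove the component along each previous direction
            basis.append(w / np.linalg.norm(w))
        return np.array(basis)

    Q = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
    print(np.round(Q @ Q.T, 10))           # identity matrix: the rows are orthonormal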
Cross Product, Triple Products
Cross product of the vectors u and v: u × v. The cross product is not commutative but anti-commutative: v × u = -(u × v).
Triple scalar product of the vectors u, v, w: u · (v × w).
Triple vector product of the vectors u, v, w: u × (v × w) = s v + t w, which lies in the v-w plane. The coefficients are determined by s = u · w and t = -(u · v).
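These identities are easy to verify numerically. The sketch below (NumPy assumed; the random vectors are purely illustrative) checks the triple vector product expansion with s = u · w and t = -(u · v), and evaluates the triple scalar product.

    import numpy as np

    rng = np.random.default_rng(0)
    u, v, w = rng.standard_normal((3, 3))

    volume = np.dot(u, np.cross(v, w))               # triple scalar product u . (v x w)
    lhs = np.cross(u, np.cross(v, w))                # triple vector product u x (v x w)
    rhs = np.dot(u, w) * v - np.dot(u, v) * w        # s v + t w
    print(volume, np.allclose(lhs, rhs))             # True: the result lies in the v-w plane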
Orthogonal Subspaces
Let U and V be subspaces of R^n. The subspaces are said to be orthogonal subspaces if ⟨x, y⟩ = 0 for every x ∈ U and every y ∈ V.
The orthogonal complement of U is U^⊥ = {x ∈ R^n : ⟨x, y⟩ = 0 for all y ∈ U}, the largest-dimension subspace of R^n that is orthogonal to U.
Rank
The rank of A, denoted r, is the number of pivots; it measures the true size of a matrix. Identical rows, or a row that is a combination of other rows (for example, row 3 a combination of rows 1 and 2), do not add to the rank.
A matrix A has full row rank if every row has a pivot (no zero rows). A matrix A has full column rank if every column has a pivot.
Kernel of A
Define the kernel or nullspace of A to be the set kernel(A) = {x ∈ R^m : Ax = 0}. A basis for kernel(A) is constructed by solving the system Ax = 0.
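A hedged sketch using SciPy (an assumption, not part of the slides; scipy.linalg.null_space returns an orthonormal basis of the nullspace). The matrix is an arbitrary rank-1 example.

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])        # rank 1, so kernel(A) has dimension 3 - 1 = 2
    N = null_space(A)                      # columns form an orthonormal basis of kernel(A)
    print(N.shape)                         # (3, 2)
    print(np.allclose(A @ N, 0.0))         # True: every basis vector satisfies Ax = 0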
Range of A
A can be written as a block matrix of n × 1 column vectors, A = [a_1 | ... | a_m], so the expression Ax is a linear combination of these column vectors. Treating A as a function A: R^m -> R^n, the range of the function is range(A) = {Ax : x ∈ R^m} = Span[a_1, ..., a_m].
Fundamental Theorem of Linear Algebra
If A is an n × m matrix with kernel(A) and range(A), and if A^T is the m × n transpose of A with kernel(A^T) and range(A^T), then
kernel(A) = range(A^T)^⊥
kernel(A)^⊥ = range(A^T)
kernel(A^T) = range(A)^⊥
kernel(A^T)^⊥ = range(A)
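The theorem can be spot-checked numerically. The sketch below (SciPy assumed; the matrix is an arbitrary illustration) verifies that kernel(A) and range(A^T) are orthogonal and that their dimensions add up to the dimension of the domain.

    import numpy as np
    from scipy.linalg import null_space, orth

    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 1.0]])
    K = null_space(A)                      # orthonormal basis of kernel(A), a subspace of R^3
    R = orth(A.T)                          # orthonormal basis of range(A^T), the row space
    print(np.allclose(K.T @ R, 0.0))       # True: kernel(A) is orthogonal to range(A^T)
    print(K.shape[1] + R.shape[1])         # 3: the two dimensions sum to dim(R^3)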
Projection and Least Squares
The projection p of a vector b ∈ R^n onto a line through the origin with direction a is p = a(a^T a)^{-1} a^T b. The line is a one-dimensional subspace. Projecting b onto a subspace S is equivalent to finding the point in S that is closest to b.
Projection and Least Squares
The construction of a projection onto a subspace is motivated by attempting to solve the system of linear equations Ax = b. A solution exists if and only if b ∈ range(A). If b is not in the range of A, an application might be satisfied with a vector x that is "close enough": find an x so that Ax - b is as close to the zero vector as possible, i.e., find the x that minimizes the squared length |Ax - b|^2. This is the least squares problem.
Projection and Least Squares
If the squared distance has a minimum of zero, any such x is an actual solution of the linear system. Geometrically, the minimization amounts to finding the point p ∈ range(A) that is closest to b, which can be obtained by projection.
Projection and Least Squares
There is always a point q ∈ range(A) = kernel(A^T)^⊥ such that the distance from b to q is a minimum. The quantity |Ax - b|^2 is minimized if and only if Ax - b ∈ kernel(A^T), that is, A^T(Ax - b) = 0. The equations A^T A x = A^T b are the normal equations corresponding to the linear system Ax = b. The projection of b onto range(A) is p = Ax = A(A^T A)^{-1} A^T b.
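A minimal least-squares sketch in NumPy (assumed tooling; A and b are illustrative, with A of full column rank so that A^T A is invertible). It solves the normal equations, confirms that the residual lies in kernel(A^T), and compares against NumPy's built-in solver.

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [1.0, 2.0]])
    b = np.array([1.0, 0.0, 2.0])          # b is not in range(A)

    x = np.linalg.solve(A.T @ A, A.T @ b)  # normal equations: A^T A x = A^T b
    p = A @ x                              # projection of b onto range(A)
    print(np.allclose(A.T @ (p - b), 0.0))                       # residual lies in kernel(A^T)
    print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # matches np.linalg.lstsq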
Linear Transformations
Let V and W be vector spaces. A function L: V -> W is said to be a linear transformation whenever
L(x + y) = L(x) + L(y) for all x, y ∈ V
L(cx) = cL(x) for all c ∈ R and for all x ∈ V
Determinants
A determinant is a scalar quantity associated with a square matrix.
Geometric interpretation: for a 2 × 2 matrix, the area of the parallelogram formed by the column vectors; for a 3 × 3 matrix, the volume of the parallelepiped formed by the column vectors.
How do we compute the determinant of a matrix A?
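As a quick numerical illustration of the geometric interpretation (NumPy assumed; np.linalg.det performs the computation, and the matrices are arbitrary examples):

    import numpy as np

    A = np.array([[3.0, 1.0],
                  [1.0, 2.0]])
    print(np.linalg.det(A))                # 5.0: area of the parallelogram with sides (3,1) and (1,2)

    B = np.diag([2.0, 3.0, 4.0])
    print(np.linalg.det(B))                # 24.0: volume of the 2 x 3 x 4 box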
Eigenvalues and Eigenvectors
Let A be an n × n matrix of complex-valued entries. The scalar λ ∈ C is said to be an eigenvalue of A if there is a nonzero vector x such that Ax = λx; in this case, x is said to be an eigenvector corresponding to λ. Geometrically, an eigenvector is a vector whose direction is not changed by the transformation.
Eigenvalues and Eigenvectors
Let λ be an eigenvalue of a matrix A. The eigenspace of λ is the set S_λ = {x ∈ C^n : Ax = λx}.
To find eigenvalues and eigenvectors, write Ax - λIx = 0, i.e., (A - λI)x = 0, and look for nonzero solutions x: solve the characteristic equation det(A - λI) = 0.
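A sketch of both routes for a small symmetric example (NumPy assumed; the matrix is illustrative): the roots of the characteristic polynomial det(A - λI) = 0 agree with the eigenvalues returned by np.linalg.eig.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    # det(A - lambda I) = lambda^2 - 4 lambda + 3, with roots 3 and 1.
    print(np.roots([1.0, -4.0, 3.0]))      # [3. 1.]

    vals, vecs = np.linalg.eig(A)          # eigenvalues and eigenvectors (as columns)
    print(vals)                            # 3 and 1 (order not guaranteed)
    x = vecs[:, 0]
    print(np.allclose(A @ x, vals[0] * x)) # True: Ax = lambda x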
Eigendecomposition for Symmetric Matrices
An n × n symmetric matrix with real-valued entries arises most frequently in applications.
Eigendecomposition for Symmetric Matrices
The eigenvalues of a real-valued symmetric matrix must be real-valued, and the corresponding eigenvectors are naturally real-valued. If λ_1 and λ_2 are distinct eigenvalues of A, then the corresponding eigenvectors x_1 and x_2 are orthogonal.
Eigendecomposition for Symmetric Matrices
If A is a square matrix, there always exists an orthogonal matrix Q such that Q^T A Q = U, where U is an upper triangular matrix. The diagonal entries of U are necessarily the eigenvalues of A. If A is symmetric and Q^T A Q = U, then U must be a diagonal matrix, and Q is the eigenvector matrix.
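A minimal check of the symmetric case (NumPy assumed; the matrix is an arbitrary symmetric example): np.linalg.eigh returns an orthogonal eigenvector matrix Q, and Q^T A Q is diagonal.

    import numpy as np

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])                  # symmetric
    vals, Q = np.linalg.eigh(A)                      # eigenvalues and orthonormal eigenvectors
    print(np.allclose(Q.T @ Q, np.eye(3)))           # True: Q is orthogonal
    print(np.allclose(Q.T @ A @ Q, np.diag(vals)))   # True: Q^T A Q is diagonal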
Eigendecomposition for Symmetric Matrices
The symmetric matrix A is positive, nonnegative, negative, or nonpositive definite if and only if its eigenvalues are, respectively, positive, nonnegative, negative, or nonpositive. The product of the n eigenvalues equals the determinant of A; the sum of the n eigenvalues equals the sum of the n diagonal entries of A (the trace).
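Both facts are easy to confirm numerically (NumPy assumed; the matrix is an arbitrary symmetric example):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])                       # symmetric
    vals = np.linalg.eigvalsh(A)
    print(np.isclose(vals.prod(), np.linalg.det(A))) # product of eigenvalues = det(A)
    print(np.isclose(vals.sum(), np.trace(A)))       # sum of eigenvalues = trace(A)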
Eigendecomposition for Symmetric Matrices
Eigenvectors x_1, ..., x_j that correspond to distinct eigenvalues are linearly independent. An n by n matrix that has n different eigenvalues must be diagonalizable.
Eigendecomposition for Symmetric Matrices
Let M be any invertible matrix. Then B = M^{-1} A M is similar to A. No matter which M we choose, A and B have the same eigenvalues, and if x is an eigenvector of A then M^{-1} x is an eigenvector of B.
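A sketch of the similarity claim (NumPy assumed; A and M are random illustrations, with M treated as invertible): B = M^{-1} A M has the same eigenvalues as A, and M^{-1} x is an eigenvector of B whenever x is an eigenvector of A.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))
    M = rng.standard_normal((3, 3))                  # assumed invertible
    B = np.linalg.inv(M) @ A @ M                     # B is similar to A

    eig_A, vecs_A = np.linalg.eig(A)
    eig_B = np.linalg.eigvals(B)
    print(np.allclose(np.sort_complex(eig_A), np.sort_complex(eig_B)))  # same eigenvalues

    x = vecs_A[:, 0]
    y = np.linalg.solve(M, x)                        # y = M^{-1} x
    print(np.allclose(B @ y, eig_A[0] * y))          # True: y is an eigenvector of B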
Singular Value Decomposition
The SVD is a highlight of linear algebra. Typical applications include solving a system of linear equations and compressing a signal, an image, etc. The SVD gives an optimal low-rank approximation of a given matrix A; for example, a 256 by 512 pixel image matrix can be replaced by a matrix of rank one, a column times a row.
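A hedged sketch of low-rank approximation via the SVD (NumPy assumed; a random 256 by 512 matrix stands in for the pixel image mentioned above): keeping only the largest singular value gives the best rank-one approximation, a column times a row.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((256, 512))              # stand-in for a 256 x 512 pixel image
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    k = 1                                            # rank one: a column times a row
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]      # best rank-k approximation of A
    print(np.linalg.matrix_rank(A_k))                # 1
    print(np.linalg.norm(A - A_k) / np.linalg.norm(A))   # relative approximation error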
Singular Value Decomposition
Overview of the SVD: A is any m by n matrix, square or rectangular, and we will diagonalize it. Its row and column spaces are both r-dimensional. We choose special orthonormal bases v_1, ..., v_r for the row space and u_1, ..., u_r for the column space such that each Av_i is in the direction of u_i. In matrix form, the equations Av_i = σ_i u_i become AV = UΣ, or A = UΣV^T. This is the SVD.
Singular Value Decomposition
The bases and the SVD: start with a 2 by 2 matrix of rank 2, so A is invertible and its row space is the plane R^2. We want v_1 and v_2 to be perpendicular unit vectors, an orthonormal basis, and we also want Av_1 and Av_2 to be perpendicular. Then the unit vectors u_1 and u_2 in the directions of Av_1 and Av_2 are orthonormal.
Singular Value Decomposition
We are aiming for orthonormal bases that diagonalize A. When the inputs are v_1 and v_2, the outputs are Av_1 and Av_2, and we want those to line up with u_1 and u_2. The basis vectors have to give Av_1 = σ_1 u_1 and Av_2 = σ_2 u_2. The singular values σ_1 and σ_2 are the lengths |Av_1| and |Av_2|.
Singular Value Decomposition
With v_1 and v_2 as the columns of V, the equations Av_i = σ_i u_i become, in matrix notation, AV = UΣ, or U^{-1}AV = Σ, or U^T AV = Σ. Σ contains the singular values, which are different from the eigenvalues.
Singular Value Decomposition
In the SVD, U and V must be orthogonal matrices: their columns are orthonormal bases, so V^T V = I, V^T = V^{-1}, and likewise U^T = U^{-1}. This is the new factorization of A: orthogonal times diagonal times orthogonal.
Singular Value Decomposition
There is a way to remove U and see V by itself: multiply A^T times A.
A^T A = (UΣV^T)^T (UΣV^T) = V Σ^T Σ V^T
This is an ordinary diagonalization of the crucial symmetric matrix A^T A, whose eigenvalues are σ_1^2, σ_2^2, ... The columns of V are the eigenvectors of A^T A; this is how we find V.
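A minimal sketch of that construction (NumPy assumed; the 2 by 2 matrix is an arbitrary illustration): diagonalize A^T A to get V and the σ_i^2, then form u_i = A v_i / σ_i and check that A = UΣV^T.

    import numpy as np

    A = np.array([[3.0, 0.0],
                  [4.0, 5.0]])
    vals, V = np.linalg.eigh(A.T @ A)                # eigenvalues of A^T A are sigma_i^2
    order = np.argsort(vals)[::-1]                   # sort singular values in decreasing order
    V = V[:, order]
    sigma = np.sqrt(vals[order])                     # [sqrt(45), sqrt(5)]
    U = (A @ V) / sigma                              # columns u_i = A v_i / sigma_i
    print(np.allclose(A, U @ np.diag(sigma) @ V.T))  # True: A = U Sigma V^T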
Singular Value Decomposition: Working Example
Q & A