1
Scientific Computing: Matrix Norms, Convergence, and Matrix Condition Numbers
2
Vector Norms
A vector norm is a quantity that measures how large a vector is (the magnitude of the vector). For a number x, we have |x| as a measurement of the magnitude of x. For a vector x, it is not clear what the "best" measurement of its size should be. Note: we will use bold-face type (x) to denote a vector.
3
Vector Norms
Example: x = (4, -1). The standard Pythagorean length of x is sqrt(4^2 + (-1)^2) = sqrt(17), about 4.12. This is one possible measurement of the size of x.
4
Vector Norms
Example: x = (4, -1). The "taxicab" length of x is |4| + |-1| = 5. This is another possible measurement of the size of x.
5
Vector Norms
Example: x = (4, -1). Yet another possible measurement of the size of x is max(|4|, |-1|) = 4.
6
Vector Norms
A vector norm is a quantity that measures how large a vector is (the magnitude of the vector). Definition: A vector norm is a function that takes a vector and returns a non-negative number. We denote the norm of a vector x by ||x||. The norm must satisfy:
– Triangle Inequality: ||x + y|| ≤ ||x|| + ||y||
– Scalar: ||αx|| = |α| ||x|| for any scalar α
– Positive: ||x|| ≥ 0, and ||x|| = 0 only when x is the zero vector.
7
Vector Norms
Our previous examples for vectors in R^n:
||x||_2 = (x_1^2 + x_2^2 + · · · + x_n^2)^(1/2)  (the Pythagorean length)
||x||_1 = |x_1| + |x_2| + · · · + |x_n|  (the taxicab length)
||x||_∞ = max(|x_1|, |x_2|, …, |x_n|)
All of these satisfy the three properties for a norm.
8
Vector Norms Example
9
Vector Norms
Definition: The L_p norm generalizes these three norms. For p ≥ 1, it is defined on R^n by
||x||_p = (|x_1|^p + |x_2|^p + · · · + |x_n|^p)^(1/p)
p = 1 gives the L_1 norm, p = 2 gives the L_2 norm, and the limit p → ∞ gives the L_∞ norm.
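As a quick illustration (a minimal MATLAB sketch, not part of the original slides, using the vector x = (4, -1) from the earlier examples), the three norms can be computed directly from the L_p formula and checked against Matlab's norm function:

% Minimal sketch: L_p norms of x = (4, -1) computed from the definition.
x = [4 -1];
n1 = sum(abs(x).^1)^(1/1)    % L_1 norm: 5
n2 = sum(abs(x).^2)^(1/2)    % L_2 norm: sqrt(17), about 4.1231
ninf = max(abs(x))           % L_inf norm: 4 (the limit as p -> infinity)
% These agree with norm(x,1), norm(x,2), and norm(x,inf).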
10
Distance
11
Which norm is best?
The answer depends on the application. The 1-norm and ∞-norm are good whenever one is analyzing sensitivity of solutions. The 2-norm is good for comparing distances of vectors. There is no one best vector norm!
12
Matlab Vector Norms
In Matlab, the norm function computes the L_p norms of vectors. Syntax: norm(x, p)
>> x = [ 3 4 -1 ];
>> n = norm(x,2)
n = 5.0990
>> n = norm(x,1)
n = 8
>> n = norm(x, inf)
n = 4
13
Matrix Norms
Definition: Given a vector norm ||x||, the matrix norm defined by the vector norm is given by
||A|| = max over x ≠ 0 of ||Ax|| / ||x||
What does a matrix norm represent? It represents the maximum "stretching" that A does to a vector x, i.e. how much larger ||Ax|| can be than ||x||.
14
Matrix Norm "Stretch"
Note that, since ||x|| is a scalar, we have ||Ax|| / ||x|| = ||A(x/||x||)||. Since z = x/||x|| is a unit vector, we see that the matrix norm is the maximum value of ||Az|| where z is on the unit ball in R^n. Thus, ||A|| represents the maximum "stretching" possible done by the action Ax.
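To make the "maximum stretch" picture concrete, here is a small MATLAB sketch (not from the slides, using a made-up matrix A): it samples random unit vectors z and compares the largest observed ||Az|| with Matlab's built-in matrix 2-norm.

% Sketch: estimate ||A|| (2-norm) as the maximum stretch of unit vectors.
A = [2 1; 0 3];                        % made-up example matrix
stretch = 0;
for k = 1:10000
    z = randn(2,1);
    z = z / norm(z,2);                 % a random point on the unit ball
    stretch = max(stretch, norm(A*z, 2));
end
stretch        % close to the true matrix norm
norm(A, 2)     % Matlab's matrix 2-norm, for comparison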
15
Matrix 1-Norm
Theorem A: The matrix norm corresponding to the 1-norm is the maximum absolute column sum:
||A||_1 = max over j of (|a_1j| + |a_2j| + · · · + |a_nj|)
Proof: From the previous slide, we have ||A||_1 = max ||Ax||_1 over vectors x with ||x||_1 = 1. Also,
||Ax||_1 = ||x_1 A_1 + x_2 A_2 + · · · + x_n A_n||_1 ≤ |x_1| ||A_1||_1 + · · · + |x_n| ||A_n||_1,
where A_j is the j-th column of A.
16
Matrix 1-Norm
Proof (continued): Then,
||Ax||_1 ≤ (max over j of ||A_j||_1) (|x_1| + · · · + |x_n|) = max over j of ||A_j||_1
for any unit vector x, so ||A||_1 ≤ max over j of ||A_j||_1. Let x be a vector with all zeroes, except a 1 in the spot where ||A_j||_1 is a max. Then, we get equality above. □
17
Matrix Norms
Theorem B: The matrix norm corresponding to the ∞-norm is the maximum absolute row sum:
||A||_∞ = max over i of (|a_i1| + |a_i2| + · · · + |a_in|)
Proof: similar to Theorem A.
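A small MATLAB check of Theorems A and B (a sketch with a made-up matrix, not from the slides): the maximum absolute column sum matches norm(A,1) and the maximum absolute row sum matches norm(A,inf).

% Sketch: verify the column-sum and row-sum formulas numerically.
A = [1 -2; 3 4];               % made-up example matrix
max(sum(abs(A), 1))            % max absolute column sum = 6
norm(A, 1)                     % also 6
max(sum(abs(A), 2))            % max absolute row sum = 7
norm(A, inf)                   % also 7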
18
Matrix Norm Properties
||A|| > 0 if A ≠ O
||A|| = 0 iff A = O
||cA|| = |c| · ||A|| for any scalar c
||A + B|| ≤ ||A|| + ||B||
||AB|| ≤ ||A|| · ||B||
||Ax|| ≤ ||A|| · ||x||
19
Eigenvalues-Eigenvectors
The eigenvectors of a matrix A are the nonzero vectors x that satisfy Ax = λx, or equivalently (A – λI)x = 0. So, λ is an eigenvalue iff det(A – λI) = 0.
20
Spectral Radius
The spectral radius of a matrix A is defined as ρ(A) = max |λ|, where λ ranges over the eigenvalues of A. In our previous example the eigenvalue of largest magnitude was 1, so the spectral radius is 1.
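Since the slide's worked example is not reproduced in this transcript, here is a hedged MATLAB sketch with a made-up matrix whose spectral radius happens to be 1, computed as the largest eigenvalue magnitude returned by eig:

% Sketch: spectral radius = largest |eigenvalue| (made-up matrix).
A = [0.5 0.25; 0 1];
lambda = eig(A)            % eigenvalues: 0.5 and 1
rho = max(abs(lambda))     % spectral radius: 1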
21
Convergence
Theorem 1: If ρ(A) < 1, then A^n x → 0 as n → ∞ for every x; that is, the powers A^n go to the zero matrix.
Proof: We can find a basis for R^n of unit eigenvectors (result from linear algebra), say {e_1, e_2, …, e_n}. For any unit vector x, we have x = a_1 e_1 + a_2 e_2 + … + a_n e_n. Then,
A^n x = a_1 A^n e_1 + a_2 A^n e_2 + … + a_n A^n e_n = a_1 λ_1^n e_1 + a_2 λ_2^n e_2 + … + a_n λ_n^n e_n
Thus, ||A^n x|| ≤ |a_1| |λ_1|^n + … + |a_n| |λ_n|^n. Since ρ(A) < 1, each |λ_i|^n → 0, so the result must hold. □
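Theorem 1 is easy to see numerically. A minimal MATLAB sketch (made-up matrix with ρ(A) = 0.5 < 1, not from the slides): the powers A^n shrink toward the zero matrix.

% Sketch: powers of a matrix with spectral radius < 1 go to zero.
A = [0.5 0.2; 0 0.3];      % eigenvalues 0.5 and 0.3, so rho(A) = 0.5
norm(A^5, inf)             % already small
norm(A^50, inf)            % essentially zero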
22
Convergent Matrix Series
Theorem 2: If ρ(B) < 1, then (I-B)^(-1) exists and (I-B)^(-1) = I + B + B^2 + · · ·
Proof: Since Bx = λx exactly when (I-B)x = (1-λ)x, λ is an eigenvalue of B iff 1-λ is an eigenvalue of I-B. Now, we know that |λ| < 1, so 0 cannot be an eigenvalue of I-B. Thus, I-B is invertible (why?). Let S_p = I + B + B^2 + · · · + B^p. Then,
(I-B) S_p = (I + B + B^2 + · · · + B^p) – (B + B^2 + · · · + B^(p+1)) = I – B^(p+1)
Since ρ(B) < 1, by Theorem 1 the term B^(p+1) goes to the zero matrix as p goes to infinity, so (I-B) S_p → I and S_p → (I-B)^(-1). □
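A short MATLAB sketch of Theorem 2 (made-up B with ρ(B) = 0.5 < 1, not from the slides): the partial sums S_p = I + B + · · · + B^p approach (I - B)^(-1).

% Sketch: partial sums of the matrix series converge to inv(I - B).
B = [0.4 0.1; 0.2 0.3];    % eigenvalues 0.5 and 0.2, so rho(B) < 1
S = eye(2);  P = eye(2);
for p = 1:60
    P = P * B;             % B^p
    S = S + P;             % S_p = I + B + ... + B^p
end
S
inv(eye(2) - B)            % agrees with S to machine precision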
23
Convergence of Iterative Solution to Ax = b
Recall: Our general iterative formula to find x was
Q x^(k+1) = ωb + (Q - ωA) x^(k)
where Q and ω are adjustable parameters. We can re-write this as
x^(k+1) = Q^(-1)(Q - ωA) x^(k) + Q^(-1) ωb
Let B = Q^(-1)(Q - ωA) and c = Q^(-1) ωb. Then, our iteration formula has the general form
x^(k+1) = B x^(k) + c
24
Convergence of Iterative Solution to Ax = b
Theorem 3: For any x^(0) in R^n, the iteration formula given by x^(k+1) = B x^(k) + c will converge to the unique solution of x = Bx + c (i.e. the fixed point) iff ρ(B) < 1.
Proof: Unrolling the iteration gives x^(k+1) = B^(k+1) x^(0) + (I + B + · · · + B^k) c. If ρ(B) < 1, the term B^(k+1) x^(0) will vanish, and by Theorem 2 the remaining factor converges to (I-B)^(-1). Thus, {x^(k+1)} converges to z = (I-B)^(-1) c, or z - Bz = c, or z = Bz + c. The converse proof can be found in Burden and Faires, Numerical Analysis. □
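A hedged MATLAB sketch of Theorem 3 (the same made-up B as above, with an arbitrary c and starting vector): the iteration x^(k+1) = B x^(k) + c settles down to the fixed point z = (I - B)^(-1) c.

% Sketch: the iteration converges to the fixed point when rho(B) < 1.
B = [0.4 0.1; 0.2 0.3];  c = [1; 2];
x = [100; -100];               % arbitrary starting vector x^(0)
for k = 1:100
    x = B*x + c;               % x^(k+1) = B x^(k) + c
end
x
(eye(2) - B) \ c               % the fixed point z, for comparison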
25
Diagonally Dominant Matrices
Def: A matrix A is called diagonally dominant if, in every row, the magnitude of the diagonal element is larger than the sum of the absolute values of the other elements in that row:
|a_ii| > Σ_{j ≠ i} |a_ij| for all i.
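Since the slide's example matrix is not reproduced here, the following MATLAB sketch uses a made-up matrix and checks strict diagonal dominance row by row:

% Sketch: check that each |a_ii| exceeds the sum of the other row entries.
A = [4 1 -1; 2 6 1; -1 0 3];            % made-up diagonally dominant matrix
d = abs(diag(A));                        % |a_ii| for each row
offdiag = sum(abs(A), 2) - d;            % sum of |a_ij|, j ~= i, per row
isDominant = all(d > offdiag)            % 1 (true) for this matrix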
26
Jacobi Method
Recall the Jacobi method:
x^(k+1) = D^(-1)(b + (D - A) x^(k)) = D^(-1)(D - A) x^(k) + D^(-1) b
Theorem 4: If A is diagonally dominant, then the Jacobi method converges to the solution of Ax = b.
Proof: Let B = D^(-1)(D - A) and c = D^(-1) b. Then, we have x^(k+1) = B x^(k) + c. The entries of B are b_ij = -a_ij / a_ii for j ≠ i, with b_ii = 0, so the L_∞ norm of B is equal to
||B||_∞ = max over i of Σ_{j ≠ i} |a_ij| / |a_ii|
27
Jacobi Method
Proof (continued): Then, Σ_{j ≠ i} |a_ij| / |a_ii| < 1 exactly when |a_ii| > Σ_{j ≠ i} |a_ij|. If A is diagonally dominant, the terms we are taking a max over are therefore all less than 1, so the L_∞ norm of B is < 1. We will now show that this implies that the spectral radius is < 1.
28
Jacobi Method
Lemma: ρ(A) ≤ ||A|| for any matrix norm.
Proof: Let λ be an eigenvalue with unit eigenvector x. Then |λ| = ||λx|| = ||Ax|| ≤ ||A|| ||x|| = ||A||. □
Proof of Theorem 4 (continued): Since we have shown that ||B||_∞ < 1, then, by the Lemma, we have that ρ(B) < 1. By Theorem 3, the iteration method converges. □
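A minimal MATLAB sketch of the Jacobi iteration in the form used above, x^(k+1) = D^(-1)(D - A) x^(k) + D^(-1) b (the matrix and right-hand side are made up, reusing the diagonally dominant example from the earlier sketch):

% Sketch: Jacobi iteration for a diagonally dominant system.
A = [4 1 -1; 2 6 1; -1 0 3];  b = [3; 9; 2];
D = diag(diag(A));                 % the diagonal part of A
x = zeros(3,1);                    % starting vector x^(0)
for k = 1:50
    x = D \ ((D - A)*x + b);       % one Jacobi sweep
end
x
A \ b                              % direct solve, for comparison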
29
Gauss-Seidel Method
Through similar means we can show (without proof here):
Theorem 5: If A is diagonally dominant, then the Gauss-Seidel method converges to the solution of Ax = b.
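For contrast, here is a hedged MATLAB sketch of Gauss-Seidel on the same made-up system: it uses the splitting A = (D + L) + U, so each newly computed component is used immediately within a sweep.

% Sketch: Gauss-Seidel iteration for the same diagonally dominant system.
A = [4 1 -1; 2 6 1; -1 0 3];  b = [3; 9; 2];
DL = tril(A);                      % D + L: diagonal plus strictly lower part
U  = A - DL;                       % strictly upper part
x = zeros(3,1);
for k = 1:50
    x = DL \ (b - U*x);            % one Gauss-Seidel sweep
end
x
A \ b                              % direct solve, for comparison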