Numerical Computation Lecture 9: Vector Norms and Matrix Condition Numbers United International College.

Review During our last class we covered: – Operation counts for Gaussian elimination and LU factorization – Accuracy of matrix methods – Readings: Pav; Moler, section 2.8

Today We will cover: – Vector and matrix norms – Matrix condition numbers – Readings: Pav, sections 1.3.2 and 1.3.3; Moler, section 2.9

Vector Norms A vector norm is a quantity that measures how large a vector is (the magnitude of the vector). For a number x, we have |x| as a measurement of the magnitude of x. For a vector x, it is not clear what the “best” measurement of size should be. Note: we will use bold-face type (x) to denote a vector.

Vector Norms Example: x = (4, −1) – ||x|| = √(4² + (−1)²) = √17 ≈ 4.123 is the standard Pythagorean length of x. This is one possible measurement of the size of x.

Vector Norms Example: x = (4, −1) – |4| + |−1| = 5 is the “taxicab” length of x. This is another possible measurement of the size of x.

Vector Norms Example: x = (4, −1) – max(|4|, |−1|) = 4 is yet another possible measurement of the size of x.

Vector Norms Definition: A vector norm is a function that takes a vector and returns a non-negative number. We denote the norm of a vector x by ||x||. The norm must satisfy: – Triangle inequality: ||x + y|| ≤ ||x|| + ||y|| – Scalar: ||c x|| = |c| ||x|| – Positive: ||x|| ≥ 0, and ||x|| = 0 only when x is the zero vector.

Vector Norms Our previous examples, for vectors in Rn: – Manhattan: ||x||_1 = |x_1| + |x_2| + … + |x_n| – Euclidean: ||x||_2 = √(x_1² + x_2² + … + x_n²) – Chebyshev: ||x||_∞ = max_i |x_i| All of these satisfy the three properties for a norm.


Vector Norms Definition: The Lp norm generalizes these three norms. For p ≥ 1, it is defined on Rn by: ||x||_p = (|x_1|^p + |x_2|^p + … + |x_n|^p)^(1/p) – p = 1: L1 norm – p = 2: L2 norm – p = ∞: L∞ norm (the limit as p → ∞, which equals max_i |x_i|)
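As a sanity check, the Lp norm above can be sketched in a few lines of Python (the function name lp_norm is illustrative, not from the text):

```python
import math

def lp_norm(x, p):
    """L^p norm of a vector x; p = math.inf gives the max (Chebyshev) norm."""
    if p == math.inf:
        return max(abs(xi) for xi in x)
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

x = [4, -1]
print(lp_norm(x, 1))         # L1 (Manhattan) norm: 5.0
print(lp_norm(x, 2))         # L2 (Euclidean) norm: sqrt(17) ≈ 4.123
print(lp_norm(x, math.inf))  # L∞ (Chebyshev) norm: 4
```

These reproduce the three measurements of x = (4, −1) from the earlier slides.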

Distance A norm induces a distance: the distance between two vectors x and y is d(x, y) = ||x − y||.

Distance Class Practice: – Find the L2 distance between the vectors x = (1, 2, 3) and y = (4, 0, 1). – Find the L∞ distance between the vectors x = (1, 2, 3) and y = (4, 0, 1).
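These distances can be checked numerically; a minimal Python sketch computing ||x − y|| in the L2 and L∞ norms:

```python
import math

x = [1, 2, 3]
y = [4, 0, 1]
diff = [xi - yi for xi, yi in zip(x, y)]  # x - y = (-3, 2, 2)

l2_dist = math.sqrt(sum(d * d for d in diff))  # sqrt(9 + 4 + 4) = sqrt(17)
linf_dist = max(abs(d) for d in diff)          # max(3, 2, 2) = 3

print(l2_dist)    # ≈ 4.123
print(linf_dist)  # 3
```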

Which norm is best? The answer depends on the application. The 1-norm and ∞-norm are good whenever one is analyzing the sensitivity of solutions. The 2-norm is good for comparing distances of vectors. There is no single best vector norm!

Matlab Vector Norms In Matlab, the norm function computes the Lp norms of vectors. Syntax: norm(x, p)
>> x = [ ];
>> n = norm(x, 2)
n =
>> n = norm(x, 1)
n = 8
>> n = norm(x, inf)
n = 4

Matrix Norms Definition: Given a vector norm ||·||, the matrix norm defined (induced) by the vector norm is given by: ||A|| = max ||Ax|| / ||x||, where the max is over all non-zero vectors x. Example: for the L1 vector norm the induced matrix norm is the maximum absolute column sum of A, and for the L∞ vector norm it is the maximum absolute row sum.

Matrix Norms What does a matrix norm represent? It represents the maximum “stretching” that A does to a vector x, that is, the largest possible ratio of ||Ax|| to ||x||.
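The “stretching” interpretation can be checked numerically. The sketch below uses the standard fact that the matrix norm induced by the L∞ vector norm is the maximum absolute row sum, and verifies that no sampled vector is stretched by more than that amount (the matrix A is an arbitrary illustrative choice):

```python
import itertools

A = [[1, 2], [3, 4]]

# Induced infinity-norm of A: the maximum absolute row sum.
norm_inf = max(sum(abs(a) for a in row) for row in A)  # max(3, 7) = 7

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def vnorm_inf(x):
    return max(abs(xi) for xi in x)

# No sampled x is stretched by more than ||A||_inf, and the sign
# pattern x = (1, 1) attains the bound for this particular A.
best = 0.0
for x in itertools.product([-1, -0.5, 0.5, 1], repeat=2):
    best = max(best, vnorm_inf(matvec(A, x)) / vnorm_inf(x))

print(norm_inf)  # 7
print(best)      # 7.0, attained at x = (1, 1)
```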

Matrix Norm Properties – ||A|| > 0 if A ≠ O – ||c A|| = |c| ||A|| for any scalar c – ||A + B|| ≤ ||A|| + ||B|| – ||A B|| ≤ ||A|| ||B|| – ||A x|| ≤ ||A|| ||x||

Matrix Condition Number Multiplication of a vector x by a matrix A results in a new vector Ax that can have a very different norm from x. The range of the possible change can be expressed by two numbers: M = max ||Ax|| / ||x|| = ||A|| and m = min ||Ax|| / ||x||. Here the max and min are taken over all non-zero vectors x.

Matrix Condition Number Definition: The condition number of a nonsingular matrix A is given by: κ(A) = M/m. By convention, if A is singular (m = 0), then κ(A) = ∞. Note: If we let Ax = y, then x = A⁻¹y, and m = min ||y|| / ||A⁻¹y|| = 1 / ||A⁻¹||.

Matrix Condition Number Theorem: The condition number of a nonsingular matrix A can also be given as: κ(A) = ||A|| * ||A⁻¹||. Proof: κ(A) = M/m. Also, M = ||A||, and by the previous slide m = 1 / ||A⁻¹||. QED
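For a 2×2 matrix the theorem is easy to check by hand, since the inverse has the explicit form A⁻¹ = (1/det A) [[d, −b], [−c, a]]. A Python sketch using the matrix norm induced by the L∞ vector norm (maximum absolute row sum; the example matrix is an arbitrary choice):

```python
A = [[1.0, 2.0], [3.0, 4.0]]
a, b = A[0]
c, d = A[1]
det = a * d - b * c  # 1*4 - 2*3 = -2

# Explicit 2x2 inverse: (1/det) * [[d, -b], [-c, a]].
Ainv = [[d / det, -b / det], [-c / det, a / det]]

def norm_inf(M):
    """Induced infinity-norm: maximum absolute row sum."""
    return max(sum(abs(e) for e in row) for row in M)

kappa = norm_inf(A) * norm_inf(Ainv)
print(kappa)  # ||A|| * ||A^{-1}|| = 7 * 3 = 21.0
```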

Properties of the Matrix Condition Number – For any matrix A, κ(A) ≥ 1. – For the identity matrix, κ(I) = 1. – For any permutation matrix P, κ(P) = 1. – For any matrix A and nonzero scalar c, κ(c A) = κ(A). – For any diagonal matrix D = diag(d_i), κ(D) = (max_i |d_i|) / (min_i |d_i|).
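The diagonal-matrix property can be cross-checked against κ(D) = ||D|| ||D⁻¹||, since for a diagonal matrix ||D||_∞ = max_i |d_i| and ||D⁻¹||_∞ = 1 / min_i |d_i|; a small sketch (the diagonal entries are an arbitrary example):

```python
diag = [2.0, -0.5, 10.0]  # entries of D = diag(d_i)

# kappa(D) from the property on the slide: (max |d_i|) / (min |d_i|).
kappa = max(abs(d) for d in diag) / min(abs(d) for d in diag)
print(kappa)  # 10 / 0.5 = 20.0

# Cross-check against kappa(D) = ||D|| * ||D^{-1}|| in the infinity-norm:
norm_D = max(abs(d) for d in diag)
norm_Dinv = max(abs(1.0 / d) for d in diag)
print(norm_D * norm_Dinv)  # also 20.0
```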

What does the condition number tell us? The condition number is a good indicator of how close a matrix is to being singular: the larger the condition number, the closer the matrix is to singularity. It is also very useful in assessing the accuracy of solutions to linear systems. In practice we don’t really calculate the condition number exactly; it is merely estimated, perhaps only to within an order of magnitude.

Condition Number And Accuracy Consider the problem of solving Ax = b. Suppose b has some error, say b + δb. Then, when we solve the equation, we will not get x but instead some value near x, say x + δx: A(x + δx) = b + δb. Since Ax = b, this gives A δx = δb, and so δx = A⁻¹ δb.

Condition Number And Accuracy Class Practice: Show: ||δx|| / ||x|| ≤ κ(A) ||δb|| / ||b||

Condition Number And Accuracy The quantity ||δb||/||b|| is the relative change in the right-hand side, and the quantity ||δx||/||x|| is the relative error caused by this change. This shows that the condition number is a relative error magnification factor. That is, changes in the right-hand side of Ax=b can cause changes κ(A) times as large in the solution.