
CSE 551 Computational Methods, 2018/2019 Fall, Chapter 7-B: Iterative Solutions of Linear Systems

Outline

Vector and Matrix Norms
Condition Number and Ill-Conditioning
Basic Iterative Methods
Pseudocode
Convergence Theorems
Matrix Formulation
Another View of Overrelaxation
Conjugate Gradient Method

References: W. Cheney and D. Kincaid, Numerical Mathematics and Computing, 6th ed., Chapter 8.

Iterative Solutions of Linear Systems

Iterative methods offer a completely different strategy for solving a nonsingular linear system. They are used in solving partial differential equations numerically, where systems having hundreds of thousands of equations arise routinely.

Vector and Matrix Norms

Norms are useful in the discussion of errors and in the stopping criteria for iterative methods. They can be defined on any vector space, such as Rn or Cn. A vector norm ||x|| measures the length or magnitude of a vector x ∈ Rn. It is any mapping from Rn to R with the following properties, for vectors x, y ∈ Rn and scalars α ∈ R:

||x|| > 0 if x ≠ 0 (positivity)
||αx|| = |α| ||x|| (homogeneity)
||x + y|| ≤ ||x|| + ||y|| (triangle inequality)

Examples of vector norms for a vector x = (x1, x2, . . . , xn)T ∈ Rn:

||x||1 = Σi |xi| (the ℓ1 norm)
||x||2 = ( Σi xi² )^(1/2) (the Euclidean or ℓ2 norm)
||x||∞ = max 1≤i≤n |xi| (the maximum or ℓ∞ norm)

For n × n matrices, matrix norms are subject to the same requirements: for matrices A, B and scalars α,

||A|| > 0 if A ≠ 0
||αA|| = |α| ||A||
||A + B|| ≤ ||A|| + ||B||

Of particular interest are matrix norms that are related to a vector norm. For a vector norm || · ||, the subordinate matrix norm of an n × n matrix A is defined by

||A|| = sup { ||Ax|| : x ∈ Rn, ||x|| = 1 }

A subordinate matrix norm has the additional properties:

||I|| = 1
||Ax|| ≤ ||A|| ||x||
||AB|| ≤ ||A|| ||B||

The notation || · ||p thus has two meanings, one for vectors and one for matrices; the context will determine which one is intended. Examples of subordinate matrix norms for an n × n matrix A:

||A||1 = max j Σi |aij| (maximum column sum)
||A||∞ = max i Σj |aij| (maximum row sum)
||A||2 = σmax (spectral norm)

Here the singular values σi of A are the square roots of the eigenvalues of AT A, and σmax is the largest of them. The largest eigenvalue of A in absolute value is called the spectral radius of A.
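As an aside (not in the slides), these vector and matrix norms can be evaluated with NumPy; a minimal sketch:

    import numpy as np

    x = np.array([3.0, -4.0, 1.0])
    print(np.linalg.norm(x, 1))       # ||x||_1 = sum of |x_i| = 8
    print(np.linalg.norm(x, 2))       # ||x||_2 = sqrt(26)
    print(np.linalg.norm(x, np.inf))  # ||x||_inf = max |x_i| = 4

    A = np.array([[2.0, -1.0],
                  [1.0,  3.0]])
    print(np.linalg.norm(A, 1))       # maximum column sum = 4
    print(np.linalg.norm(A, np.inf))  # maximum row sum = 4
    print(np.linalg.norm(A, 2))       # largest singular value (spectral norm)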

Condition Number and Ill-Conditioning

An important quantity influencing the numerical solution of a linear system Ax = b is the condition number, defined by

κ(A) = ||A|| ||A−1||

In practice, it is not necessary to compute the inverse of A to obtain an estimate of the condition number.

The condition number κ(A) gauges the transfer of error from the matrix A and the vector b to the solution x. The rule of thumb: if κ(A) = 10k, expect to lose at least k digits of precision in solving the system Ax = b. If the linear system is sensitive to perturbations in the elements of A or in the components of b, this is reflected in A having a large condition number. In such a case, the matrix A is said to be ill-conditioned. Briefly, the larger the condition number, the more ill-conditioned the system.

Suppose we wish to solve an invertible linear system of equations Ax = b for a given coefficient matrix A and right-hand side b, but there may have been perturbations of the data owing to uncertainty in the measurements and roundoff errors in the calculations. Suppose that the right-hand side is perturbed by an amount δb and that the corresponding solution is perturbed by an amount δx.

From the original linear system Ax = b and taking norms, we have ||b|| ≤ ||A|| ||x||, so that ||x|| ≥ ||b|| / ||A||. From the perturbed linear system A(x + δx) = b + δb, we obtain Aδx = δb, so δx = A−1δb and ||δx|| ≤ ||A−1|| ||δb||.

Combining the two inequalities gives

||δx|| / ||x|| ≤ ||A|| ||A−1|| ||δb|| / ||b|| = κ(A) ||δb|| / ||b||

which contains the condition number of the original matrix A: the relative error in the solution is bounded by the condition number times the relative perturbation in b.

A well-known example of an ill-conditioned matrix is the Hilbert matrix, here of order 3, with entries hij = 1/(i + j − 1):

H3 = [ 1    1/2  1/3 ]
     [ 1/2  1/3  1/4 ]
     [ 1/3  1/4  1/5 ]

condition number: κ2(H3) ≈ 524.0568
determinant: det(H3) = 1/2160 ≈ 4.6296 × 10−4

In solving linear systems, the condition number of the coefficient matrix measures the sensitivity of the system to errors in the data.
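These values are easy to reproduce; a minimal NumPy check (an illustration, not part of the slides):

    import numpy as np

    n = 3
    # Hilbert matrix: h_ij = 1/(i + j - 1) with 1-based indices
    H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

    print(np.linalg.cond(H, 2))   # approx 524.0568
    print(np.linalg.det(H))       # approx 4.6296e-04 (= 1/2160)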

When the condition number is large, the computed solution of the system may be dangerously in error! Further checks should be made before accepting the solution as being accurate. Values of the condition number near 1 indicate a well-conditioned matrix, whereas large values indicate an ill-conditioned matrix. Using the determinant to check for singularity is appropriate only for matrices of modest size. With mathematical software, compute the condition number to check for singular or near-singular matrices.

A goal in the study of numerical methods is to acquire an awareness of whether a numerical result can be trusted or whether it may be suspect (and therefore in need of further analysis). The condition number provides some evidence regarding this question. In fact, some solution procedures involve advanced features that depend on an estimated condition number and may switch solution techniques based on it.

For example, this criterion may result in a switch of the solution technique from a variant of Gaussian elimination to a least-squares solution for an ill-conditioned system. Unsuspecting users may not realize that this has happened unless they look at all of the results, including the estimate of the condition number. (Condition numbers can also be associated with other numerical problems, such as locating roots of equations.)

Basic Iterative Methods

An iterative method produces a sequence of approximate solution vectors x(0), x(1), x(2), . . . for the system Ax = b. The method is designed so that the sequence converges to the actual solution and is stopped when sufficient precision has been attained. This is in contrast to the Gaussian elimination algorithm, which has no provision for stopping midway and offering up an approximate solution.

A general iterative algorithm for solving System (1), Ax = b, goes as follows: Select a nonsingular matrix Q, and having chosen an arbitrary starting vector x(0), generate vectors x(1), x(2), . . . recursively from

Q x(k) = (Q − A) x(k−1) + b    (k = 1, 2, . . .)    (2)

Suppose that the sequence x(k) does converge to a vector x*. Taking the limit as k → ∞ in System (2) gives

Q x* = (Q − A) x* + b

This leads to Ax* = b. Thus, if the sequence converges, its limit is a solution to System (1). For example, the Richardson iteration uses Q = I.

Pseudocode
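The Richardson pseudocode itself was not captured in this transcript; the following Python sketch is a reconstruction under the stated choice Q = I, so that each step simply adds the current residual:

    import numpy as np

    def richardson(A, b, x, kmax=100, eps=1e-10):
        """Richardson iteration (Q = I): x(k) = x(k-1) + (b - A x(k-1))."""
        for k in range(kmax):
            r = b - A @ x               # residual b - A x(k-1)
            if np.linalg.norm(r) < eps: # stop once the residual is small
                break
            x = x + r                   # with Q = I, the update is x + r
        return x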

In choosing the nonsingular matrix Q, we are guided by two considerations:

• System (2) should be easy to solve for x(k) when the right-hand side is known.
• Matrix Q should be chosen to ensure that the sequence x(k) converges, no matter what initial vector is used. Ideally, this convergence will be rapid.

It is not necessary to compute the inverse of Q; instead, we solve a linear system with Q as the coefficient matrix. Select Q so that this system is easy to solve: e.g., diagonal, tridiagonal, banded, lower triangular, or upper triangular.

System (1) in detailed form is

Σj aij xj = bi    (1 ≤ i ≤ n)

Solving the ith equation for the ith unknown gives the Jacobi method:

xi(k) = ( bi − Σj≠i aij xj(k−1) ) / aii    (1 ≤ i ≤ n)

Here we assume that all diagonal elements are nonzero; if not, we rearrange the equations so that they are.

In the Jacobi method, the equations are solved in order using only the old values xj(k−1). In the Gauss-Seidel method, new values xj(k) are used immediately in their place as soon as they become available:

xi(k) = ( bi − Σj<i aij xj(k) − Σj>i aij xj(k−1) ) / aii

If x(k−1) need not be saved, we can dispense with the superscripts and overwrite each component in place.

An acceleration of the Gauss-Seidel method is possible with a relaxation factor ω, giving the successive overrelaxation (SOR) method:

xi(k) = ω ( bi − Σj<i aij xj(k) − Σj>i aij xj(k−1) ) / aii + (1 − ω) xi(k−1)

The SOR method with ω = 1 reduces to the Gauss-Seidel method.

Example (Jacobi iteration). For the given 3 × 3 system Ax = b, carry out a number of iterations of the Jacobi iteration, starting with the zero initial vector.

Solution. Rewriting the equations in the Jacobi form and starting with the initial vector x(0) = [0, 0, 0]T, the iterates converge to the actual solution (rounded to four decimal places).

In the Jacobi iteration, Q is taken to be the diagonal of A: Q = diag(A).

The Jacobi iteration matrix and constant vector are B = I − Q−1A and Q−1b, so that x(k) = B x(k−1) + Q−1b. If Q is close to A, then Q−1A is close to I, and I − Q−1A is small.

Example (Gauss-Seidel iteration). Repeat the preceding example using the Gauss-Seidel iteration.

Solution. The idea of the Gauss-Seidel iteration is to accelerate the convergence by incorporating each component as soon as it has been computed. It is more efficient than the Jacobi method to use the updated value x1(k) in the second equation instead of the old value x1(k−1). Similarly, x2(k) could be used in the third equation in place of x2(k−1).

Using the new iterates as soon as they become available gives the Gauss-Seidel method. Starting with the zero initial vector, some of the iterates are:

In this example, the convergence of the Gauss-Seidel method is approximately twice as fast as that of the Jacobi method. In Gauss-Seidel, Q is the lower triangular part of A, including the diagonal. Using the data from the previous example:

In a practical problem, we would not compute Q−1 explicitly. The Gauss-Seidel iteration matrix and constant vector are L = I − Q−1A and Q−1b, giving the Gauss-Seidel method x(k) = L x(k−1) + Q−1b.

Example (SOR iteration). Repeat the preceding example using the SOR iteration with ω = 1.1.

Solution. Starting with the zero initial vector and taking ω = 1.1, some of the iterates are:

In this example, the convergence of the SOR method is faster than that of the Gauss-Seidel method. In SOR, Q is the lower triangular part of A including the diagonal, but with each diagonal element aii replaced by aii/ω, where ω is the relaxation factor.

The SOR iteration matrix and constant vector are Lω = I − Q−1A and Q−1b, so we can write the SOR method as x(k) = Lω x(k−1) + Q−1b.

Pseudocode

In the pseudocode, the vector y contains the old iterate values, and the vector x contains the updated ones. The values of kmax, δ, and ε are set either in a parameter statement or as global variables.
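The slides' Jacobi pseudocode itself did not survive in this transcript; the following Python sketch is a reconstruction of the same procedure Jacobi(A, b, x), with kmax, delta, and eps passed as parameters rather than set globally:

    import numpy as np

    def jacobi(A, b, x, kmax=100, delta=1e-10, eps=1e-5):
        """Jacobi iteration for Ax = b, starting from x."""
        n = len(b)
        for k in range(kmax):
            y = x.copy()                             # y holds the old iterate
            for i in range(n):
                diag = A[i, i]
                if abs(diag) < delta:                # guard against tiny pivots
                    raise ValueError("diagonal element too small")
                s = b[i] - A[i, :i] @ y[:i] - A[i, i+1:] @ y[i+1:]
                x[i] = s / diag
            if np.linalg.norm(x - y, np.inf) < eps:  # stop when change is small
                break
        return x

A typical call is jacobi(A, b, np.zeros(len(b))), matching the zero initial vector used in the examples.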

The pseudocode for the procedure Gauss_Seidel(A, b, x) would be the same as that for the Jacobi pseudocode above, except that the innermost j-loop reads from the current vector x instead of the old vector y, so that updated components for j < i are used immediately: for j ≠ i, sum ← sum − aij xj.
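In the same Python terms (a sketch following the Jacobi sketch above), only the inner update changes, since x already holds new values for j < i and still holds old values for j > i:

    import numpy as np

    def gauss_seidel(A, b, x, kmax=100, eps=1e-5):
        """Gauss-Seidel iteration: updated components are used immediately."""
        n = len(b)
        for k in range(kmax):
            y = x.copy()                             # kept only for the stopping test
            for i in range(n):
                # x[:i] are new values; x[i+1:] are still the old values
                s = b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]
                x[i] = s / A[i, i]
            if np.linalg.norm(x - y, np.inf) < eps:
                break
        return x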

The pseudocode for procedure SOR(A, b, x, ω) would be the same as that for the Gauss-Seidel pseudocode, with the statement following the j-loop replaced by:

xi ← sum/diag
xi ← ωxi + (1 − ω)yi

In the solution of partial differential equations…
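The corresponding SOR sketch in Python (same conventions as above, with the relaxation factor omega passed in) makes the weighted update explicit:

    import numpy as np

    def sor(A, b, x, omega, kmax=100, eps=1e-5):
        """SOR iteration: Gauss-Seidel value blended with the old value via omega."""
        n = len(b)
        for k in range(kmax):
            y = x.copy()
            for i in range(n):
                s = b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]
                gs = s / A[i, i]                         # xi <- sum/diag
                x[i] = omega * gs + (1 - omega) * y[i]   # xi <- w*xi + (1-w)*yi
            if np.linalg.norm(x - y, np.inf) < eps:
                break
        return x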

Convergence Theorems

For the analysis of the method described by System (2), write it in terms of the iteration matrix and constant vector:

x(k) = (I − Q−1A) x(k−1) + Q−1b    (7)

Note that in the pseudocode we do not compute Q−1. To facilitate the analysis, let x be the solution of System (1); since A is nonsingular, x exists and is unique. Let e(k) ≡ x(k) − x denote the current error vector. Subtracting x = (I − Q−1A)x + Q−1b from Equation (7) gives

e(k) = (I − Q−1A) e(k−1)    (8)

We want e(k) to become smaller as k increases. Equation (8) shows that e(k) will be smaller than e(k−1) if I − Q−1A is small, in some sense, that is, if Q−1A is close to I, i.e., Q is close to A. (Norms can be used to make "small" and "close" precise.)

THEOREM 1 SPECTRAL RADIUS THEOREM

In order that the sequence generated by Qx(k) = (Q − A)x(k−1) + b converge, no matter what starting point x(0) is selected, it is necessary and sufficient that all eigenvalues of I − Q−1A lie in the open unit disc, |z| < 1, in the complex plane.

The conclusion of this theorem can also be written as

ρ(I − Q−1A) < 1

where ρ is the spectral radius function: for any n × n matrix G with eigenvalues λi, ρ(G) = max 1≤i≤n |λi|.
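As an illustration (not in the slides), this condition can be checked numerically; a minimal NumPy sketch, where Q is the matrix chosen for the iteration:

    import numpy as np

    def iteration_converges(A, Q):
        """Test rho(I - Q^{-1}A) < 1 for the iteration Q x(k) = (Q - A) x(k-1) + b."""
        G = np.eye(A.shape[0]) - np.linalg.solve(Q, A)  # I - Q^{-1}A, without forming Q^{-1}
        rho = max(abs(np.linalg.eigvals(G)))            # spectral radius
        return rho < 1, rho

    # For the Jacobi method take Q = np.diag(np.diag(A));
    # for Gauss-Seidel take Q = np.tril(A).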

Example. Determine whether the Jacobi, Gauss-Seidel, and SOR methods (with ω = 1.1) of the previous examples converge for all initial iterates.

Solution

For the Jacobi method, we compute the eigenvalues of the relevant iteration matrix B = I − Q−1A. The eigenvalues are λ = 0, ±sqrt(1/3) ≈ ±0.5774. Since all of them are less than 1 in absolute value, by the preceding theorem the Jacobi iteration succeeds for any starting vector in this example.

For the Gauss-Seidel method, the eigenvalues of the iteration matrix L are determined from det(L − λI) = 0. The eigenvalues are λ = 0, 0, 1/3 ≈ 0.3333. Hence, the Gauss-Seidel iteration will also succeed for any initial vector in this example.

For the SOR method with ω = 1.1, the eigenvalues of the iteration matrix Lω are determined similarly. The eigenvalues are λ ≈ 0.1200, 0.0833, −0.1000. Hence, the SOR iteration will also succeed for any initial vector in this example.

A condition that is easier to verify than the inequality ρ(I − Q−1A) < 1 is diagonal dominance: the diagonal element in each row dominates the sum of the magnitudes of the other elements in that row, i.e.,

|aii| > Σj≠i |aij|    (1 ≤ i ≤ n)

We can use the property of diagonal dominance to determine whether the Jacobi and Gauss-Seidel methods converge.

THEOREM 2 JACOBI AND GAUSS-SEIDEL CONVERGENCE THEOREM

If A is diagonally dominant, then the Jacobi and Gauss-Seidel methods converge for any starting vector x(0). Notice that this is a sufficient but not a necessary condition: there are matrices that are not diagonally dominant for which these methods converge.
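As a small illustration (not in the slides), strict diagonal dominance is easy to test; a minimal NumPy sketch:

    import numpy as np

    def is_diagonally_dominant(A):
        """True if |a_ii| > sum of |a_ij| over j != i, for every row i."""
        d = np.abs(np.diag(A))
        off = np.abs(A).sum(axis=1) - d   # row sums without the diagonal
        return bool(np.all(d > off))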

DEFINITION 1 SYMMETRIC POSITIVE DEFINITE Matrix A is symmetric positive definite (SPD) if A = AT and xT Ax > 0 for all nonzero real vectors x. For a matrix A to be SPD, it is necessary and sufficient that A = AT and that all eigenvalues of A are positive.

THEOREM 3 SOR CONVERGENCE THEOREM Suppose that the matrix A has positive diagonal elements and that 0 < ω < 2. The SOR method converges for any starting vector x(0) if and only if A is symmetric and positive definite.

Matrix Formulation

For the formal theory of iterative methods, split the matrix A into the sum of a nonzero diagonal matrix D, a strictly lower triangular part, and a strictly upper triangular part:

A = D − CL − CU

where D = diag(A), CL = (−aij)i>j, and CU = (−aij)i<j.

With this splitting, linear System (3) can be written as

(D − CL − CU) x = b

From Equation (4), the Jacobi method in matrix-vector form is

D x(k) = (CL + CU) x(k−1) + b

which corresponds to Equation (2) with Q = diag(A) = D.

From Equation (5), the Gauss-Seidel method becomes

(D − CL) x(k) = CU x(k−1) + b

which corresponds to Equation (2) with Q = D − CL, the lower triangular part of A including the diagonal.

From Equation (6), the SOR method can be written as

(D − ωCL) x(k) = ( (1 − ω)D + ωCU ) x(k−1) + ωb

which corresponds to Equation (2) with Q = (1/ω)D − CL.

In summary, the iteration matrix I − Q−1A and constant vector Q−1b for the three basic iterative methods (Jacobi, Gauss-Seidel, and SOR) can be written in terms of this splitting. For the Jacobi method, Q = D; for the Gauss-Seidel method, Q = D − CL;

for the SOR method, Q = (1/ω)(D − ωCL).
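A minimal NumPy sketch (an illustration, not from the slides) of this splitting and the resulting Q for each method:

    import numpy as np

    def splitting(A):
        """Return D, CL, CU with A = D - CL - CU (the convention above)."""
        D = np.diag(np.diag(A))
        CL = -np.tril(A, k=-1)    # CL = (-a_ij) for i > j
        CU = -np.triu(A, k=1)     # CU = (-a_ij) for i < j
        return D, CL, CU

    def Q_for(A, method, omega=1.0):
        """Q in Q x(k) = (Q - A) x(k-1) + b for each basic method."""
        D, CL, CU = splitting(A)
        if method == "jacobi":
            return D
        if method == "gauss-seidel":
            return D - CL
        if method == "sor":
            return (1.0 / omega) * (D - omega * CL)
        raise ValueError(method)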

Another View of Overrelaxation

In some cases, the rate of convergence of the basic iterative scheme (2) can be improved by introducing an auxiliary vector z(k) and an acceleration parameter ω, as follows:

Q z(k) = (Q − A) x(k−1) + b
x(k) = ω z(k) + (1 − ω) x(k−1)

The parameter ω gives a weighting in favor of the updated values.

When ω = 1, this procedure reduces to the basic iterative method, and when 1 < ω < 2, the rate of convergence may be improved; this is called overrelaxation. When Q = D, it is called the Jacobi overrelaxation (JOR) method:

x(k) = ω D−1 ( (CL + CU) x(k−1) + b ) + (1 − ω) x(k−1)

Overrelaxation has particular advantages when used with the Gauss-Seidel method in a slightly different way, in which each updated component enters the sweep immediately. This gives the SOR method:

x(k) = ω D−1 ( CL x(k) + CU x(k−1) + b ) + (1 − ω) x(k−1)

Conjugate Gradient Method
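The transcript breaks off at this heading. The conjugate gradient method itself is standard for symmetric positive definite systems, and the following Python sketch shows its usual form (a reconstruction of the standard algorithm, not the slides' own pseudocode):

    import numpy as np

    def conjugate_gradient(A, b, x, kmax=1000, eps=1e-10):
        """Standard conjugate gradient for symmetric positive definite A."""
        r = b - A @ x              # initial residual
        p = r.copy()               # initial search direction
        rs = r @ r
        for k in range(kmax):
            Ap = A @ p
            alpha = rs / (p @ Ap)          # exact line-search step length
            x = x + alpha * p
            r = r - alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < eps:      # stop once the residual is small
                break
            p = r + (rs_new / rs) * p      # new direction, A-conjugate to the old ones
            rs = rs_new
        return x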