1 Systems of Linear Equations: Iterative Methods

2 B. Iterative Methods
1. Jacobi and Gauss-Seidel methods
2. Relaxation method for iterative methods

3 Iterative Methods
Iterative methods are more appropriate when:
– the number of equations involved is large (typically on the order of 100 or more), and/or
– the matrix is sparse (lower memory requirements)

4 Jacobi Iteration
Step 1: Construct the updating formula by solving the i-th equation for x i:
x i (k+1) = ( b i − Σ j≠i a ij x j (k) ) / a ii
Step 2: Start with initial guesses x 1 (0), x 2 (0), x 3 (0)
Step 3: Iteratively compute x 1 (k+1), x 2 (k+1), x 3 (k+1) from the updating formula for k = 0, 1, …, until the x's converge
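The three steps above can be sketched in Python (an illustrative sketch, not from the slides; the function name `jacobi`, the 3×3 example system, and the tolerance are made up for demonstration):

```python
# Jacobi iteration for Ax = b: every component of the new iterate is
# computed from the OLD iterate only, so the update order does not matter.
def jacobi(A, b, x0, es=1e-10, imax=200):
    n = len(b)
    x = list(x0)                      # Step 2: initial guesses
    for _ in range(imax):             # Step 3: iterate until convergence
        # Step 1: updating formula x_i = (b_i - sum_{j != i} a_ij x_j) / a_ii
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                 / A[i][i]
                 for i in range(n)]
        if all(abs(x_new[i] - x[i]) <= es for i in range(n)):
            return x_new
        x = x_new
    return x

# Diagonally dominant 3x3 example; exact solution is x = [1, 1, 1].
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
x = jacobi(A, b, [0.0, 0.0, 0.0])
```

Because the new iterate depends only on the old one, the n component updates of a Jacobi sweep are independent and parallelize trivially.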

5 Gauss-Seidel Iteration
An improved version of Jacobi iteration.
Observation: if the x's converge, then the newly computed x 1 (k+1) should be a better approximation of x 1 than x 1 (k). Thus the updating formula of Gauss-Seidel iteration uses each newly computed value immediately:
x i (k+1) = ( b i − Σ j<i a ij x j (k+1) − Σ j>i a ij x j (k) ) / a ii
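A matching Python sketch (illustrative; the example system and names are assumptions, not from the slides). The only change from Jacobi is that each component is overwritten in place, so later updates in the same sweep see the new values:

```python
# Gauss-Seidel iteration: x[i] is overwritten in place, so components
# x[0..i-1] already hold their NEW values when x[i] is computed.
def gauss_seidel(A, b, x0, es=1e-10, imax=200):
    n = len(b)
    x = list(x0)
    for _ in range(imax):
        biggest = 0.0                 # largest change in this sweep
        for i in range(n):
            old = x[i]
            x[i] = (b[i] - sum(A[i][j] * x[j]
                               for j in range(n) if j != i)) / A[i][i]
            biggest = max(biggest, abs(x[i] - old))
        if biggest <= es:
            break
    return x

# Diagonally dominant 3x3 system; exact solution is x = [1, 1, 1].
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
x = gauss_seidel(A, b, [0.0, 0.0, 0.0])
```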

6 Stopping Criteria
Stop when the estimated percentage relative error of every x is less than the acceptable error ε s:
|ε a | = | (x i new − x i old) / x i new | × 100% < ε s

7 Example: Gauss-Seidel Method
Solve the given 3×3 system using the Gauss-Seidel method by constructing an updating equation for each unknown from the corresponding row.

8 Example: Gauss-Seidel Method (continued)
What would happen if we swap the rows?
(Iteration table for x 1, x 2, x 3 omitted.) True percentage relative errors: |ε t | of x 1 = 0.31%, |ε t | of x 2 = 0.015%.

9 Convergence
A simple way to assess convergence is to inspect the diagonal elements: all of the diagonal elements must be non-zero. Convergence is guaranteed if the system is diagonally dominant. What if the matrix is not diagonally dominant?
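The diagonal-dominance test can be written as a short Python check (a sketch; the helper name is made up). A matrix is strictly diagonally dominant when |a ii| exceeds the sum of the other absolute values in row i, for every row:

```python
def is_strictly_diagonally_dominant(A):
    # |a[i][i]| must exceed the sum of |a[i][j]| over j != i in every row.
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

ok = is_strictly_diagonally_dominant([[4.0, 1.0, 1.0],
                                      [1.0, 5.0, 2.0],
                                      [1.0, 2.0, 6.0]])   # True
bad = is_strictly_diagonally_dominant([[1.0, 2.0],
                                       [3.0, 4.0]])        # False: |1| < |2|
```

If the check fails, reordering the equations (as in the exercise below on the slides) can sometimes restore dominance.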

10 Convergence
A good initial guess can greatly improve the chance of convergence. The convergence rate depends on the properties of the matrix: the higher the condition number, the slower the convergence.

11 Exercise
How would you solve the following system of equations using the Gauss-Seidel method with a guarantee of convergence?

12 Solution
To guarantee convergence, the matrix on the L.H.S. of the equations must be diagonally dominant. For the given system of equations, we can reorder the equations so that the system becomes diagonally dominant, and then construct the updating formula from each row as before.

13 Successive Over-Relaxation (SOR) **
Successive Over-Relaxation is a modification of the Gauss-Seidel method to enhance convergence:
x i new = λ x i + (1 − λ) x i old
where
λ is a weighting factor between 0 and 2,
x i is the value computed from the original Gauss-Seidel updating equation (i.e., before applying SOR),
x i old is the value of x i from the previous iteration,
x i new is the new value of x i, used for updating.

14 Relaxation **
λ = 1: x i new = x i (standard Gauss-Seidel).
Underrelaxation (0 ≤ λ < 1): used to make a non-convergent system converge, or to speed up convergence by damping oscillations.
Overrelaxation (1 < λ ≤ 2): used to accelerate convergence when x i is moving toward the solution at a slow rate; more weight is given to the current value of x.
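The relaxation step amounts to one extra line in a Gauss-Seidel sweep: compute the Gauss-Seidel value, then blend it with the previous value. A Python sketch (illustrative; `lam`, the example system, and the tolerance are assumptions):

```python
# SOR: compute the plain Gauss-Seidel value gs, then relax it with the
# weighting factor lam (0 < lam < 2); lam = 1 is plain Gauss-Seidel.
def sor(A, b, x0, lam=1.2, es=1e-10, imax=500):
    n = len(b)
    x = list(x0)
    for _ in range(imax):
        biggest = 0.0
        for i in range(n):
            old = x[i]
            gs = (b[i] - sum(A[i][j] * x[j]
                             for j in range(n) if j != i)) / A[i][i]
            x[i] = lam * gs + (1.0 - lam) * old   # the relaxation step
            biggest = max(biggest, abs(x[i] - old))
        if biggest <= es:
            break
    return x

# Diagonally dominant, symmetric system; exact solution is x = [1, 1, 1].
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
x = sor(A, b, [0.0, 0.0, 0.0], lam=1.2)
```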

15 How to determine λ? **
Problem-specific; usually determined empirically.
Not useful when the system is solved only once.
If the system is solved many times, a carefully selected value of λ can greatly improve the rate of convergence.

16 Pseudocode for Gauss-Seidel Iteration

// Assume arrays start with index 1 instead of 0.
// a: Matrix A
// b: Vector b
// n: Dimension of the system of equations
// x: Vector x; contains initial values and
//    will be used to store the solution
// imax: Maximum number of iterations allowed
// es: Acceptable percentage relative error
// lambda: Used to "relax" x's
Gauss_Seidel(a, b, n, x, imax, es, lambda) {
  // Normalize the elements in every row by dividing
  // them by the diagonal element. Doing so saves
  // some division operations later.
  for i = 1 to n {
    dummy = a[i,i]
    for j = 1 to n
      a[i,j] = a[i,j] / dummy
    b[i] = b[i] / dummy
  }

17 Pseudocode for Gauss-Seidel Iteration

  // Assume the initial guess (the values in x) is not
  // reliable, so carry out one iteration of Gauss-Seidel
  // without relaxation to get a better approximation
  // of the x's.
  for i = 1 to n {
    sum = b[i]
    for j = 1 to n {
      if (i != j)
        sum = sum - a[i,j] * x[j]
    }
    x[i] = sum
  }
  iter = 1

18

  do {
    sentinel = 1   // Assume the iteration has converged;
                   // set to 0 if the error is still big.
    for i = 1 to n {
      old = x[i]   // Save the previous x
      // Compute the new x and apply relaxation
      sum = b[i]
      for j = 1 to n
        if (i != j)
          sum = sum - a[i,j] * x[j]
      x[i] = lambda * sum + (1 - lambda) * old
      // If "necessary", check whether the error is still too big
      if (sentinel == 1 AND x[i] != 0) {
        ea = abs((x[i] - old) / x[i]) * 100
        if (ea > es)
          sentinel = 0
      }
    }
    iter = iter + 1   // count complete sweeps, not single updates
  } while (sentinel == 0 AND iter < imax)
}

19 Summary
Iterative vs. direct methods (advantages / disadvantages)
Jacobi and Gauss-Seidel iteration methods
Iterative methods:
– efficient, especially when an approximate solution is known
– may not converge
– stopping criteria?
– convergence criteria?