MATRIX METHODS: SYSTEMS OF LINEAR EQUATIONS


MATRIX METHODS: SYSTEMS OF LINEAR EQUATIONS. ENGR 351 Numerical Methods for Engineers, Southern Illinois University Carbondale, College of Engineering. Dr. L.R. Chevalier and Dr. B.A. DeVantier

Copyright© 2000 by L.R. Chevalier and B.A. DeVantier Permission is granted to students at Southern Illinois University at Carbondale to make one copy of this material for use in the class ENGR 351, Numerical Methods for Engineers. No other permission is granted. All other rights are reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the copyright owner.

System of Linear Equations. Our last lectures focused on finding a value of x that satisfies a single equation, f(x) = 0. Now we deal with determining the values x1, x2, ..., xn that simultaneously satisfy a set of equations.

System of Linear Equations
Simultaneous equations:
f1(x1, x2, ..., xn) = 0
f2(x1, x2, ..., xn) = 0
...
fn(x1, x2, ..., xn) = 0
Our methods will be for linear equations:
a11x1 + a12x2 + ... + a1nxn = c1
a21x1 + a22x2 + ... + a2nxn = c2
...
an1x1 + an2x2 + ... + annxn = cn

Mathematical Background: Matrix Notation
A horizontal set of elements is called a row; a vertical set is called a column.
The first subscript refers to the row number; the second subscript refers to the column number.

Note the subscripts: this matrix has m rows and n columns. It has the dimensions m by n (m x n).

Note the consistent scheme, with subscripts denoting row, column: element a23, for example, sits in row 2, column 3.

Row vector: m=1 Column vector: n=1 Square matrix: m = n

Special square matrices. The main diagonal consists of the elements a11, a22, a33, ...:
Symmetric matrix
Diagonal matrix
Identity matrix
Upper triangular matrix
Lower triangular matrix
Banded matrix

Symmetric Matrix aij = aji for all i’s and j’s Does a23 = a32 ? Yes. Check the other elements on your own.

Diagonal Matrix A square matrix where all elements off the main diagonal are zero

Identity Matrix: a diagonal matrix where all elements on the main diagonal are equal to 1. The symbol [I] is used to denote the identity matrix.

Upper Triangular Matrix: elements below the main diagonal are zero.

Lower Triangular Matrix All elements above the main diagonal are zero

Banded Matrix All elements are zero with the exception of a band centered on the main diagonal

Matrix Operating Rules
Addition/subtraction: add/subtract corresponding terms, aij + bij = cij
Addition/subtraction is commutative: [A] + [B] = [B] + [A]
Addition/subtraction is associative: [A] + ([B] + [C]) = ([A] + [B]) + [C]

Matrix Operating Rules Multiplication of a matrix [A] by a scalar g is obtained by multiplying every element of [A] by g

Matrix Operating Rules. The product of two matrices is represented as [C] = [A][B], where n = the column dimension of [A] = the row dimension of [B], and each element is cij = ai1 b1j + ai2 b2j + ... + ain bnj.

A simple way to check whether matrix multiplication is possible:
[A] m x n  [B] n x k  =  [C] m x k
The interior dimensions (n) must be equal; the exterior dimensions (m x k) give the dimensions of the resulting matrix.
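To make the row-into-column rule concrete, here is a minimal free-form Fortran sketch (my own illustration, not from the original slides; the array values are arbitrary) that forms [C] = [A][B] for a 2 x 3 times 3 x 2 product:

    program matmul_demo
      implicit none
      integer, parameter :: m = 2, n = 3, k = 2
      real :: a(m,n), b(n,k), c(m,k)
      integer :: i, j, p

      ! arbitrary test data; reshape fills column by column
      a = reshape([1., 4., 2., 5., 3., 6.], [m, n])
      b = reshape([7., 9., 11., 8., 10., 12.], [n, k])

      ! c(i,j) = sum over p of a(i,p)*b(p,j); the interior dimension n must match
      do i = 1, m
        do j = 1, k
          c(i,j) = 0.0
          do p = 1, n
            c(i,j) = c(i,j) + a(i,p)*b(p,j)
          end do
        end do
      end do

      do i = 1, m
        print '(2f8.1)', c(i,:)
      end do
    end program matmul_demo

The result is the 2 x 2 matrix with rows (58, 64) and (139, 154), matching the exterior dimensions m x k.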

Matrix multiplication
If the dimensions are suitable, matrix multiplication is associative: ([A][B])[C] = [A]([B][C])
If the dimensions are suitable, matrix multiplication is distributive: ([A] + [B])[C] = [A][C] + [B][C]
Multiplication is generally not commutative: [A][B] is not equal to [B][A]

Inverse of [A]: the matrix [A]-1 such that [A][A]-1 = [A]-1[A] = [I]. Only a square, nonsingular matrix has an inverse.

Inverse of [A] (continued). Transpose of [A]: [A]T is formed by interchanging the rows and columns of [A], so that the ij element of [A]T is aji.

Determinants. Denoted as det A or |A|. For a 2 x 2 matrix: det A = a11 a22 - a12 a21.

Determinants cont.
There are different schemes used to compute the determinant. Consider cofactor expansion, which uses minors and cofactors of the matrix.
Minor: the minor of an entry aij is the determinant of the submatrix obtained by deleting the ith row and the jth column.
Cofactor: the cofactor of an entry aij of an n x n matrix A is the product of (-1)^(i+j) and the minor of aij.

Minor: the minor of an entry aij is the determinant of the submatrix obtained by deleting the ith row and the jth column. Example: the minor of a32 for a 3x3 matrix is found by deleting row 3 (the ith row) and column 2 (the jth column):

M32 = | a11  a13 |
      | a21  a23 |  = a11 a23 - a13 a21

Cofactor: Aij, the cofactor of an entry aij of an n x n matrix A, is the product of (-1)^(i+j) and the minor of aij:

Aij = (-1)^(i+j) Mij

Example: calculate A31 for a 3x3 matrix. First calculate the minor of a31, M31 = a12 a23 - a13 a22; then A31 = (-1)^(3+1) M31 = +M31.

Minors and cofactors are used to calculate the determinant of a matrix. For an n x n matrix expanded around the ith row (for any one value of i):

det A = ai1 Ai1 + ai2 Ai2 + ... + ain Ain

Expanding around the jth column (for any one value of j):

det A = a1j A1j + a2j A2j + ... + anj Anj

EXAMPLE: Calculate the determinant of the following 3x3 matrix. First, calculate it using the 1st row (the way you have probably done it all along). Then try it using the 2nd row.
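The slide's matrix did not survive the transcript, so here is a worked cofactor expansion (in LaTeX) for an illustrative matrix of my own choosing; expanding along the 1st row and along the 2nd row gives the same answer:

    A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{pmatrix}

    \det A = 1\begin{vmatrix}5&6\\8&10\end{vmatrix} - 2\begin{vmatrix}4&6\\7&10\end{vmatrix} + 3\begin{vmatrix}4&5\\7&8\end{vmatrix} = 1(2) - 2(-2) + 3(-3) = -3

    \det A = -4\begin{vmatrix}2&3\\8&10\end{vmatrix} + 5\begin{vmatrix}1&3\\7&10\end{vmatrix} - 6\begin{vmatrix}1&2\\7&8\end{vmatrix} = -4(-4) + 5(-11) - 6(-6) = -3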

Properties of Determinants
det A = det AT
If all entries of any row or column are zero, then det A = 0
If two rows or two columns are identical, then det A = 0

How to represent a system of linear equations as a matrix: [A]{X} = {C}, where {X} and {C} are both column vectors.
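Written out in full (directly from the system on the earlier slide):

    \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}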

Practical application. Consider a problem in structural engineering: find the forces and reactions associated with a statically determinate truss (member angles 30°, 90°, and 60°). A roller transmits vertical forces only; a hinge transmits both vertical and horizontal forces at the surface.

First label the nodes. [Truss diagram: node 1 at the apex, nodes 2 and 3 at the base; interior angles 30°, 90°, 60°.]

Determine where you are evaluating tension/compression. [Free body diagram: member force F1 between nodes 1 and 2, F3 between nodes 1 and 3, F2 between nodes 2 and 3.]

Label forces at the hinge and roller 1000 kg Label forces at the hinge and roller 1 30 90 60 F1 F3 2 H2 3 F2 V2 V3 FREE BODY DIAGRAM

Node 1: [free body diagram showing F1 (at 30°), F3 (at 60°), and external components F1,H and F1,V].

Node 2: [free body diagram showing F1 (at 30°), F2, and the hinge reactions H2 and V2].

Node 3: [free body diagram showing F2, F3 (at 60°), and the roller reaction V3].

SIX EQUATIONS SIX UNKNOWNS

Do some bookkeeping. Collecting the six node equations gives the coefficient table (columns: F1, F2, F3, H2, V2, V3; the surviving right-hand-side entry is the -1000 from the 1000 kg load, in the last row):

Eq.      F1      F2      F3     H2   V2   V3  |   RHS
 1    -cos30      0    cos60     0    0    0  |     0
 2    -sin30      0   -sin60     0    0    0  |     0
 3     cos30      1      0       1    0    0  |     0
 4     sin30      0      0       0    1    0  |     0
 5       0       -1   -cos60     0    0    0  |     0
 6       0        0    sin60     0    0    1  | -1000

This is the basis for your matrices and the equation [A]{x}={c}

Matrix Methods
Gauss elimination
Matrix inversion
Gauss-Seidel
LU decomposition

Systems of Linear Algebraic Equations Specific Study Objectives Understand the graphic interpretation of ill-conditioned systems and how it relates to the determinant Be familiar with terminology: forward elimination, back substitution, pivot equations and pivot coefficient

Specific Study Objectives Know the fundamental difference between Gauss elimination and the Gauss Jordan method and which is more efficient Apply matrix inversion to evaluate stimulus-response computations in engineering

Specific Study Objectives Understand why the Gauss-Seidel method is particularly well-suited for large sparse systems of equations Know how to assess diagonal dominance of a system of equations and how it relates to whether the system can be solved with the Gauss-Seidel method

Specific Study Objectives Understand the rationale behind relaxation and how to apply this technique Understand that banded and symmetric systems can be decomposed and solved efficiently

Graphical Method: 2 equations, 2 unknowns. [Graph: the two lines plotted on x1 (horizontal) and x2 (vertical) axes; their intersection (x1, x2) is the solution.]

[Graph: the lines 3x1 + 2x2 = 18 and -x1 + 2x2 = 2, plotted in the (x1, x2) plane, intersect at (4, 3).]
Check: 3(4) + 2(3) = 12 + 6 = 18

Special Cases
No solution
Infinite solutions
Ill-conditioned

a) No solution: the lines have the same slope and never intersect. [graph]
b) Infinite solutions: the lines coincide, e.g. -1/2 x1 + x2 = 1 and -x1 + 2x2 = 2. [graph]
c) Ill-conditioned: the slopes are so close that the point of intersection is difficult to detect visually. [graph]

Let's consider how we know if a system is ill-conditioned. Start by considering systems where the slopes are identical: if the determinant is zero, the slopes are identical. Rearrange the equations so that we have an alternative version in the form of a straight line, i.e. x2 = (slope) x1 + intercept.
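The slide's equations were lost in the transcript; the standard rearrangement for a 2 x 2 system runs as follows:

    a_{11}x_1 + a_{12}x_2 = c_1 \;\Rightarrow\; x_2 = -\frac{a_{11}}{a_{12}}\,x_1 + \frac{c_1}{a_{12}}

    a_{21}x_1 + a_{22}x_2 = c_2 \;\Rightarrow\; x_2 = -\frac{a_{21}}{a_{22}}\,x_1 + \frac{c_2}{a_{22}}

The slopes are identical when a11/a12 = a21/a22, i.e. when a11 a22 - a12 a21 = 0.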

If the slopes are nearly equal (ill-conditioned), then a11 a22 - a12 a21 ≈ 0. Isn't this the determinant? (It is.)

If the determinant is zero, the slopes are equal. This can mean:
- no solution
- an infinite number of solutions
If the determinant is close to zero, the system is ill-conditioned.
So it seems that we should check the determinant of a system before any further calculations are done. Let's try an example.

Example Determine whether the following matrix is ill-conditioned.

Cramer’s Rule Not efficient for solving large numbers of linear equations Useful for explaining some inherent problems associated with solving linear equations.

Cramer’s Rule to solve for xi - place {b} in the ith column

Cramer’s Rule to solve for xi - place {b} in the ith column

Cramer’s Rule to solve for xi - place {b} in the ith column

EXAMPLE Use of Cramer’s Rule
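The slide's numbers were lost, so as a stand-in here is Cramer's rule applied to the system from the graphical-method slide, 3x1 + 2x2 = 18 and -x1 + 2x2 = 2:

    \det A = \begin{vmatrix} 3 & 2 \\ -1 & 2 \end{vmatrix} = 3(2) - 2(-1) = 8

    x_1 = \frac{1}{8}\begin{vmatrix} 18 & 2 \\ 2 & 2 \end{vmatrix} = \frac{36 - 4}{8} = 4, \qquad x_2 = \frac{1}{8}\begin{vmatrix} 3 & 18 \\ -1 & 2 \end{vmatrix} = \frac{6 + 18}{8} = 3

This matches the graphical solution (4, 3).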

Elimination of Unknowns (algebraic approach)

Elimination of Unknowns (algebraic approach). NOTE: same result as Cramer's Rule.

Gauss Elimination One of the earliest methods developed for solving simultaneous equations Important algorithm in use today Involves combining equations in order to eliminate unknowns

Naive ("blind") Gauss Elimination
A technique for larger systems of equations
Same principles of elimination: manipulate the equations to eliminate an unknown from an equation, solve directly, then back-substitute into one of the original equations

Two Phases of Gauss Elimination: Forward Elimination. Note: each prime indicates the number of times the element has changed from its original value.
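The elimination pattern the slide depicted, for a 3 x 3 augmented system:

    \left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & c_1 \\ a_{21} & a_{22} & a_{23} & c_2 \\ a_{31} & a_{32} & a_{33} & c_3 \end{array}\right] \;\longrightarrow\; \left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & c_1 \\ 0 & a'_{22} & a'_{23} & c'_2 \\ 0 & 0 & a''_{33} & c''_3 \end{array}\right]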

Two Phases of Gauss Elimination: Back Substitution
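The back-substitution formulas for the 3 x 3 system above (a standard reconstruction; the slide's images were lost):

    x_3 = \frac{c''_3}{a''_{33}}, \qquad x_2 = \frac{c'_2 - a'_{23}x_3}{a'_{22}}, \qquad x_1 = \frac{c_1 - a_{12}x_2 - a_{13}x_3}{a_{11}}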

EXAMPLE

Evaluation of Pseudocode for Naive Elimination

DOFOR k = 1 to n-1
    DOFOR i = k+1 to n
        factor = a(i,k) / a(k,k)
        DOFOR j = k+1 to n
            a(i,j) = a(i,j) - factor * a(k,j)
        ENDDO
        c(i) = c(i) - factor * c(k)
    ENDDO
ENDDO

Let's consider the translation of this in the next few overheads, using an example 3x3 matrix.

First, let's develop the code to read the elements of the A and C matrices into a FORTRAN program. Use arrays.

Let’s also keep are convention of a ij Make the array A a double array Dimension C as a single array DIMENSION A(50,50) C(50) Before programming any further, practice reading the array from an ASCII file and printing the resulting array on the screen.

TRY TO DO THIS IN CLASS FIRST. Translate the pseudocode above into FORTRAN, starting from the outer-loop skeleton:

      DO 10 K=1,N-1
         ...
   10 CONTINUE

For an n x n matrix:

      DO 10 K=1,N-1
        DO 20 I=K+1,N
          FACTOR = A(I,K) / A(K,K)
          DO 30 J=K+1,N
   30       A(I,J) = A(I,J) - FACTOR*A(K,J)
          C(I) = C(I) - FACTOR*C(K)
   20   CONTINUE
   10 CONTINUE
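For reference, here is a self-contained free-form Fortran sketch (my own, not the deck's code) that carries the same forward elimination through to back substitution; the 3 x 3 test system is an arbitrary illustration with exact solution (3, -2.5, 7):

    program naive_gauss
      implicit none
      integer, parameter :: n = 3
      real :: a(n,n), c(n), x(n), factor, total
      integer :: i, j, k

      ! illustrative test system (reshape fills column by column)
      a = reshape([3., 0.1, 0.3, -0.1, 7., -0.2, -0.2, -0.3, 10.], [n, n])
      c = [7.85, -19.3, 71.4]

      ! forward elimination (naive: no pivoting)
      do k = 1, n-1
        do i = k+1, n
          factor = a(i,k) / a(k,k)
          do j = k+1, n
            a(i,j) = a(i,j) - factor*a(k,j)
          end do
          c(i) = c(i) - factor*c(k)
        end do
      end do

      ! back substitution
      x(n) = c(n) / a(n,n)
      do i = n-1, 1, -1
        total = c(i)
        do j = i+1, n
          total = total - a(i,j)*x(j)
        end do
        x(i) = total / a(i,i)
      end do

      print '(a,3f10.4)', ' x =', x
    end program naive_gauss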

Pitfalls of the Elimination Method
Division by zero
Round-off errors, when the magnitude of the pivot element is small compared to the other elements
Ill-conditioned systems

Division by Zero. When we normalize, i.e. divide by the pivot (e.g. a12/a11), we need to make sure we are not dividing by zero. Problems may also arise if the pivot coefficient is very close to zero.

Techniques for Improving the Solution Use of more significant figures Pivoting Scaling

Use of more significant figures
The simplest remedy for ill-conditioning: extend the precision
The costs: computational overhead and memory overhead

Pivoting
Problems occur when the pivot element is zero: division by zero
Problems also occur when the pivot element is smaller in magnitude than the other elements (i.e. round-off errors)
Prior to normalizing, determine the largest available coefficient

Pivoting
Partial pivoting: rows are switched so that the largest element is the pivot element
Complete pivoting: columns as well as rows are searched for the largest element and switched; rarely used, because switching columns changes the order of the x's, adding unjustified complexity to the computer program
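A minimal free-form Fortran sketch of the partial-pivoting search-and-swap (my own illustration of the idea; array names follow the earlier code): before eliminating with pivot row k, find the row with the largest available coefficient |a(i,k)| and swap it into the pivot position.

    subroutine partial_pivot(a, c, n, k)
      implicit none
      integer, intent(in)    :: n, k
      real,    intent(inout) :: a(n,n), c(n)
      integer :: i, p
      real    :: big, row(n), tmp

      ! locate the largest pivot candidate in column k, rows k..n
      p = k
      big = abs(a(k,k))
      do i = k+1, n
        if (abs(a(i,k)) > big) then
          big = abs(a(i,k))
          p = i
        end if
      end do

      ! swap rows k and p of [A] and the matching entries of {c}
      if (p /= k) then
        row = a(k,:);  a(k,:) = a(p,:);  a(p,:) = row
        tmp = c(k);    c(k)   = c(p);    c(p)   = tmp
      end if
    end subroutine partial_pivot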

Division by Zero - Solution Pivoting has been developed to partially avoid these problems

Scaling Minimizes round-off errors for cases where some of the equations in a system have much larger coefficients than others In engineering practice, this is often due to the widely different units used in the development of the simultaneous equations As long as each equation is consistent, the system will be technically correct and solvable

Scaling: pivot rows to put the greatest value on the diagonal.

EXAMPLE: Use Gauss elimination to solve the following set of linear equations (solution in notes).

SOLUTION: First write the system in matrix form, employing the shorthand presented in class. We will clearly run into problems of division by zero; use partial pivoting.

Pivot using the equation with the largest coefficient an1 in the first column.

Begin developing the upper triangular matrix.

...end of problem

GAUSS-JORDAN
A variation of Gauss elimination. The primary motive for introducing this method is that it provides a simple and convenient method for computing the matrix inverse.
When an unknown is eliminated, it is eliminated from all other equations, rather than just the subsequent ones.

GAUSS-JORDAN
All rows are normalized by dividing them by their pivot elements
The elimination step results in an identity matrix rather than an upper triangular (UT) matrix

Graphical depiction of Gauss-Jordan
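The figure itself was lost; the reduction it depicted drives the augmented matrix all the way to the identity:

    \left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & c_1 \\ a_{21} & a_{22} & a_{23} & c_2 \\ a_{31} & a_{32} & a_{33} & c_3 \end{array}\right] \;\longrightarrow\; \left[\begin{array}{ccc|c} 1 & 0 & 0 & c^{(n)}_1 \\ 0 & 1 & 0 & c^{(n)}_2 \\ 0 & 0 & 1 & c^{(n)}_3 \end{array}\right] \;\Rightarrow\; x_i = c^{(n)}_i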

Matrix Inversion
[A][A]-1 = [A]-1[A] = [I]
One application of the inverse is to solve several systems differing only by {c}:
[A]{x} = {c}
[A]-1[A]{x} = [A]-1{c}
[I]{x} = {x} = [A]-1{c}
One quick method to compute the inverse is to augment [A] with [I] instead of {c}

Graphical Depiction of the Gauss-Jordan Method with Matrix Inversion. Note: the superscript "-1" denotes that the original values have been converted to the matrix inverse, not to 1/aij.
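The lost figure's content, reconstructed: augment [A] with [I] and run Gauss-Jordan; when the left block becomes the identity, the right block is the inverse:

    \left[\,A \mid I\,\right] \;\xrightarrow{\text{Gauss-Jordan}}\; \left[\,I \mid A^{-1}\,\right]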

Stimulus-Response Computations
Conservation laws: mass, force, heat, momentum
We considered the conservation of force in the earlier example of a truss

Stimulus-Response Computations
[A]{x} = {c}, i.e. [interactions]{response} = {stimuli}
Superposition: if a system is subjected to several different stimuli, the responses can be computed individually and the results summed to obtain the total response
Proportionality: multiplying the stimuli by a quantity results in the responses to those stimuli being multiplied by the same quantity
These concepts are inherent in the scaling of terms during the inversion of the matrix

Error Analysis and System Condition
Three checks for ill-conditioning:
1. Scale the matrix of coefficients [A] so that the largest element in each row is 1. If there are elements of [A]-1 that are several orders of magnitude greater than one, it is likely that the system is ill-conditioned.
2. Multiply the inverse by the original coefficient matrix. If the result is not close to the identity matrix, the system is ill-conditioned.
3. Invert the inverted matrix. If it is not close to the original coefficient matrix, the system is ill-conditioned.

To further study the concepts of ill-conditioning, consider the norm and the matrix condition number. A norm provides a measure of the size or length of vectors and matrices. Cond[A] >> 1 suggests that the system is ill-conditioned.
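The defining relation (the slide's formula image was lost; this is the standard definition):

    \operatorname{Cond}[A] = \lVert A \rVert \cdot \lVert A^{-1} \rVert \;\ge\; 1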

LU Decomposition Methods (Chapter 10)
Elimination methods:
Gauss elimination
Gauss-Jordan
LU decomposition

Naive LU Decomposition
[A]{x} = {c}
Suppose this can be rearranged as an upper triangular system with 1's on the diagonal: [U]{x} = {d}
Then [A]{x} - {c} = 0 and [U]{x} - {d} = 0
Assume that a lower triangular matrix exists that has the property
[L]{[U]{x} - {d}} = [A]{x} - {c}

Naive LU Decomposition
[L]{[U]{x} - {d}} = [A]{x} - {c}
Then, from the rules of matrix multiplication:
[L][U] = [A]
[L]{d} = {c}
[L][U] = [A] is referred to as the LU decomposition of [A]. After it is accomplished, solutions can be obtained very efficiently by a two-step substitution procedure.
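Spelled out, the two-step substitution procedure is:
1. Forward substitution: solve [L]{d} = {c} for {d}
2. Back substitution: solve [U]{x} = {d} for {x}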

Consider how Gauss elimination can be formulated as an LU decomposition: [U] is a direct product of the forward-elimination step, if each row is scaled by its diagonal element.

Although not as apparent, the matrix [L] is also produced during the step. This can be readily illustrated for a three-equation system. The first step is to multiply row 1 by the factor f21 = a21/a11; subtracting the result from the second row eliminates a21.

Similarly, row 1 is multiplied by f31 = a31/a11, and the result is subtracted from the third row to eliminate a31. The final step for a 3 x 3 system is to multiply the modified second row by f32 = a'32/a'22 and subtract the result from the third row to eliminate a'32.

The values f21 , f31, f32 are in fact the elements of an [L] matrix CONSIDER HOW THIS RELATES TO THE LU DECOMPOSITION METHOD TO SOLVE FOR {X}
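For the three-equation system this gives (a standard result; the slide's matrix image was lost):

    [L] = \begin{pmatrix} 1 & 0 & 0 \\ f_{21} & 1 & 0 \\ f_{31} & f_{32} & 1 \end{pmatrix}, \qquad [L][U] = [A]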

[Flow diagram: [A]{x} = {c} is decomposed into [L] and [U]; forward substitution on [L]{d} = {c} yields {d}; back substitution on [U]{x} = {d} yields {x}.]

Crout Decomposition
The Gauss elimination method involves two major steps: forward elimination and back substitution
Efforts at improvement have focused on developing improved elimination methods
One such method is Crout decomposition

Crout Decomposition. Represents an efficient algorithm for decomposing [A] into [L] and [U].

Recall the rules of matrix multiplication. The first step is to multiply the rows of [L] by the first column of [U]. Since [U] has 1 on the diagonal and zeros below it, this gives li1 = ai1: the first column of [A] is the first column of [L].

Next we multiply the first row of [L] by the columns of [U] to get l11 u1j = a1j, so u1j = a1j / l11 for j = 2, 3, ..., n.

Once the first row of [U] is established, the operation can be represented concisely.
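The slide's formulas were lost; the standard Crout recurrences (for j = 1, 2, ..., n, computing a column of [L] and then a row of [U], with [U] unit-diagonal) are:

    l_{ij} = a_{ij} - \sum_{k=1}^{j-1} l_{ik}\,u_{kj}, \quad i = j, j+1, \ldots, n

    u_{jk} = \frac{a_{jk} - \sum_{m=1}^{j-1} l_{jm}\,u_{mk}}{l_{jj}}, \quad k = j+1, \ldots, n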

Schematic depicting Crout Decomposition

The Substitution Step
[L]{[U]{x} - {d}} = [A]{x} - {c}
[L][U] = [A]
[L]{d} = {c}
[U]{x} = {d}
Recall our earlier graphical depiction of the LU decomposition method.
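A minimal free-form Fortran sketch of the two substitution sweeps (my own illustration; el holds the lower triangular factor, u the unit-upper-triangular factor, matching the Crout convention above):

    subroutine lu_solve(el, u, c, x, n)
      implicit none
      integer, intent(in)  :: n
      real,    intent(in)  :: el(n,n), u(n,n), c(n)
      real,    intent(out) :: x(n)
      real    :: d(n), s
      integer :: i, j

      ! forward substitution: [L]{d} = {c}
      d(1) = c(1) / el(1,1)
      do i = 2, n
        s = c(i)
        do j = 1, i-1
          s = s - el(i,j)*d(j)
        end do
        d(i) = s / el(i,i)
      end do

      ! back substitution: [U]{x} = {d} (unit diagonal, so no division)
      x(n) = d(n)
      do i = n-1, 1, -1
        s = d(i)
        do j = i+1, n
          s = s - u(i,j)*x(j)
        end do
        x(i) = s
      end do
    end subroutine lu_solve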

[Flow diagram, repeated: [A]{x} = {c} is decomposed into [L] and [U]; forward substitution on [L]{d} = {c} yields {d}; back substitution on [U]{x} = {d} yields {x}.]

Parameters used to quantify the dimensions of a banded system: BW = band width, HBW = half band width (the band is centered on the diagonal, so BW = 2·HBW + 1).

Thomas Algorithm
As with conventional LU decomposition methods, the algorithm consists of three steps: decomposition, forward substitution, and back substitution
We want a scheme that avoids the large, inefficient storage of zeros involved with banded matrices
Consider the following tridiagonal system

Note how this tridiagonal system requires the storage of a large number of zero values.

Note that we have changed our notation from a's to e, f, g (the sub-diagonal, diagonal, and super-diagonal bands). In addition, we have changed our notation from c's to r.

Storage can be accomplished either by storing the three vectors (e, f, g) or by storing them as a compact matrix [B]. Storage is reduced even further if the matrix is banded and symmetric: only the elements on the diagonal and in the upper half need be stored.
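A compact free-form Fortran sketch of the Thomas algorithm in this e, f, g, r notation (a standard formulation supplied here, since the deck's images were lost; e is the sub-diagonal, f the diagonal, g the super-diagonal, r the right-hand side, and e(1) and g(n) are unused):

    subroutine thomas(e, f, g, r, x, n)
      implicit none
      integer, intent(in)    :: n
      real,    intent(inout) :: e(n), f(n), g(n), r(n)
      real,    intent(out)   :: x(n)
      integer :: k

      ! decomposition and forward substitution
      do k = 2, n
        e(k) = e(k) / f(k-1)         ! elimination factor, stored in place
        f(k) = f(k) - e(k)*g(k-1)    ! modified diagonal
        r(k) = r(k) - e(k)*r(k-1)    ! modified right-hand side
      end do

      ! back substitution
      x(n) = r(n) / f(n)
      do k = n-1, 1, -1
        x(k) = (r(k) - g(k)*x(k+1)) / f(k)
      end do
    end subroutine thomas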

Gauss-Seidel Method
An iterative approach: continue until we converge within some pre-specified error tolerance
Round-off is no longer an issue, since you control the level of error that is acceptable
Fundamentally different from Gauss elimination: this is an approximate, iterative method, particularly good for large numbers of equations

Gauss-Seidel Method
If the diagonal elements are all nonzero, the first equation can be solved for x1:

x1 = (c1 - a12x2 - a13x3 - ... - a1nxn) / a11

Solve the second equation for x2, etc. To assure that you understand this, write the equation for x2.

Gauss-Seidel Method
Start the solution process by guessing values of x; a simple way to obtain initial guesses is to assume that they are all zero
Calculate new values of xi, starting with x1 = c1/a11
Progressively substitute through the equations
Repeat until the tolerance is reached
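A minimal free-form Fortran sketch of the iteration just described (my own illustration; the tolerance test on the approximate relative change and the iteration cap are assumptions):

    subroutine gauss_seidel(a, c, x, n, tol, maxit)
      implicit none
      integer, intent(in)    :: n, maxit
      real,    intent(in)    :: a(n,n), c(n), tol
      real,    intent(inout) :: x(n)    ! initial guess in (e.g. zeros), solution out
      real    :: old, s, err
      integer :: i, j, it

      do it = 1, maxit
        err = 0.0
        do i = 1, n
          old = x(i)
          s = c(i)
          do j = 1, n
            if (j /= i) s = s - a(i,j)*x(j)   ! newest available values are used
          end do
          x(i) = s / a(i,i)
          if (x(i) /= 0.0) err = max(err, abs((x(i) - old) / x(i)))
        end do
        if (err < tol) return    ! converged within the pre-specified tolerance
      end do
    end subroutine gauss_seidel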

EXAMPLE Given the following augmented matrix, complete one iteration of the Gauss Seidel method.

Gauss-Seidel Method: convergence criterion
As in previous iterative procedures for finding roots, we compare the present and previous estimates. As with the open (one-point iteration) methods we studied previously:
1. The method can diverge
2. It may converge very slowly

Convergence criteria for two linear equations

Convergence criteria for two linear equations Class question: where do these formulas come from?

Convergence criteria for two linear equations cont. Criteria for convergence were presented earlier in the class material for nonlinear equations. Noting that x = x1 and y = x2, substitute into the previous equations.
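The substitution yields the iteration functions (standard form, reconstructed; the slide's algebra was an image):

    u_1(x_1, x_2) = \frac{c_1}{a_{11}} - \frac{a_{12}}{a_{11}}\,x_2, \qquad u_2(x_1, x_2) = \frac{c_2}{a_{22}} - \frac{a_{21}}{a_{22}}\,x_1

so the one-point-iteration convergence criterion requires

    \left|\frac{a_{12}}{a_{11}}\right| < 1 \quad \text{and} \quad \left|\frac{a_{21}}{a_{22}}\right| < 1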

Convergence criteria for two linear equations cont. This is stating that the absolute values of the slopes must be less than unity to ensure convergence. Extended to n equations:
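This is the diagonal-dominance condition (standard form, reconstructed):

    \lvert a_{ii} \rvert > \sum_{j=1,\; j \ne i}^{n} \lvert a_{ij} \rvert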

Convergence criteria for two linear equations cont. This condition is sufficient but not necessary for convergence. When it is met, the matrix is said to be diagonally dominant.

Review the concepts of divergence and convergence by graphically illustrating Gauss-Seidel for two linear equations. [Graph in the (x1, x2) plane.]

[Graph: successive Gauss-Seidel iterates in the (x1, x2) plane stepping toward the intersection.] Note: we are converging on the solution.

Change the order of the equations, i.e. change the direction of the initial estimates. [Graph: the iterates now step away from the intersection.] This solution is diverging!

Improvement of Convergence Using Relaxation. This is a modification that can speed up slow convergence. After each new value of x is computed, calculate a new value based on a weighted average of the present and previous iterations.
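The weighted average referred to (formula reconstructed; lambda is the relaxation, or weighting, factor):

    x_i^{\text{new}} = \lambda\, x_i^{\text{new}} + (1 - \lambda)\, x_i^{\text{old}}, \qquad 0 < \lambda < 2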

Improvement of Convergence Using Relaxation
If λ = 1: unmodified
If 0 < λ < 1: underrelaxation; nonconvergent systems may converge; can hasten convergence by damping out oscillations
If 1 < λ < 2: overrelaxation; extra weight is placed on the present value, on the assumption that the new value is moving toward the correct solution too slowly

Jacobi Iteration
Iterative, like Gauss-Seidel
Gauss-Seidel immediately uses each new value of xi in the equations that follow it within the same iteration
Jacobi computes the complete set of new xi values from the previous iteration's values before substituting any of them
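In update-formula form (standard notation, supplied here since the slide's figure was lost):

    \text{Gauss-Seidel:}\quad x_i^{(k+1)} = \frac{1}{a_{ii}}\Bigl(c_i - \sum_{j<i} a_{ij}\,x_j^{(k+1)} - \sum_{j>i} a_{ij}\,x_j^{(k)}\Bigr)

    \text{Jacobi:}\quad x_i^{(k+1)} = \frac{1}{a_{ii}}\Bigl(c_i - \sum_{j \ne i} a_{ij}\,x_j^{(k)}\Bigr)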

Graphical depiction of difference between Gauss-Seidel and Jacobi

EXAMPLE: Given the following augmented matrix, complete one iteration of the Gauss-Seidel method and the Jacobi method. We worked the Gauss-Seidel method earlier.