Linear Systems of Equations
Direct Methods for Solving Linear Systems of Equations
Direct Methods for Solving Linear Systems of Equations. The linear system $E_i:\ a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n = b_i$, $i = 1, \dots, n$, for unknowns $x_1, \dots, x_n$, where the coefficients $a_{ij}$ and the $b_i$ are known. Operations to simplify the linear system ($\lambda$ is a constant): (1) multiply equation $E_i$ by $\lambda \neq 0$: $(\lambda E_i) \to (E_i)$; (2) add $\lambda$ times equation $E_j$ to equation $E_i$: $(E_i + \lambda E_j) \to (E_i)$; (3) interchange equations $E_i$ and $E_j$: $(E_i) \leftrightarrow (E_j)$.
Direct Methods Example: after applying these operations, the system is in triangular form and can be solved by a backward substitution process (see the sketch below).
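A minimal NumPy sketch of the backward substitution step, not from the slides; the function name and the 3x3 example system are illustrative only.

```python
import numpy as np

def back_substitution(U, b):
    """Solve Ux = b for an upper-triangular U with nonzero diagonal entries."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # last equation first
        s = np.dot(U[i, i + 1:], x[i + 1:])   # contribution of already-known x_{i+1..n}
        x[i] = (b[i] - s) / U[i, i]
    return x

# Illustrative triangular system
U = np.array([[2.0, 1.0, -1.0],
              [0.0, 3.0,  2.0],
              [0.0, 0.0,  4.0]])
b = np.array([3.0, 7.0, 8.0])
print(back_substitution(U, b))   # agrees with np.linalg.solve(U, b)
```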
Definitions. An $n \times m$ (n by m) matrix is a rectangular array $A = [a_{ij}]$, $1 \le i \le n$, $1 \le j \le m$, with entry $a_{ij}$ in row $i$ and column $j$. The $1 \times n$ matrix (an n-dimensional row vector): $y = [y_1\ y_2\ \cdots\ y_n]$. The $n \times 1$ matrix (an n-dimensional column vector): $x = [x_1\ x_2\ \cdots\ x_n]^T$.
Definitions. If $A$ is the $n \times n$ matrix of coefficients and $b$ is the column vector of right-hand sides, then $[A, b]$ is the augmented matrix.
Gaussian Elimination with Backward Substitution. The general Gaussian elimination is applied to the linear system $E_i:\ a_{i1}x_1 + \cdots + a_{in}x_n = b_i$, $i = 1, \dots, n$. First form the augmented matrix $\tilde{A} = [A, b]$ with $a_{i,n+1} = b_i$.
Gaussian Elimination with Backward Substitution. The procedure, provided the pivot elements $a_{kk}^{(k)} \neq 0$, yields the resulting matrix of the form
$$\begin{bmatrix} a_{11}^{(1)} & a_{12}^{(1)} & \cdots & a_{1n}^{(1)} & a_{1,n+1}^{(1)} \\ 0 & a_{22}^{(2)} & \cdots & a_{2n}^{(2)} & a_{2,n+1}^{(2)} \\ \vdots & & \ddots & \vdots & \vdots \\ 0 & \cdots & 0 & a_{nn}^{(n)} & a_{n,n+1}^{(n)} \end{bmatrix},$$
which is triangular.
Gaussian Elimination (cont.) Since the new linear system is triangular, backward substitution can be performed, provided $a_{nn}^{(n)} \neq 0$:
$$x_n = \frac{a_{n,n+1}^{(n)}}{a_{nn}^{(n)}}, \qquad x_i = \frac{a_{i,n+1}^{(i)} - \sum_{j=i+1}^{n} a_{ij}^{(i)} x_j}{a_{ii}^{(i)}}, \quad i = n-1, \dots, 1.$$
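A NumPy sketch of the full procedure (forward elimination followed by backward substitution), assuming no row interchanges are needed; the function name and argument handling are illustrative.

```python
import numpy as np

def gaussian_elimination(A, b):
    """Reduce [A | b] to triangular form, then back-substitute.
    No pivoting: assumes every pivot a_kk is nonzero."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    # Forward elimination
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]        # multiplier m_ik
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Backward substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```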
About Gaussian Elimination and Cramer's Rule. Gaussian elimination requires about $2n^3/3$ arithmetic operations. Cramer's rule requires about $(n+1)!$ arithmetic operations. A simple problem on an $11 \times 11$ grid involves $n = 81$ unknowns, which needs about $82!$ operations by Cramer's rule. What time is required to solve this problem by Cramer's rule on a 100-megaflops machine? Roughly $10^{107}$ years!
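A rough back-of-the-envelope check of that figure, assuming Cramer's rule costs about $(n+1)!$ operations and the machine sustains $10^8$ operations per second:
$$\frac{82!}{10^{8}\ \text{ops/s}} \approx \frac{5 \times 10^{122}}{10^{8}}\ \text{s} = 5 \times 10^{114}\ \text{s} \approx \frac{5 \times 10^{114}}{3.2 \times 10^{7}\ \text{s/yr}} \approx 1.5 \times 10^{107}\ \text{years}.$$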
More Definitions. A diagonal matrix $D = [d_{ij}]$ is a square matrix with $d_{ij} = 0$ whenever $i \neq j$. The identity matrix of order $n$, $I_n$, is a diagonal matrix with entries $i_{ii} = 1$. An upper-triangular $n \times n$ matrix $U = [u_{ij}]$ has the entries $u_{ij} = 0$ whenever $i > j$. A lower-triangular $n \times n$ matrix $L = [l_{ij}]$ has the entries $l_{ij} = 0$ whenever $i < j$.
Examples. Do you see that the product of the matrices shown equals $A$? Do you see that $A$ can be decomposed as $A = LU$?
Matrix Form for the Linear System. The linear system $E_1, \dots, E_n$ can be viewed as the matrix equation $A\mathbf{x} = \mathbf{b}$.
LU Decomposition. The factorization of $A$ is particularly useful when it has the form $A = LU$, because we can rewrite the matrix equation $A\mathbf{x} = \mathbf{b}$ as $LU\mathbf{x} = \mathbf{b}$. Setting $\mathbf{y} = U\mathbf{x}$, we solve $L\mathbf{y} = \mathbf{b}$ by forward substitution for $\mathbf{y}$ and then $U\mathbf{x} = \mathbf{y}$ for $\mathbf{x}$ by backward substitution, and the system is solved.
Matrix Factorization: LU Decomposition. Theorem: If Gaussian elimination can be performed on $A\mathbf{x} = \mathbf{b}$ without row interchanges, then the decomposition $A = LU$ is possible, where $U$ is the upper-triangular matrix produced by the elimination and $L$ is unit lower-triangular with the elimination multipliers $m_{ji}$ below the diagonal.
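A NumPy sketch of this factorization (Doolittle form, no pivoting) together with the forward/backward substitution solve; the function names are illustrative and the code assumes the no-row-interchange hypothesis of the theorem.

```python
import numpy as np

def lu_doolittle(A):
    """A = LU with L unit lower-triangular (the multipliers) and U upper-triangular."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # multiplier m_ik stored in L
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def solve_lu(L, U, b):
    """Solve LUx = b: forward substitution for y, then backward substitution for x."""
    b = np.array(b, dtype=float)
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                    # Ly = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):        # Ux = y
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```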
Crout Factorization for Tridiagonal Systems
Tridiagonal Linear System
Tridiagonal Linear System: Thomas Algorithm
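A sketch of the Thomas algorithm (the simplified Crout-style forward sweep plus back substitution for tridiagonal systems), assuming the system needs no pivoting; the argument layout (three diagonals as separate vectors) is an illustrative choice.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: sub-diagonal a (length n-1), diagonal b (length n),
    super-diagonal c (length n-1), right-hand side d (length n)."""
    a, b, c, d = (np.asarray(v, dtype=float) for v in (a, b, c, d))
    n = len(b)
    cp = np.zeros(n - 1)      # modified super-diagonal
    dp = np.zeros(n)          # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):     # forward elimination sweep
        denom = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```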
Iterative Methods for Solving Linear Systems of Equations
Iterative Methods. An iterative technique to solve $A\mathbf{x} = \mathbf{b}$ starts with an initial approximation $\mathbf{x}^{(0)}$ and generates a sequence $\{\mathbf{x}^{(k)}\}_{k=0}^{\infty}$. First we convert the system $A\mathbf{x} = \mathbf{b}$ into an equivalent form $\mathbf{x} = T\mathbf{x} + \mathbf{c}$, and generate the sequence of approximations by $\mathbf{x}^{(k)} = T\mathbf{x}^{(k-1)} + \mathbf{c}$, $k = 1, 2, \dots$. This procedure is similar to the fixed-point method. The stopping criterion: $\dfrac{\|\mathbf{x}^{(k)} - \mathbf{x}^{(k-1)}\|}{\|\mathbf{x}^{(k)}\|} < \varepsilon$.
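A generic NumPy sketch of this fixed-point loop; the tolerance, iteration cap, and function name are illustrative. The Jacobi and Gauss-Seidel methods below are particular choices of $T$ and $\mathbf{c}$.

```python
import numpy as np

def fixed_point_iteration(T, c, x0, tol=1e-8, max_iter=1000):
    """Iterate x^(k) = T x^(k-1) + c until the relative change drops below tol."""
    x_prev = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x = T @ x_prev + c
        # stopping criterion: ||x^(k) - x^(k-1)|| / ||x^(k)|| < tol
        if np.linalg.norm(x - x_prev, np.inf) / np.linalg.norm(x, np.inf) < tol:
            return x, k
        x_prev = x
    return x_prev, max_iter
```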
Iterative Methods (Example) We rewrite the system in the x=Tx+c form
Iterative Methods (Example) – cont. We start the iterations with an initial approximation $\mathbf{x}^{(0)}$ (for example, the zero vector). Continuing the iterations, the results are shown in the table:
The Jacobi Iterative Method. The method of the example is called the Jacobi iterative method: $x_i^{(k)} = \dfrac{1}{a_{ii}}\left(b_i - \sum_{j=1,\, j\neq i}^{n} a_{ij}\, x_j^{(k-1)}\right)$, $i = 1, \dots, n$.
Algorithm: Jacobi Iterative Method
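A component-wise NumPy sketch of the Jacobi algorithm; the stopping tolerance, iteration cap, and the small guard against division by a zero norm are illustrative choices, not from the slides.

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=500):
    """Jacobi iteration: every x_i^(k) uses only values from iteration k-1."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for k in range(1, max_iter + 1):
        x_new = np.empty(n)
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]   # sum over j != i
            x_new[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x_new - x, np.inf) / max(np.linalg.norm(x_new, np.inf), 1e-30) < tol:
            return x_new, k
        x = x_new
    return x, max_iter
```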
The Jacobi Method: x=Tx+c Form
The Jacobi Method: x=Tx+c Form (cont). With the splitting $A = D - L - U$ ($D$ the diagonal of $A$, $-L$ and $-U$ its strictly lower- and upper-triangular parts), the equation $A\mathbf{x} = \mathbf{b}$ can be transformed into $D\mathbf{x} = (L + U)\mathbf{x} + \mathbf{b}$. Finally, $\mathbf{x} = D^{-1}(L + U)\mathbf{x} + D^{-1}\mathbf{b}$, that is, $T_j = D^{-1}(L + U)$ and $\mathbf{c}_j = D^{-1}\mathbf{b}$.
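A short NumPy sketch of building $T_j$ and $\mathbf{c}_j$ from this splitting; the function name is illustrative, and the result can be fed to the generic fixed-point loop above.

```python
import numpy as np

def jacobi_matrix_form(A, b):
    """Return T_j = D^{-1}(L + U) and c_j = D^{-1} b from the splitting A = D - L - U."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(np.diag(A))
    L_plus_U = D - A                    # off-diagonal part of A, with signs of the splitting
    T = np.linalg.solve(D, L_plus_U)    # D^{-1}(L + U)
    c = np.linalg.solve(D, b)           # D^{-1} b
    return T, c
```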
The Gauss-Seidel Iterative Method. The idea of GS is to compute $x_i^{(k)}$ using the most recently calculated values: $x_i^{(k)} = \dfrac{1}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}\, x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij}\, x_j^{(k-1)}\right)$. In our example, starting the iterations with the same $\mathbf{x}^{(0)}$, we obtain the results shown in the table.
The Gauss-Seidel Iterative Method. Gauss-Seidel in the $\mathbf{x} = T\mathbf{x} + \mathbf{c}$ (fixed point) form: $(D - L)\mathbf{x}^{(k)} = U\mathbf{x}^{(k-1)} + \mathbf{b}$. Finally, $\mathbf{x}^{(k)} = (D - L)^{-1}U\,\mathbf{x}^{(k-1)} + (D - L)^{-1}\mathbf{b}$, that is, $T_g = (D - L)^{-1}U$ and $\mathbf{c}_g = (D - L)^{-1}\mathbf{b}$.
Algorithm: Gauss-Seidel Iterative Method
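A component-wise NumPy sketch of the Gauss-Seidel algorithm; as with the Jacobi sketch, the tolerance and iteration cap are illustrative defaults.

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_iter=500):
    """Gauss-Seidel: x_i^(k) uses the already-updated components x_1..x_{i-1} of step k."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) / max(np.linalg.norm(x, np.inf), 1e-30) < tol:
            return x, k
    return x, max_iter
```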
The Successive Over-Relaxation Method (SOR). SOR is devised by applying extrapolation to the GS method. The extrapolation takes the form of a weighted average between the previous iterate and the computed GS iterate, successively for each component: $x_i^{(k)} = (1 - \omega)\, x_i^{(k-1)} + \omega\, \bar{x}_i^{(k)}$, where $\bar{x}_i^{(k)}$ denotes a GS iterate and $\omega$ is the extrapolation factor. The idea is to choose a value of $\omega$ that will accelerate the rate of convergence: $\omega < 1$ gives under-relaxation, $\omega > 1$ gives over-relaxation. A sketch of the update is given below.
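A NumPy sketch of SOR obtained by weighting each Gauss-Seidel component update with $\omega$; the defaults and naming are illustrative, and choosing a good $\omega$ for a given problem is left to the user.

```python
import numpy as np

def sor(A, b, x0, omega, tol=1e-8, max_iter=500):
    """SOR: weighted average of the previous iterate and the Gauss-Seidel update.
    omega < 1: under-relaxation; omega > 1: over-relaxation."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            gs = (b[i] - s) / A[i, i]                      # Gauss-Seidel value for component i
            x[i] = (1 - omega) * x_old[i] + omega * gs     # extrapolated update
        if np.linalg.norm(x - x_old, np.inf) / max(np.linalg.norm(x, np.inf), 1e-30) < tol:
            return x, k
    return x, max_iter
```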
SOR: Example Solution: x=(3, 4, -5)