Linear Systems of Equations
Direct Methods for Solving Linear Systems of Equations
Direct Methods for Solving Linear Systems of Equations
The linear system
$$E_i:\; a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n = b_i, \qquad i = 1, \dots, n,$$
for the unknowns $x_1, \dots, x_n$, where the coefficients $a_{ij}$ and $b_i$ are known. Operations to simplify the linear system ($\lambda \neq 0$ is a constant):
$(\lambda E_i) \to (E_i)$: equation $E_i$ can be multiplied by $\lambda$;
$(E_i + \lambda E_j) \to (E_i)$: a multiple of equation $E_j$ can be added to $E_i$;
$(E_i) \leftrightarrow (E_j)$: equations $E_i$ and $E_j$ can be interchanged.
Direct Methods Example:
The system is now in triangular form and can be solved by a backward substitution process
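The backward-substitution step can be sketched in Python (a minimal illustration, not from the slides; the 2×2 upper-triangular example is made up):

```python
def back_substitution(U, b):
    """Solve Ux = b for an upper-triangular matrix U (lists of lists),
    working from the last equation upward."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the already-known components, then divide by the pivot
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

# Example: 2x1 + x2 = 5, 3x2 = 6  ->  x2 = 2, x1 = 1.5
U = [[2.0, 1.0], [0.0, 3.0]]
b = [5.0, 6.0]
print(back_substitution(U, b))  # [1.5, 2.0]
```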
Definitions
An $n \times m$ ($n$ by $m$) matrix: $A = [a_{ij}]$, with rows $1 \le i \le n$ and columns $1 \le j \le m$.
The $1 \times n$ matrix (an $n$-dimensional row vector): $y = [y_1 \; y_2 \; \cdots \; y_n]$.
The $n \times 1$ matrix (an $n$-dimensional column vector): $x = [x_1 \; x_2 \; \cdots \; x_n]^T$.
Definitions
If $A$ is an $n \times n$ matrix and $b$ is an $n$-dimensional column vector, then $[A, b] = [A \mid b]$ is the augmented matrix.
Gaussian Elimination with Backward Substitution
The general Gaussian elimination procedure is applied to the linear system $Ax = b$. First form the augmented matrix $\tilde{A} = [A \mid b]$ with $a_{i,n+1} = b_i$ for $i = 1, \dots, n$.
Gaussian Elimination with Backward Substitution
The procedure, provided the pivots $a_{kk}^{(k)} \neq 0$ for each $k$, yields the resulting matrix of the form
$$\begin{bmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & \cdots & a_{1n}^{(1)} & a_{1,n+1}^{(1)} \\
0 & a_{22}^{(2)} & \cdots & a_{2n}^{(2)} & a_{2,n+1}^{(2)} \\
\vdots & & \ddots & \vdots & \vdots \\
0 & \cdots & 0 & a_{nn}^{(n)} & a_{n,n+1}^{(n)}
\end{bmatrix},$$
which is triangular.
Gaussian Elimination (cont.)
Since the new linear system is triangular, backward substitution can be performed when $a_{nn}^{(n)} \neq 0$:
$$x_n = \frac{a_{n,n+1}^{(n)}}{a_{nn}^{(n)}}, \qquad
x_i = \frac{a_{i,n+1}^{(i)} - \sum_{j=i+1}^{n} a_{ij}^{(i)} x_j}{a_{ii}^{(i)}}, \quad i = n-1, \dots, 1.$$
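The elimination and backward-substitution steps can be sketched together in Python (an illustrative implementation without row interchanges; the 2×2 example system is made up):

```python
def gaussian_elimination(A, b):
    """Solve Ax = b by Gaussian elimination with backward substitution.
    No row interchanges; assumes every pivot is nonzero."""
    n = len(b)
    # Work on the augmented matrix [A | b]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n - 1):            # elimination
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]     # multiplier m_ik
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n                     # backward substitution
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example: x + y = 3, 2x - y = 0  ->  x = 1, y = 2
print(gaussian_elimination([[1.0, 1.0], [2.0, -1.0]], [3.0, 0.0]))  # [1.0, 2.0]
```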
About Gaussian Elimination and Cramer’s Rule
Gaussian elimination requires about $n^3/3$ arithmetic operations (multiplications/divisions). Cramer's rule with cofactor expansion requires about $(n+1)!$ arithmetic operations. A simple problem on an $11 \times 11$ grid involves $n = 81$ unknowns, which by Cramer's rule needs about $82! \approx 5 \times 10^{122}$ operations. What time is required to solve this problem by Cramer's rule using a 100-megaflops machine? About $10^{107}$ years!
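The comparison can be checked directly (a rough estimate; the $n^3/3$ and $(n+1)!$ operation counts and the $3.156 \times 10^7$ seconds per year are the assumptions here):

```python
import math

# Assumed rough operation counts: ~n^3/3 multiplications/divisions for
# Gaussian elimination, ~(n+1)! operations for Cramer's rule by cofactors.
n = 81
gauss_ops = n**3 // 3
cramer_ops = math.factorial(n + 1)

flops = 1e8                      # a 100-megaflops machine
seconds_per_year = 3.156e7
years = cramer_ops / flops / seconds_per_year
print(f"Gaussian elimination: ~{gauss_ops:.2e} operations")
print(f"Cramer's rule on this machine: ~{years:.1e} years")
```

The punch line is that $82!$ dwarfs any conceivable hardware speedup, while elimination stays polynomial in $n$.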
More Definitions
A diagonal matrix is a square matrix with $a_{ij} = 0$ whenever $i \neq j$.
The identity matrix of order $n$, $I_n = [\delta_{ij}]$, is a diagonal matrix with entries $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ if $i \neq j$.
An upper-triangular $n \times n$ matrix $U = [u_{ij}]$ has the entries $u_{ij} = 0$ whenever $i > j$.
A lower-triangular $n \times n$ matrix $L = [l_{ij}]$ has the entries $l_{ij} = 0$ whenever $i < j$.
Examples
Do you see that the product works out as claimed? Do you see that $A$ can be decomposed as $A = LU$?
Matrix Form for the Linear System
The linear system $a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n = b_i$, $i = 1, \dots, n$, can be viewed as the matrix equation $Ax = b$.
LU decomposition
The factorization is particularly useful when it has the form $A = LU$, with $L$ lower triangular and $U$ upper triangular, because we can rewrite the matrix equation $Ax = b$ as $L(Ux) = b$. Setting $y = Ux$, solving $Ly = b$ by forward substitution for $y$ and then $Ux = y$ for $x$ by backward substitution, we solve the system.
Matrix Factorization: LU decomposition
Theorem: If Gaussian elimination can be performed on the linear system $Ax = b$ without row interchanges, then the decomposition $A = LU$ is possible, where $U$ is the upper-triangular matrix produced by the elimination and $L$ is the unit lower-triangular matrix whose subdiagonal entries are the multipliers $m_{ji} = a_{ji}^{(i)} / a_{ii}^{(i)}$.
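A minimal Doolittle-style sketch of the factorization in the theorem, plus the two substitution sweeps (illustrative only; it assumes no row interchanges are needed, and the 2×2 example matrix is made up):

```python
def lu_decompose(A):
    """Factor A = LU with L unit lower-triangular (Doolittle form).
    Assumes elimination proceeds without row interchanges."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]      # multiplier stored in L
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Solve LUx = b: forward substitution for y, then backward for x."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                       # Ly = b
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):           # Ux = y
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

L, U = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
print(L)                          # [[1.0, 0.0], [1.5, 1.0]]
print(U)                          # [[4.0, 3.0], [0.0, -1.5]]
print(lu_solve(L, U, [10.0, 12.0]))  # [1.0, 2.0]
```

Once $L$ and $U$ are computed, each new right-hand side costs only the two $O(n^2)$ substitution sweeps.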
Crout Factorization for Tridiagonal Systems
Tridiagonal Linear System
Tridiagonal Linear System: Thomas Algorithm
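The slide's worked details did not survive, but a standard sketch of the Thomas algorithm looks like this (illustrative; the naming of the diagonals `a`, `b`, `c` and the 3×3 example system are assumptions):

```python
def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system in O(n) operations.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(d)
    cp = [0.0] * n                    # modified super-diagonal
    dp = [0.0] * n                    # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):             # forward sweep
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n                     # back substitution
    x[n - 1] = dp[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: 2x1 - x2 = 1, -x1 + 2x2 - x3 = 0, -x2 + 2x3 = 1;
# the exact solution is x = (1, 1, 1)
print(thomas([0.0, -1.0, -1.0], [2.0, 2.0, 2.0],
             [-1.0, -1.0, 0.0], [1.0, 0.0, 1.0]))
```

Only the three diagonals are stored, so the cost is linear in $n$ rather than the $O(n^3)$ of full elimination.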
Iterative Methods for Solving Linear Systems of Equations
Iterative Methods
An iterative technique to solve $Ax = b$ starts with an initial approximation $x^{(0)}$ and generates a sequence $\{x^{(k)}\}_{k=0}^{\infty}$. First we convert the system $Ax = b$ into an equivalent form $x = Tx + c$, and generate the sequence of approximations by
$$x^{(k)} = T x^{(k-1)} + c, \qquad k = 1, 2, \dots$$
This procedure is similar to the fixed-point method. The stopping criterion:
$$\frac{\|x^{(k)} - x^{(k-1)}\|}{\|x^{(k)}\|} < \varepsilon.$$
Iterative Methods (Example)
We rewrite the system in the $x = Tx + c$ form:
Iterative Methods (Example) – cont.
and start the iterations with an initial approximation $x^{(0)}$. Continuing the iterations, the results are in the table.
The Jacobi Iterative Method
The method of the example is called the Jacobi iterative method: each component is computed from the previous iterate by
$$x_i^{(k)} = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{j \neq i} a_{ij}\, x_j^{(k-1)}\Bigr), \qquad i = 1, \dots, n.$$
Algorithm: Jacobi Iterative Method
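A minimal sketch of the Jacobi algorithm (illustrative; the tolerance, the iteration cap, and the diagonally dominant example system are assumptions, not from the slides):

```python
def jacobi(A, b, x0, tol=1e-8, max_iter=100):
    """Jacobi iteration: every component of the new iterate is computed
    from the previous iterate only.  Assumes nonzero diagonal entries."""
    n = len(b)
    x = x0[:]
    for _ in range(max_iter):
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        # stopping criterion: ||x_new - x||_inf / ||x_new||_inf < tol
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol * max(map(abs, x_new)):
            return x_new
        x = x_new
    return x

# Example: a diagonally dominant system with exact solution (1, 2)
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
print(jacobi(A, b, [0.0, 0.0]))  # converges toward x = (1, 2)
```

Because each sweep reads only the old iterate, the component updates are independent and easy to parallelize.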
The Jacobi Method: x=Tx+c Form
The Jacobi Method: x=Tx+c Form (cont)
Splitting $A = D - L - U$, where $D$ is the diagonal of $A$ and $-L$, $-U$ are its strictly lower- and strictly upper-triangular parts, the equation $Ax = b$ can be transformed into
$$Dx = (L + U)x + b.$$
Finally,
$$x^{(k)} = D^{-1}(L + U)\, x^{(k-1)} + D^{-1} b, \qquad T_j = D^{-1}(L + U), \quad c_j = D^{-1} b.$$
The Gauss-Seidel Iterative Method
The idea of GS is to compute $x_i^{(k)}$ using the most recently calculated values:
$$x_i^{(k)} = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{j<i} a_{ij}\, x_j^{(k)} - \sum_{j>i} a_{ij}\, x_j^{(k-1)}\Bigr).$$
In our example, starting the iterations with $x^{(0)}$, we obtain the results in the table.
The Gauss-Seidel Iterative Method
Gauss-Seidel in the $x = Tx + c$ (fixed-point) form:
$$(D - L)\, x^{(k)} = U x^{(k-1)} + b.$$
Finally,
$$x^{(k)} = (D - L)^{-1} U\, x^{(k-1)} + (D - L)^{-1} b, \qquad T_g = (D - L)^{-1} U, \quad c_g = (D - L)^{-1} b.$$
Algorithm: Gauss-Seidel Iterative Method
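A minimal sketch of the Gauss-Seidel algorithm (illustrative; the tolerance, iteration cap, and the diagonally dominant example system are assumptions):

```python
def gauss_seidel(A, b, x0, tol=1e-8, max_iter=100):
    """Gauss-Seidel iteration: components are updated in place, so each
    update immediately uses the most recently computed values."""
    n = len(b)
    x = x0[:]
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new_xi = (b[i] - s) / A[i][i]
            diff = max(diff, abs(new_xi - x[i]))
            x[i] = new_xi                 # overwrite: later rows see it
        if diff < tol:                    # largest componentwise change
            return x
    return x

# Example: a diagonally dominant system with exact solution (1, 2)
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
print(gauss_seidel(A, b, [0.0, 0.0]))  # converges toward x = (1, 2)
```

The only structural difference from Jacobi is that the sweep overwrites `x` as it goes, which typically roughly halves the number of iterations on systems where both methods converge.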
The Successive Over-Relaxation Method (SOR)
The SOR method is devised by applying extrapolation to the GS method. The extrapolation takes the form of a weighted average between the previous iterate and the computed GS iterate, successively for each component:
$$x_i^{(k)} = (1 - \omega)\, x_i^{(k-1)} + \omega\, \bar{x}_i^{(k)},$$
where $\bar{x}_i^{(k)}$ denotes a GS iterate and $\omega$ is the extrapolation factor. The idea is to choose a value of $\omega$ that will accelerate the rate of convergence.
$0 < \omega < 1$: under-relaxation; $\omega > 1$: over-relaxation.
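The weighted-average update can be sketched as follows (illustrative; the choice $\omega = 1.1$, the tolerance, and the example system are assumptions, not from the slides):

```python
def sor(A, b, x0, omega, tol=1e-8, max_iter=200):
    """SOR iteration: each component is a weighted average of the
    previous iterate and the freshly computed Gauss-Seidel value."""
    n = len(b)
    x = x0[:]
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - s) / A[i][i]               # Gauss-Seidel iterate
            new_xi = (1 - omega) * x[i] + omega * gs  # relaxation step
            diff = max(diff, abs(new_xi - x[i]))
            x[i] = new_xi
        if diff < tol:
            return x
    return x

# Example: a symmetric positive definite system with exact solution (1, 2)
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
print(sor(A, b, [0.0, 0.0], omega=1.1))  # converges toward x = (1, 2)
```

With $\omega = 1$ this reduces exactly to Gauss-Seidel; for symmetric positive definite matrices SOR converges for any $0 < \omega < 2$.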
SOR: Example
Solution: $x = (3, 4, -5)$.