1 Chapter 7 Numerical Methods for the Solution of Systems of Equations

2 Introduction This chapter covers techniques for solving linear and nonlinear systems of equations. Two important problems: – The linear systems problem: given an n x n matrix A and a vector b, find x such that Ax = b. – The nonlinear systems problem: given a function f, find x such that f(x) = 0.

3 7.1 Linear Algebra Review

4

5 Theorem 7.1 and Corollary 7.1 Singular vs. nonsingular matrices

6 Tridiagonal Matrices Upper triangular: Lower triangular: Symmetric matrices, positive definite matrices, etc. The concepts of linear independence/dependence, spanning, basis, vector space/subspace, dimension, and orthogonality/orthonormality should also be reviewed.

7 7.2 Linear Systems and Gaussian Elimination As in Section 2.6, the linear system Ax = b can be written as a single augmented matrix [A | b]. Elementary row operations are applied to the augmented matrix to solve the linear system. Row equivalence: if one matrix can be obtained from another using only elementary row operations, the two matrices are said to be row equivalent.

8 Theorem 7.2

9 Example 7.1

10 Example 7.1 (con.)

11 Partial Pivoting

12 The Problem of Naive Gaussian Elimination The weakness of naive Gaussian elimination is the potential for division by a zero (or very small) pivot. For example, consider the following system: The exact solution: What happens when we solve this system using the naive algorithm and the pivoting algorithm?

13 Discussion Using the naive algorithm: incorrect result. Using the pivoting algorithm: correct result.
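
To make the comparison concrete, here is a minimal sketch of Gaussian elimination with back substitution in which partial pivoting can be switched on or off (a NumPy illustration, not a transcription of Algorithms 7.1 and 7.2):

```python
import numpy as np

def gauss_solve(A, b, pivot=True):
    """Solve Ax = b by Gaussian elimination; partial pivoting is optional."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    for k in range(n - 1):
        if pivot:
            # Bring the row with the largest |entry| in column k up to the pivot position.
            p = k + np.argmax(np.abs(A[k:, k]))
            A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]       # multiplier; huge if the pivot is tiny (or zero)
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

On a system with a very small leading pivot, the pivot=False path magnifies rounding error through a huge multiplier, while pivot=True selects the largest available pivot and avoids the problem.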

14 Operation Counts You can trace Algorithms 7.1 and 7.2 to count the arithmetic operations and estimate the computational cost.

15 The LU Factorization Our goal in this section is to develop a matrix factorization that allows us to save and reuse the work from the elimination step. Why don't we just compute A^(-1) (to check whether A is nonsingular)? – The answer is that it is not cost-effective to do so. – The total cost is worked out in Exercise 7. What we will do is show that we can factor the matrix A into the product of a lower triangular matrix L and an upper triangular matrix U: A = LU.

16 The LU Factorization

17 Example 7.2

18 Example 7.2 (con.)

19 The Computational Cost The total cost of the Gaussian elimination process is O(n^3). If the factorization has already been done, then the cost of the two triangular solution steps (forward and back substitution) is only O(n^2). Constructing the LU factorization is surprisingly easy.

20 The LU Factorization: Algorithms 7.5 and 7.6
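
As a sketch of the factor-once, solve-many idea, the following NumPy code builds a Doolittle-style LU factorization (without pivoting) and the two triangular solves; it is only an illustration, not a transcription of Algorithms 7.5 and 7.6:

```python
import numpy as np

def lu_factor(A):
    """Return (L, U) with A = L @ U, L unit lower triangular, U upper triangular."""
    A = A.astype(float)
    n = A.shape[0]
    for k in range(n - 1):
        A[k + 1:, k] /= A[k, k]                                # store multipliers below the diagonal
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return np.tril(A, -1) + np.eye(n), np.triu(A)

def lu_solve(L, U, b):
    """Solve LUx = b: forward substitution (Ly = b), then back substitution (Ux = y)."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]                         # L has unit diagonal
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```

The factorization costs O(n^3); every additional right-hand side then costs only the O(n^2) forward/back substitution pair.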

21

22

23 Example 7.3

24 Example 7.3 (con.) L U

25 Pivoting and the LU Decomposition Can we use pivoting in the LU decomposition without destroying the algorithm? – Because of the triangular structure of the LU factors, we can implement pivoting almost exactly as we did before. – The difference is that we must keep track of how the rows are interchanged in order to apply the forward and backward solution steps properly.

26 Example 7.4 Next page

27 Example 7.4 (con.) We need to keep track of the row interchanges.

28 Discussion How do we keep track of the row interchanges? – By using an index array. – For example, in Example 7.4 the final version of the index array J is shown on the slide; you can check that it is correct.
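
A sketch of how an index array can record the interchanges during the factorization (the array J below plays the role of the index array in this discussion; the helper names are hypothetical, NumPy assumed):

```python
import numpy as np

def lu_factor_pivot(A):
    """LU with partial pivoting; J records the final row order, so that A[J] = L @ U."""
    A = A.astype(float)
    n = A.shape[0]
    J = np.arange(n)                           # index array of row interchanges
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))    # pivot row
        A[[k, p]] = A[[p, k]]
        J[[k, p]] = J[[p, k]]                  # record the interchange
        A[k + 1:, k] /= A[k, k]
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return np.tril(A, -1) + np.eye(n), np.triu(A), J

# To solve Ax = b, apply the same interchanges to b before the triangular solves
# (lu_solve is the routine from the previous sketch):
#   L, U, J = lu_factor_pivot(A);  x = lu_solve(L, U, b[J])
```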

29 Perturbation, Conditioning, and Stability Example 7.5

30 Vector and Matrix Norms For example: – Infinity norm: ||x||_inf = max_i |x_i| – Euclidean 2-norm: ||x||_2 = (x_1^2 + x_2^2 + ... + x_n^2)^(1/2)

31 Matrix Norm Properties of a matrix norm: (1) ||Ax|| <= ||A|| ||x||; (2) ||AB|| <= ||A|| ||B||. For example: – The matrix infinity norm: the maximum absolute row sum. – The matrix 2-norm: the largest singular value of A (the square root of the largest eigenvalue of A^T A).
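
For a quick numerical check, NumPy evaluates these norms directly (np.linalg.norm is a standard NumPy routine; the vector and matrix below are just made-up examples):

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])
A = np.array([[1.0, 2.0], [-3.0, 4.0]])

print(np.linalg.norm(x, np.inf))   # vector infinity norm: max |x_i| -> 4.0
print(np.linalg.norm(x, 2))        # Euclidean 2-norm: sqrt(9 + 16 + 1)
print(np.linalg.norm(A, np.inf))   # matrix infinity norm: max absolute row sum -> 7.0
print(np.linalg.norm(A, 2))        # matrix 2-norm: largest singular value of A
```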

32 Example 7.6

33 The Condition Number and Perturbations Note that the condition number of A is defined by kappa(A) = ||A|| ||A^(-1)||.

34 Definition 7.3 and Theorem 7.3

35 A A^(-1) = I

36 Theorem 7.4

37 Theorems 7.5 and 7.6

38 Theorem 7.7

39 Definition 7.4 An example: Example 7.7

40

41 Theorem 7.9

42 Discussion Is Gaussian elimination with partial pivoting a stable process? – For a sufficiently accurate computer (unit roundoff u small enough) and a sufficiently small problem (n small enough), Gaussian elimination with partial pivoting will produce solutions that are stable and accurate.

43 Estimating the Condition Number Singular matrices are perhaps something of a rarity, but a nonsingular matrix can be arbitrarily close to a singular one. If the solution to a linear system changes a great deal when the problem changes only very slightly, then we suspect that the matrix is ill conditioned (nearly singular). The condition number is an important indicator of an ill-conditioned matrix.

44 Estimating the Condition Number Estimate the condition number
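
As a quick illustration (assuming NumPy; np.linalg.cond returns the 2-norm condition number by default, and the Hilbert matrix is used only as a well-known ill-conditioned example):

```python
import numpy as np

n = 8
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix
I = np.eye(n)

print(np.linalg.cond(I))   # 1.0 -- perfectly conditioned
print(np.linalg.cond(H))   # very large (roughly 10^10) -- ill conditioned
```

A common rule of thumb: if kappa(A) is about 10^k, expect to lose roughly k digits of accuracy in the computed solution.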

45 Example 7.8

46 Iterative Refinement Gaussian elimination can be adversely affected by rounding error, especially if the matrix is ill conditioned. The iterative refinement (iterative improvement) algorithm can be used to improve the accuracy of a computed solution.
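
A minimal sketch of the idea, reusing the hypothetical LU helpers from the earlier sketches (an illustration, not Algorithm 7.10 itself): factor A once, then repeatedly solve for a correction computed from the residual.

```python
import numpy as np

def refine(A, b, x, steps=3):
    """Iterative refinement: repeatedly solve A d = r for the correction d and update x."""
    L, U, J = lu_factor_pivot(A)       # reuse the factorization already computed
    for _ in range(steps):
        r = b - A @ x                  # residual (ideally accumulated in higher precision)
        d = lu_solve(L, U, r[J])       # correction from the existing factors, O(n^2) per step
        x = x + d
    return x
```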

47 Theorem 7.11 and Algorithm 7.10

48 Example 7.9

49 Example 7.9 (con.) compare

50 SPD Matrices and the Cholesky Decomposition SPD matrices: symmetric, positive definite matrices. You can prove this theorem by induction.

51 The Cholesky Decomposition There are a number of different ways of actually constructing the Cholesky decomposition. All of these constructions are equivalent, because the Cholesky factorization is unique. One common scheme uses the following formulas: This is a very efficient algorithm. You can read Section 9.22 to learn more about the Cholesky method.
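
A sketch of one standard column-by-column construction of A = L L^T with L lower triangular (the usual Cholesky formulas; in practice numpy.linalg.cholesky does the same job):

```python
import numpy as np

def cholesky(A):
    """Return lower triangular L with A = L @ L.T (A assumed symmetric positive definite)."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i):
            # Off-diagonal entries: l_ij = (a_ij - sum_k l_ik l_jk) / l_jj
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
        # Diagonal entries: l_ii = sqrt(a_ii - sum_k l_ik^2)
        L[i, i] = np.sqrt(A[i, i] - L[i, :i] @ L[i, :i])
    return L
```

For SPD matrices no pivoting is required, and the work is roughly half that of a general LU factorization.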

52 Iterative Methods for Linear Systems: A Brief Survey If the coefficient matrix is very large and sparse, then Gaussian elimination may not be the best way to solve the linear system. Why? Even if A is sparse, the individual factors L and U in A = LU may not be as sparse as A.
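
A small illustration of this fill-in effect (the arrowhead matrix below is just a convenient sparse example, not the one in the text; scipy.linalg.lu is assumed to be available):

```python
import numpy as np
from scipy.linalg import lu

n = 8
A = 4.0 * np.eye(n)
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = 4.0          # "arrowhead": nonzeros only on the diagonal, first row, and first column

P, L, U = lu(A)
print(np.count_nonzero(A))                        # 3n - 2 nonzeros: A is sparse
print(np.count_nonzero(L) + np.count_nonzero(U))  # roughly n^2: the factors fill in
```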

53 Example 7.10

54 Example 7.10 (con.)

55 Splitting Methods (for details, see Chapter 9)

56 Theorem 7.13

57 Definition 7.6

58 Theorem 7.14 Conclusion:

59 Example of Splitting Methods -- Jacobi Iteration Jacobi iteration: in this method, the splitting matrix is M = D, the diagonal of A.
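
A minimal sketch of Jacobi iteration written in the splitting form x_{k+1} = x_k + D^(-1)(b - A x_k) (assuming NumPy; whether it converges depends on A, as in the theorems above):

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-10, max_iter=500):
    """Jacobi iteration: splitting matrix M = D, the diagonal of A."""
    d = np.diag(A)                         # diagonal entries of A
    x = x0.astype(float)
    for _ in range(max_iter):
        x_new = x + (b - A @ x) / d        # x_{k+1} = x_k + D^{-1}(b - A x_k)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x
```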

60 Example 7.12

61 Example 7.12 (con.)

62 Example of Splitting Methods -- Gauss-Seidel Iteration Gauss-Seidel iteration: in this method, the splitting matrix is M = L, the lower triangular part of A (including its diagonal).
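
A sketch of Gauss-Seidel, which differs from Jacobi by using each updated component as soon as it is available (assuming NumPy):

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration: sweep through the unknowns, using updated values immediately."""
    n = len(b)
    x = x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]   # new x[:i], old x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x
    return x
```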

63 Example 7.13

64 Theorem 7.15

65 Example of Splitting Methods -- SOR Iteration SOR: successive over-relaxation iteration
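
A sketch of SOR, which blends each Gauss-Seidel update with the old value through a relaxation parameter omega, 0 < omega < 2 (omega = 1 recovers Gauss-Seidel; assuming NumPy):

```python
import numpy as np

def sor(A, b, x0, omega=1.5, tol=1e-10, max_iter=500):
    """SOR: Gauss-Seidel step over-relaxed by the parameter omega."""
    n = len(b)
    x = x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x_gs = (b[i] - s) / A[i, i]                  # the Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * x_gs     # over-relaxed update
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x
    return x
```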

66 Example 7.14

67 Theorem 7.16