Chapter 9: Gauss Elimination. The Islamic University of Gaza, Faculty of Engineering, Civil Engineering Department. Numerical Analysis ECIV 3306.

Introduction. Roots of a single equation: one equation in a single unknown x. A general set of equations: n equations in n unknowns, to be solved simultaneously.
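The equations on this slide are images in the transcript; in standard notation (an assumption consistent with the rest of the chapter), the two problem types are:

\[
f(x) = 0
\qquad\text{versus}\qquad
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2\\
&\;\;\vdots\\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}
\]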

Linear Algebraic Equations vs. Nonlinear Equations: a system is linear when each unknown appears only to the first power and is not multiplied by any other unknown; otherwise it is nonlinear.

Review of Matrices. A matrix [A]nxm has n rows and m columns; its elements are indicated by a_ij, where i is the row index and j is the column index. Row vector: a matrix with a single row. Column vector: a matrix with a single column. Square matrix: [A]nxm is a square matrix if n = m. A system of n equations with n unknowns has a square coefficient matrix.

Review of Matrices. Main (principal) diagonal: the elements a_ii, i = 1,...,n, of [A]nxn. Symmetric matrix: [A]nxn is symmetric if a_ij = a_ji for all i, j. Diagonal matrix: [A]nxn is diagonal if a_ij = 0 for all i ≠ j. Identity matrix: [A]nxn is an identity matrix if it is diagonal with a_ii = 1 for i = 1,...,n; shown as [I].

Review of Matrices. Upper triangular matrix: [A]nxn is upper triangular if a_ij = 0 for i > j (all elements below the main diagonal are zero). Lower triangular matrix: [A]nxn is lower triangular if a_ij = 0 for i < j. Inverse of a matrix: [A]^-1 is the inverse of [A]nxn if [A]^-1 [A] = [I]. Transpose of a matrix: [B] is the transpose of [A]nxn if b_ij = a_ji; shown as [A]' or [A]^T.

Special Types of Square Matrices (examples shown on the slide): symmetric, diagonal, identity, upper triangular, lower triangular.
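The example matrices themselves are images in the transcript; generic 3x3 instances of these forms (illustrative symbols, not the slide's numbers) are:

\[
\text{Diagonal: }\begin{bmatrix} a_{11} & 0 & 0\\ 0 & a_{22} & 0\\ 0 & 0 & a_{33}\end{bmatrix},\quad
\text{Identity: }\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix},\quad
\text{Upper triangular: }\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ 0 & a_{22} & a_{23}\\ 0 & 0 & a_{33}\end{bmatrix},\quad
\text{Lower triangular: }\begin{bmatrix} a_{11} & 0 & 0\\ a_{21} & a_{22} & 0\\ a_{31} & a_{32} & a_{33}\end{bmatrix}
\]

A symmetric example is any square matrix with a_{12} = a_{21}, a_{13} = a_{31}, a_{23} = a_{32}.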

Review of Matrices. Matrix multiplication: [C] = [A][B], where each element c_ij is the sum over k of a_ik * b_kj (the number of columns of [A] must equal the number of rows of [B]). Note: [A][B] ≠ [B][A] in general.
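A minimal NumPy sketch of this rule and of non-commutativity; the matrices below are arbitrary examples, not taken from the slide:

import numpy as np

# c_ij = sum over k of a_ik * b_kj; columns of A must equal rows of B
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [5.0, 2.0]])

print(A @ B)   # the product [A][B]
print(B @ A)   # [B][A] differs: matrix multiplication is not commutative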

Review of Matrices. Augmented matrix: a special way of showing two matrices together; for example, [A] augmented with the column vector {B} is written [A|B]. Determinant of a matrix: a single number; the determinant of [A] is shown as |A|.
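Written out with generic symbols (the slide's matrices are images), the augmented matrix of a 3x3 system is:

\[
[A\,|\,B] =
\left[\begin{array}{ccc|c}
a_{11} & a_{12} & a_{13} & b_1\\
a_{21} & a_{22} & a_{23} & b_2\\
a_{31} & a_{32} & a_{33} & b_3
\end{array}\right]
\]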

Part 3 - Objectives

Solving Small Numbers of Equations. For n ≤ 3, there are several ways to solve a system of linear equations: the graphical method, Cramer's rule, and the method of elimination. Numerical methods for solving larger numbers of linear equations: Gauss elimination (Ch. 9), LU decomposition and matrix inversion (Ch. 10).

1. Graphical Method. For two equations (n = 2): solve both equations for x2 and plot them; the intersection of the lines represents the solution. For n = 3, each equation is a plane in a 3D coordinate system, and the solution is the point where the three planes intersect. For n > 3, the graphical solution is not practical.
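A minimal Python sketch of the n = 2 case, using hypothetical coefficients (the example on the next slide is an image in the transcript, so these numbers are illustrative only):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical 2x2 system: 3*x1 + 2*x2 = 18 and -x1 + 2*x2 = 2
x1 = np.linspace(0.0, 8.0, 100)
x2_eq1 = (18.0 - 3.0 * x1) / 2.0   # equation 1 solved for x2
x2_eq2 = (2.0 + x1) / 2.0          # equation 2 solved for x2

plt.plot(x1, x2_eq1, label="3 x1 + 2 x2 = 18")
plt.plot(x1, x2_eq2, label="-x1 + 2 x2 = 2")
plt.xlabel("x1")
plt.ylabel("x2")
plt.legend()
plt.show()   # the intersection (x1 = 4, x2 = 3) is the solution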

Graphical Method - Example. Solve the two-equation system shown on the slide: plot x2 vs. x1 for each equation; the intersection of the lines represents the solution.

Graphical Method: possible difficulties. (a) No solution: the two lines are parallel. (b) Infinite solutions: the two lines coincide. (c) Ill-conditioned system: the lines are nearly parallel, so the solution is very sensitive to round-off errors.

2. Determinants and Cramer's Rule. The determinant can be illustrated for a set of three equations, [A]{x} = {b}, where [A] is the coefficient matrix.
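Reconstructed in standard notation (the slide's equations are images), the 3x3 coefficient matrix and the cofactor expansion of its determinant D are:

\[
[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\end{bmatrix},
\qquad
D = |A| =
a_{11}\begin{vmatrix} a_{22} & a_{23}\\ a_{32} & a_{33}\end{vmatrix}
- a_{12}\begin{vmatrix} a_{21} & a_{23}\\ a_{31} & a_{33}\end{vmatrix}
+ a_{13}\begin{vmatrix} a_{21} & a_{22}\\ a_{31} & a_{32}\end{vmatrix}
\]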

Graphical Method, Mathematically. The coefficient matrices of cases (a) and (b) are singular: their determinants are zero, they cannot be inverted, and there is no unique solution for these systems. The coefficient matrix of case (c) is almost singular, and its inverse is difficult to compute; this system has a unique solution, but it is not easy to determine numerically because of its extreme sensitivity to round-off errors.

Cramer’s Rule
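The rule itself is an image in the transcript; in its standard 3x3 form, each unknown is the ratio of two determinants, the numerator being D with the corresponding column replaced by {b}:

\[
x_1 = \frac{\begin{vmatrix} b_1 & a_{12} & a_{13}\\ b_2 & a_{22} & a_{23}\\ b_3 & a_{32} & a_{33}\end{vmatrix}}{D},\qquad
x_2 = \frac{\begin{vmatrix} a_{11} & b_1 & a_{13}\\ a_{21} & b_2 & a_{23}\\ a_{31} & b_3 & a_{33}\end{vmatrix}}{D},\qquad
x_3 = \frac{\begin{vmatrix} a_{11} & a_{12} & b_1\\ a_{21} & a_{22} & b_2\\ a_{31} & a_{32} & b_3\end{vmatrix}}{D}
\]

where D = |A| is the determinant of the coefficient matrix.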

Cramer's Rule. For a singular system, D = 0 and the solution cannot be obtained. For large systems, Cramer's rule is not practical because calculating determinants is costly.

3. Method of Elimination The basic strategy is to successively solve one of the equations of the set for one of the unknowns and to eliminate that variable from the remaining equations by substitution. The elimination of unknowns can be extended to systems with more than two or three equations. However, the method becomes extremely tedious to solve by hand.

Elimination of Unknowns Method. Given a 2x2 set of equations:
2.5 x1 + 6.2 x2 = 3.0
4.8 x1 - 8.6 x2 = 5.5
Multiply the 1st eqn by 8.6 and the 2nd eqn by 6.2:
21.50 x1 + 53.32 x2 = 25.8
29.76 x1 - 53.32 x2 = 34.1
Add these equations: 51.26 x1 + 0 x2 = 59.9
Solve for x1: x1 = 59.9/51.26 = 1.168552478
Using the 1st eqn, solve for x2: x2 = (3.0 - 2.5*1.168552478)/6.2 = 0.01268045242
Check whether these satisfy the 2nd eqn: 4.8*1.168552478 - 8.6*0.01268045242 = 5.500000004 (the difference from 5.5 is due to round-off errors).

Naive Gauss Elimination Method. It formalizes the previous elimination technique and extends it to large sets of equations by providing a systematic scheme (algorithm) to eliminate unknowns and to back substitute. As in the case of two equations, the technique for n equations consists of two phases: 1. Forward elimination of unknowns. 2. Back substitution.

Naive Gauss Elimination Method. Consider the following system of n equations:
a11 x1 + a12 x2 + ... + a1n xn = b1   (1)
a21 x1 + a22 x2 + ... + a2n xn = b2   (2)
...
an1 x1 + an2 x2 + ... + ann xn = bn   (n)
Form the augmented matrix [A|B].
Step 1: Forward Elimination. Reduce the system to an upper triangular system.
1.1 - First eliminate x1 from the 2nd through nth equations:
- Multiply the 1st eqn. by a21/a11 and subtract it from the 2nd equation; this is the new 2nd eqn.
- Multiply the 1st eqn. by a31/a11 and subtract it from the 3rd equation; this is the new 3rd eqn.
- ...
- Multiply the 1st eqn. by an1/a11 and subtract it from the nth equation; this is the new nth eqn.
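In formula form (standard notation; the slide shows this step with primes as an image), eliminating x1 updates rows i = 2,...,n as:

\[
a'_{ij} = a_{ij} - \frac{a_{i1}}{a_{11}}\,a_{1j},\qquad
b'_i = b_i - \frac{a_{i1}}{a_{11}}\,b_1,\qquad j = 1,\dots,n,
\]

so that a'_{i1} = 0 for every row below the pivot row.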

Naive Gauss Elimination Method (cont'd). Note: in these steps the 1st eqn is the pivot equation and a11 is the pivot element. A division by zero may occur if the pivot element is zero; naive Gauss elimination does not check for this. The modified system is shown on the slide, where a prime (') indicates that the system has been modified once.

Naive Gauss Elimination Method (cont'd). 1.2 - Now eliminate x2 from the 3rd through nth equations; the modified system is shown on the slide. Repeating steps (1.1) and (1.2) up to step (1.n-1) yields the upper triangular system shown.

Naive Gauss Elimination Method (cont’d) Step 2 : Back substitution Find the unknowns starting from the last equation. 1. Last equation involves only xn. Solve for it. 2. Use this xn in the (n-1)th equation and solve for xn-1. ... 3. Use all previously calculated x values in the 1st eqn and solve for x1.
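Written out (the slide's formulas are images; this is the standard form of back substitution):

\[
x_n = \frac{b_n^{(n-1)}}{a_{nn}^{(n-1)}},
\qquad
x_i = \frac{b_i^{(i-1)} - \sum_{j=i+1}^{n} a_{ij}^{(i-1)}\,x_j}{a_{ii}^{(i-1)}},
\quad i = n-1,\, n-2,\,\dots,\, 1,
\]

where the superscript in parentheses counts how many times a coefficient has been modified during forward elimination.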

Summary of Naive Gauss Elimination Method

Pseudo-code of Naive Gauss Elimination Method: (a) Forward Elimination, (b) Back Substitution. The pseudocode appears as images on the slide; a Python sketch of the same two phases follows.
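A minimal Python sketch under the slide's assumptions (naive, i.e. no pivoting and no check for a zero pivot); the function name naive_gauss is ours, not from the slides:

import numpy as np

def naive_gauss(A, b):
    """Solve [A]{x} = {b} by naive Gauss elimination (no pivoting)."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)

    # (a) Forward elimination: reduce A to upper triangular form
    for k in range(n - 1):              # pivot row/column k
        for i in range(k + 1, n):       # rows below the pivot
            factor = A[i, k] / A[k, k]  # naive: assumes A[k, k] != 0
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]

    # (b) Back substitution: solve from the last equation upward
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x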

Naive Gauss Elimination Method - Example 1
Solve the following system using naive Gauss elimination:
6x1 - 2x2 + 2x3 + 4x4 = 16
12x1 - 8x2 + 6x3 + 10x4 = 26
3x1 - 13x2 + 9x3 + 3x4 = -19
-6x1 + 4x2 + x3 - 18x4 = -34
Step 0: Form the augmented matrix
[  6   -2   2    4 |  16 ]
[ 12   -8   6   10 |  26 ]   R2 - 2 R1
[  3  -13   9    3 | -19 ]   R3 - 0.5 R1
[ -6    4   1  -18 | -34 ]   R4 - (-1) R1

Naive Gauss Elimination Method - Example 1 (cont'd)
Step 1: Forward elimination
1. Eliminate x1 (row 1 does not change; the pivot is 6):
[ 6   -2   2    4 |  16 ]
[ 0   -4   2    2 |  -6 ]
[ 0  -12   8    1 | -27 ]   R3 - 3 R2
[ 0    2   3  -14 | -18 ]   R4 - (-0.5) R2
2. Eliminate x2 (rows 1-2 do not change; the pivot is -4):
[ 6   -2   2    4 |  16 ]
[ 0   -4   2    2 |  -6 ]
[ 0    0   2   -5 |  -9 ]
[ 0    0   4  -13 | -21 ]   R4 - 2 R3
3. Eliminate x3 (rows 1-3 do not change; the pivot is 2):
[ 6   -2   2    4 |  16 ]
[ 0   -4   2    2 |  -6 ]
[ 0    0   2   -5 |  -9 ]
[ 0    0   0   -3 |  -3 ]

Naive Gauss Elimination Method - Example 1 (cont'd)
Step 2: Back substitution
x4 = (-3)/(-3) = 1
x3 = (-9 + 5*1)/2 = -2
x2 = (-6 - 2*(-2) - 2*1)/(-4) = 1
x1 = (16 + 2*1 - 2*(-2) - 4*1)/6 = 3
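For reference, running the naive_gauss sketch given earlier on this system reproduces the hand calculation (this snippet assumes that function is already defined):

import numpy as np

# Coefficients and right-hand side of Example 1
A = np.array([[ 6.0,  -2.0, 2.0,   4.0],
              [12.0,  -8.0, 6.0,  10.0],
              [ 3.0, -13.0, 9.0,   3.0],
              [-6.0,   4.0, 1.0, -18.0]])
b = np.array([16.0, 26.0, -19.0, -34.0])

print(naive_gauss(A, b))   # expected: [ 3.  1. -2.  1.]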

Naive Gauss Elimination Method - Example 2 (using 6 significant figures)
3.0 x1 - 0.1 x2 - 0.2 x3 = 7.85
0.1 x1 + 7.0 x2 - 0.3 x3 = -19.3     R2 - (0.1/3) R1
0.3 x1 - 0.2 x2 + 10.0 x3 = 71.4     R3 - (0.3/3) R1
Step 1: Forward elimination
3.00000 x1 - 0.100000 x2 - 0.200000 x3 = 7.85000
             7.00333 x2 - 0.293333 x3 = -19.5617
            -0.190000 x2 + 10.0200 x3 = 70.6150
Eliminating x2 (R3 - (-0.190000/7.00333) R2) gives the upper triangular system:
3.00000 x1 - 0.100000 x2 - 0.200000 x3 = 7.85000
             7.00333 x2 - 0.293333 x3 = -19.5617
                           10.0120 x3 = 70.0843

Naive Gauss Elimination Method - Example 2 (cont'd). Step 2: Back substitution: x3 = 7.00003, x2 = -2.50000, x1 = 3.00000. Exact solution: x3 = 7.0, x2 = -2.5, x1 = 3.0.

Pitfalls of Gauss Elimination Methods. 1. Division by zero: in the system 2x2 + 3x3 = 8, 4x1 + 6x2 + 7x3 = -3, 2x1 + x2 + 6x3 = 5, the pivot element a11 = 0, so the very first elimination step fails; a division by zero can occur during both the elimination and back-substitution phases. 2. Round-off errors: in the previous example, even though 6 significant digits were kept during the calculations, we end up only close to the true solution (x3 = 7.00003 instead of x3 = 7.0).

Pitfalls of Gauss Elimination (cont'd). 3. Ill-conditioned systems: systems in which small changes in the coefficients result in large changes in the solution. Alternatively, they arise when two or more equations are nearly identical, so that a wide range of answers approximately satisfies the equations. Since round-off errors can induce small changes in the coefficients, those changes can lead to large solution errors. Example: x1 + 2x2 = 10 and 1.1x1 + 2x2 = 10.4 give x1 = 4.0, x2 = 3.0, while x1 + 2x2 = 10 and 1.05x1 + 2x2 = 10.4 give x1 = 8.0, x2 = 1.0.

Pitfalls of Gauss Elimination (cont'd). 4. Singular systems: when two equations are identical, we effectively have only n-1 equations for n unknowns. To check for singularity: after completing forward elimination and obtaining the triangular system, the determinant is the product of the diagonal elements. If a zero diagonal element appears, the determinant is zero and the system is singular; the determinant of a singular system is zero.

Techniques for Improving Solutions. 1. Use of more significant figures: reduces the effect of round-off error. 2. Pivoting: if a pivot element is zero, the elimination step leads to division by zero; the same problem arises when the pivot element is close to zero. This can be avoided by: (a) Partial pivoting: switching rows so that the largest element in the pivot column becomes the pivot element. (b) Complete pivoting: searching all rows and columns for the largest element and then switching both rows and columns. 3. Scaling: helps to handle ill-conditioned systems and to minimize round-off error.

Partial Pivoting. Before each row is normalized, find the largest available coefficient in the column at and below the pivot element; then switch rows so that this largest coefficient is used as the pivot.
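A minimal Python sketch of Gauss elimination with partial (row) pivoting only, extending the earlier naive_gauss sketch; the function name is ours:

import numpy as np

def gauss_partial_pivot(A, b):
    """Gauss elimination with partial pivoting, then back substitution."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)

    for k in range(n - 1):
        # Partial pivoting: find the row with the largest |coefficient|
        # in column k, at or below the diagonal, and swap it into row k.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]

        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]

    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x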

Use of More Significant Figures to Reduce Round-off Error: Example. Use Gauss elimination to solve these two equations, keeping only 4 significant figures:
0.0003 x1 + 3.0000 x2 = 2.0001
1.0000 x1 + 1.0000 x2 = 1.000
Forward elimination gives: -9999.0 x2 = -6666.0
Solving: x2 = 0.6667 and x1 = 0.0
The exact solution is x2 = 2/3 and x1 = 1/3.

Use of More Significant Figures to Reduce Round-off Error: Example (cont'd). Effect of the number of significant figures on the computed solution:
Significant figures | x2        | x1
3                   | 0.667     | -3
4                   | 0.6667    | 0.000
5                   | 0.66667   | 0.3000
6                   | 0.666667  | 0.33000
7                   | 0.6666667 | 0.333000

Pivoting: Example. Now solve the previous example using the partial pivoting technique:
1.0000 x1 + 1.0000 x2 = 1.000
0.0003 x1 + 3.0000 x2 = 2.0001
The pivot is 1.0. Forward elimination gives 2.9997 x2 = 1.9998, so x2 = 0.6667 and x1 = 0.3333.
Checking the effect of the number of significant digits:
# of digits | x2      | x1      | Ea% in x1
4           | 0.6667  | 0.3333  | 0.01
5           | 0.66667 | 0.33333 | 0.001

Scaling: Example. A) Solve the following equations using naive Gauss elimination, keeping only 3 significant figures:
2 x1 + 100,000 x2 = 100,000
x1 + x2 = 2.0
Forward elimination gives:
2 x1 + 10.0×10^4 x2 = 10.0×10^4
          -5.00×10^4 x2 = -5.00×10^4
Solving: x2 = 1.00 and x1 = 0.00
The exact solution is x1 = 1.00002 and x2 = 0.99998.

Scaling: Example (cont'd). B) Using the scaling algorithm to solve:
2 x1 + 100,000 x2 = 100,000
x1 + x2 = 2.0
Scaling the first equation by dividing by 100,000:
0.00002 x1 + x2 = 1.0
x1 + x2 = 2.0
The rows are then pivoted (swapped) so that x1 + x2 = 2.0 comes first. Forward elimination yields:
x1 + x2 = 2.0
x2 = 1.00
Solving: x2 = 1.00 and x1 = 1.00
The exact solution is x1 = 1.00002 and x2 = 0.99998.

Scaling: Example (cont'd). C) The scaled coefficients indicate that pivoting is necessary. We therefore pivot but retain the original coefficients, giving:
x1 + x2 = 2.0
2 x1 + 100,000 x2 = 100,000
Forward elimination yields:
x1 + x2 = 2.0
100,000 x2 = 100,000
Solving: x2 = 1.00 and x1 = 1.00
Thus, scaling was useful for deciding whether pivoting was necessary, but the equations themselves did not need to be scaled to arrive at a correct result.

Determinant Evaluation. The determinant can be evaluated simply at the end of the forward elimination step when the program employs partial pivoting.
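The relation the slide refers to (a standard result; the displayed equation is an image in the transcript) is that the determinant equals the product of the diagonal elements of the triangularized matrix, with a sign change for each row interchange:

\[
\det[A] = (-1)^{p}\, a_{11}\, a'_{22}\, a''_{33} \cdots a^{(n-1)}_{nn},
\]

where the factors are the pivot elements after forward elimination and p is the number of row interchanges performed during partial pivoting.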

Example: Gauss Elimination. a) Forward Elimination

Example: Gauss Elimination (cont’d)

Example: Gauss Elimination (cont’d)

Example: Gauss Elimination (cont’d) b) Back Substitution

Gauss-Jordan Elimination. It is a variation of Gauss elimination. The major differences are: when an unknown is eliminated, it is eliminated from all other equations, not just the subsequent ones; and all rows are normalized by dividing them by their pivot elements. The elimination step therefore results in an identity matrix, so it is not necessary to employ back substitution to obtain the solution.
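A minimal Python sketch of these two differences (normalize each pivot row, then eliminate above and below the pivot); the function name is ours and the worked numbers on the following slides are not reproduced here:

import numpy as np

def gauss_jordan(A, b):
    """Gauss-Jordan elimination: reduce [A|b] so that A becomes the identity."""
    Ab = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = A.shape[0]

    for k in range(n):
        Ab[k] = Ab[k] / Ab[k, k]        # normalize the pivot row (pivot becomes 1)
        for i in range(n):
            if i != k:                  # eliminate column k from ALL other rows
                Ab[i] -= Ab[i, k] * Ab[k]

    return Ab[:, -1]                    # the last column now holds the solution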

Gauss-Jordan Elimination- Example

Dividing the 2nd row by 1.6667 and reducing the second column (operating above the diagonal as well as below) gives the matrix shown on the slide. Then divide the 3rd row by 15.000 and make the other elements in the 3rd column zero.

Divide the 4th row by 1.5599 and create zeros above the diagonal in the fourth column. Note: the Gauss-Jordan method requires almost 50% more operations than Gauss elimination; therefore it is not recommended for practical use.