Gauss Jordan E. T. S. I. Caminos, Canales y Puertos.

Learning Objectives for Lecture

1. Motivate the study of systems of equations, and particularly systems of linear equations.
2. Review the steps of Gaussian elimination.
3. Examine how roundoff error can enter and be magnified in Gaussian elimination.
4. Introduce pivoting and scaling as defenses against roundoff.
5. Consider what an engineer can do to generate well-formulated problems.

Systems of Equations

In Part 2 we tried to determine the value x satisfying f(x) = 0. In this part we try to obtain the values x1, x2, ..., xn satisfying a system of equations:

f1(x1, x2, ..., xn) = 0
f2(x1, x2, ..., xn) = 0
...
fn(x1, x2, ..., xn) = 0

Such systems can be linear or nonlinear, but in this part we deal with linear systems:

a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
...
an1x1 + an2x2 + ... + annxn = bn

Systems of Equations

where the aij and bi are constant coefficients and n is the number of equations. Many fundamental engineering equations are based on conservation laws. In mathematical terms, these principles lead to balance or continuity equations relating the system behavior to the amount of the quantity being modelled and to the external stimuli acting on the system.

Systems of Equations

Matrices are rectangular arrays of elements represented by a single symbol. A horizontal set of elements is called a row; a vertical set is called a column. A matrix with a single row is a row vector, and a matrix with a single column is a column vector. [Figure: example matrix with row 2 and column 3 highlighted, plus a sample row vector and column vector.]

Systems of Equations

There are some special types of matrices:
- Identity matrix
- Symmetric matrix
- Diagonal matrix
- Upper triangular matrix

Systems of Equations

- Lower triangular matrix
- Banded matrix: all elements are null except those in a band centered on the main diagonal; the half band width measures how far the band extends from the diagonal. A banded matrix with a band width of 3 is called tridiagonal.

Systems of Equations

Linear Algebraic Equations:

a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
a21x1 + a22x2 + a23x3 + ... + a2nxn = b2
...
an1x1 + an2x2 + an3x3 + ... + annxn = bn

where all the aij's and bi's are constants. In matrix form, this is an (n x n) coefficient matrix times an (n x 1) vector of unknowns equal to an (n x 1) right-hand-side vector, or simply [A]{x} = {b}.
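
As a quick illustration of the matrix form, here is a minimal NumPy sketch (not part of the original slides). The first and third rows echo the example quoted later in this lecture; the middle row is hypothetical, made up only to complete a 3x3 system:

import numpy as np

# First and third rows echo the lecture's example; middle row is hypothetical.
A = np.array([[50.0,  1.0,  2.0],
              [ 1.0, 40.0,  3.0],
              [ 2.0,  6.0, 30.0]])
b = np.array([1.0, 1.0, 3.0])

x = np.linalg.solve(A, b)   # library routine: pivoted Gauss elimination
print(x)
print(A @ x - b)            # residual check: should be ~0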

Systems of Equations

Matrix product: the product of an (m x n) matrix and an (n x p) matrix is an (m x p) matrix; the inner dimensions must agree. Writing a linear system as the matrix product [A]{x} = {b} is the matrix representation of the system. [Figure: worked matrix product with the resulting dimensions annotated.]

Systems of Equations

Graphic solution: the equations of a system are hyperplanes (straight lines, planes, etc.), and the solution of the system is the intersection of these hyperplanes. Compatible and determined system: the vectors are linearly independent, the determinant of A is non-null, and there is a unique solution.

Systems of Equations

Incompatible system: linearly dependent vectors, null determinant of A, no solution. Compatible but undetermined system: linearly dependent vectors, null determinant of A, an infinite number of solutions.

Systems of Equations

Compatible and determined system: linearly independent vectors, and a determinant of A that is non-null but close to zero. A solution exists but is difficult to find precisely. This is an ill-conditioned system, leading to numerical errors.

Gauss elimination

Naive Gauss elimination: the Gauss method has two phases, forward elimination and back substitution. In the first phase, the system is reduced to an upper triangular system. First, the unknown x1 is eliminated: the first row (the pivot equation) is multiplied by -a21/a11 and added to the second row, and the same is done with each successive row (n-1 times in all) until only the first equation contains x1.

Gauss elimination

This operation is repeated for each variable xi until an upper triangular matrix is obtained. Next, the system is solved by back substitution. The number of operations (FLOPs) used in the Gauss method is counted later in this lecture: roughly n^3/3 for forward elimination plus n^2/2 for back substitution.

Gauss elimination

1. Forward Elimination (Row Manipulation):

a. Form the augmented matrix [A|b].

b. By elementary row manipulations, reduce [A|b] to [U|b'], where U is an upper triangular matrix:

DO i = 1 to n-1
    DO k = i+1 to n
        Row(k) = Row(k) - (aki/aii)*Row(i)
    ENDDO
ENDDO

Gauss elimination

2. Back Substitution

Solve the upper triangular system [U]{x} = {b'}:

xn = b'n / unn
DO i = n-1 to 1 by (-1)
    SUM = 0
    DO j = i+1 to n
        SUM = SUM + u(i,j)*x(j)
    ENDDO
    x(i) = (b'(i) - SUM) / u(i,i)
ENDDO
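
A direct Python transcription of the two phases above, assuming NumPy; a sketch, not the lecture's own code:

import numpy as np

def naive_gauss(A, b):
    # Forward elimination: reduce [A|b] to upper triangular [U|b'].
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):            # pivot row k (no pivoting!)
        for i in range(k + 1, n):     # eliminate x_k from the rows below
            r = A[i, k] / A[k, k]     # fails outright if A[k, k] == 0
            A[i, k:] -= r * A[k, k:]
            b[i] -= r * b[k]
    # Back substitution on the upper triangular system.
    x = np.zeros(n)
    x[n - 1] = b[n - 1] / A[n - 1, n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x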

Gauss elimination (example)

Consider a 3x3 system of equations. (The coefficient display was a slide image; two of its rows, quoted later in this lecture, are 50x1 + 1x2 + 2x3 = 1 and 2x1 + 6x2 + 30x3 = 3.) To 2 significant figures, the exact solution is x1 = 0.016, x2 = 0.041, x3 = 0.091. We will use 2-decimal-digit arithmetic with rounding.

Gauss elimination (example)

Start with the augmented matrix:
- Multiply the first row by -1/50 and add it to the second row.
- Multiply the first row by -2/50 and add it to the third row.
- Multiply the second row by -6/40 and add it to the third row.

Gauss elimination (example)

Now backsolve (the computed values were shown on the slide):

x3 (vs. 0.091, et = 2.2%)
x2 (vs. 0.041, et = 2.5%)
x1 (vs. 0.016, et = 0%)

Gauss elimination (example)

Consider an alternative solution with the rows interchanged. After forward elimination, backsolving gives:

x3 = 0.095 (vs. 0.091, et = 4.4%)
x2 = 0.020 (vs. 0.041, et = 50%)
x1 = 0.000 (vs. 0.016, et = 100%)

Apparently, the order of the equations matters!

Gauss elimination (example)

WHAT HAPPENED? When we used 50x1 + 1x2 + 2x3 = 1 to solve for x1, there was little change in the other equations. When we used 2x1 + 6x2 + 30x3 = 3 to solve for x1, it made BIG changes in the other equations: some coefficients of the other equations were lost. The second equation has little to do with x1; it has mainly to do with x3. As a result, we obtained large numbers in the table, significant roundoff error occurred, and information was lost. Things didn't go well! If the scaling factors |aji / aii| are <= 1, then the effect of roundoff errors is diminished.
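
This mechanism can be reproduced in code by rounding after every arithmetic step. A minimal sketch, assuming NumPy; round_sig, gauss_rounded, and the stand-in systems A_good/A_bad are hypothetical illustrations in the spirit of the lecture's example, not its exact matrices:

import numpy as np

def round_sig(v, sig=2):
    # Round to `sig` significant digits, mimicking a 2-digit machine word.
    a = np.atleast_1d(np.asarray(v, dtype=float))
    out = np.zeros_like(a)
    nz = a != 0
    e = np.floor(np.log10(np.abs(a[nz])))
    out[nz] = np.round(a[nz] * 10.0 ** (sig - 1 - e)) * 10.0 ** (e - sig + 1)
    return out if out.size > 1 else float(out[0])

def gauss_rounded(A, b, sig=2):
    # Naive elimination, rounding after every operation.
    A, b = np.array(A, dtype=float), np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        for i in range(k + 1, n):
            r = round_sig(A[i, k] / A[k, k], sig)
            A[i, k:] = round_sig(A[i, k:] - round_sig(r * A[k, k:], sig), sig)
            b[i] = round_sig(b[i] - round_sig(r * b[k], sig), sig)
    x = np.zeros(n)
    x[n - 1] = round_sig(b[n - 1] / A[n - 1, n - 1], sig)
    for i in range(n - 2, -1, -1):
        s = round_sig(A[i, i + 1:] @ x[i + 1:], sig)
        x[i] = round_sig((b[i] - s) / A[i, i], sig)
    return x

# Hypothetical system: strong pivot first vs. weak pivot first.
A_good = [[50, 1, 2], [1, 40, 3], [2, 6, 30]]
A_bad  = [[2, 6, 30], [1, 40, 3], [50, 1, 2]]
print(gauss_rounded(A_good, [1, 1, 3]))
print(gauss_rounded(A_bad,  [3, 1, 1]))   # expected to be visibly worse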

Gauss elimination (example)

Effect of diagonal dominance: as a first approximation, the roots are xi ≈ bi / aii. Consider the previous examples:

Gauss elimination (example)

Goals:
1. Best accuracy (i.e. minimize error)
2. Parsimony (i.e. minimize effort)

Possible problems:
A. A zero diagonal term causes division by zero.
B. Many floating point operations (FLOPs) cause numerical precision problems and propagation of errors.
C. The system may be ill-conditioned: det[A] ≈ 0.
D. No solution or an infinite number of solutions: det[A] = 0.

Possible remedies:
A. Carry more significant figures (double precision).
B. Pivot when the diagonal is close to zero.
C. Scale to reduce round-off error.

Gauss elimination (pivoting)

A. Row pivoting (partial pivoting): in any good routine, at each step i, find max over k of |aki| for k = i, i+1, ..., n and move the corresponding row into the pivot position. This
(i) avoids a zero aii,
(ii) keeps numbers small and minimizes round-off,
(iii) uses an equation with large |aki| to find xi, maintaining diagonal dominance.
Row pivoting does not affect the order of the variables, and it is included in any good Gaussian elimination routine.
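
A sketch of the elimination above with row pivoting added, assuming NumPy (gauss_partial_pivot is a hypothetical name):

import numpy as np

def gauss_partial_pivot(A, b):
    A, b = np.array(A, dtype=float), np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        # Bring the row with the largest |a_ik| (i >= k) into the pivot slot.
        p = k + int(np.argmax(np.abs(A[k:, k])))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            r = A[i, k] / A[k, k]
            A[i, k:] -= r * A[k, k:]
            b[i] -= r * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x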

Gauss elimination (pivoting)

B. Column pivoting: reorder the remaining variables xj, j = i, ..., n, so as to obtain the largest |aji|. Column pivoting changes the order of the unknowns xi and thus adds complexity to the algorithm; it is not usually done.

C. Complete (full) pivoting: perform both row pivoting and column pivoting. (If [A] is symmetric, this is needed to preserve symmetry.)

Gauss elimination (pivoting)

How to fool pivoting: multiply the third equation by 100 and then perform pivoting. Forward elimination (with 2-digit arithmetic) and backsolution then yield:

x3 = 0.095 (vs. 0.091, et = 4.4%)
x2 = 0.020 (vs. 0.041, et = 50.0%)
x1 = 0.000 (vs. 0.016, et = 100%)

The order of the rows is still poor!

Gauss elimination (scaling)

A. Express all equations (and variables) in comparable units so that all elements of [A] are about the same size.

B. If that fails, and maxj |aij| varies widely across the rows, replace each row i by:

aij <- aij / maxj |aij|

This makes the largest coefficient |aij| of each equation equal to 1 and the largest element of [A] equal to 1 or -1.

NOTE: routines generally do not scale automatically, and scaling can cause round-off error too! Solutions (see the sketch below):
- Don't actually scale, but use hypothetical scaling factors to determine what pivoting is necessary.
- Scale only by powers of 2: no roundoff or division is required.
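
Following the "don't actually scale" advice, here is a hedged sketch of scaled partial pivoting: the per-row scale factors s_i only choose the pivot, and [A] itself is never rescaled (NumPy assumed; gauss_scaled_pivot is a hypothetical name):

import numpy as np

def gauss_scaled_pivot(A, b):
    A, b = np.array(A, dtype=float), np.array(b, dtype=float)
    n = len(b)
    s = np.abs(A).max(axis=1)        # scale factor of each row
    for k in range(n - 1):
        # Choose the pivot by the scaled size |a_ik| / s_i; never rescale A.
        p = k + int(np.argmax(np.abs(A[k:, k]) / s[k:]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
            s[[k, p]] = s[[p, k]]
        for i in range(k + 1, n):
            r = A[i, k] / A[k, k]
            A[i, k:] -= r * A[k, k:]
            b[i] -= r * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x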

Gauss elimination (scaling)

How to fool scaling: a poor choice of units can undermine the value of scaling. Begin with our original example. If the units of x1 were expressed in µg instead of mg, the coefficients of x1 would change by a factor of 1000 and the matrix would look quite different. Scaling then normalizes the wrong coefficients, and the pivoting choice is again poor. Which equation is used to determine x1? Why bother to scale?

Gauss elimination (operation counting)

In numerical scientific calculations, the number of multiplies and divides often determines CPU time; this represents the numerical effort. One floating point multiply or divide (plus any associated adds or subtracts) is called a FLOP: a FLoating point OPeration. (The adds/subtracts use little time compared to the multiplies/divides.) Examples, each counting as one FLOP:

a * x + b
a / x - b

Gauss elimination (operation counting)

Useful identities in counting FLOPs (the summation formulas on the slide are the standard ones):

sum over i = 1..n of 1 = n
sum over i = 1..n of i = n(n+1)/2 = n^2/2 + O(n)
sum over i = 1..n of i^2 = n(n+1)(2n+1)/6 = n^3/3 + O(n^2)

O(n^m) means that there are terms of order n^m and lower.

Gauss elimination (operation counting)

Simple example of operation counting:

DO i = 1 to n
    Y(i) = X(i)/i - 1
ENDDO

X(i) and Y(i) are arrays whose values change as i changes. In each iteration, X(i)/i - 1 represents one FLOP because it requires one division (and one subtraction). The DO loop runs i from 1 to n, so the total is n FLOPs.

Gauss elimination (operation counting)

Another example of operation counting:

DO i = 1 to n
    Y(i) = X(i)*X(i) + 1
    DO j = i to n
        Z(j) = [ Y(j) / X(i) ] * Y(j) + X(i)
    ENDDO
ENDDO

With nested loops, always start from the innermost loop. [Y(j)/X(i)] * Y(j) + X(i) represents 2 FLOPs; the inner loop runs n - i + 1 times, contributing 2(n - i + 1) FLOPs per outer iteration.

Gauss elimination (operation counting)

For the outer i-loop, X(i)*X(i) + 1 represents 1 FLOP, so the total is

sum over i = 1..n of [1 + 2(n - i + 1)] = 3n + 2n^2 - n^2 - n = n^2 + 2n = n^2 + O(n)
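
A small empirical check of this count (plain Python; count_flops is a hypothetical helper that tallies multiplies/divides for the nested loop above):

def count_flops(n):
    flops = 0
    for i in range(1, n + 1):
        flops += 1                 # Y(i) = X(i)*X(i) + 1
        for j in range(i, n + 1):
            flops += 2             # [Y(j)/X(i)]*Y(j) + X(i)
    return flops

for n in (5, 10, 100):
    assert count_flops(n) == n**2 + 2*n   # matches the closed form
print("count matches n^2 + 2n")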

Gauss elimination (operation counting)

Forward elimination:

DO k = 1 to n-1
    DO i = k+1 to n
        r = A(i,k)/A(k,k)
        DO j = k+1 to n
            A(i,j) = A(i,j) - r*A(k,j)
        ENDDO
        B(i) = B(i) - r*B(k)
    ENDDO
ENDDO

Gauss elimination (operation counting)

Operation counting for Gaussian elimination. Back substitution:

X(n) = B(n)/A(n,n)
DO i = n-1 to 1 by -1
    SUM = 0
    DO j = i+1 to n
        SUM = SUM + A(i,j)*X(j)
    ENDDO
    X(i) = [B(i) - SUM]/A(i,i)
ENDDO

Gauss elimination (operation counting)

Operation counting for Gaussian elimination. Forward elimination:

Inner loop (over j): n - k FLOPs, plus 1 FLOP for r = A(i,k)/A(k,k) and 1 FLOP for the update of B(i), giving (n - k) + 2 FLOPs per pass.

Second loop (over i): (n - k) passes, for a total of (n - k)[(n - k) + 2] = (n^2 + 2n) - 2(n + 1)k + k^2

Gauss elimination (operation counting)

Operation counting for Gaussian elimination. Forward elimination (cont'd):

Outer loop (over k) = sum over k = 1..n-1 of [(n^2 + 2n) - 2(n + 1)k + k^2] = n^3/3 + O(n^2)

Gauss elimination (operation counting)

Operation counting for Gaussian elimination. Back substitution:

Inner loop (over j): n - i FLOPs for the products A(i,j)*X(j).

Outer loop (over i): 1 FLOP for X(n) plus sum over i = 1..n-1 of [(n - i) + 1] = n(n+1)/2 = n^2/2 + O(n)

Gauss elimination (operation counting)

Total FLOPs = forward elimination + back substitution = n^3/3 + O(n^2) + n^2/2 + O(n) ≈ n^3/3 + O(n^2)

Converting (A, b) to (U, b') requires n^3/3 flops, plus terms of order n^2 and smaller. Back solving requires 1 + 2 + 3 + ... + n = n(n+1)/2 flops. Grand total: the entire effort requires n^3/3 + O(n^2) flops altogether.

Gauss-Jordan Elimination

Diagonalization by both forward and backward elimination in each column: perform elimination both below and above each pivot until the coefficient matrix becomes the identity, leaving the solution in the right-hand-side column. The operation count for Gauss-Jordan is n^3/2 + O(n^2), slower than Gauss elimination (n^3/3 + O(n^2)).
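
A minimal Python sketch of the diagonalization, assuming NumPy (gauss_jordan is a hypothetical name; no pivoting, so a zero pivot will fail):

import numpy as np

def gauss_jordan(A, b):
    # Augment, then eliminate both above and below each pivot.
    Ab = np.hstack([np.array(A, dtype=float),
                    np.array(b, dtype=float).reshape(-1, 1)])
    n = Ab.shape[0]
    for k in range(n):
        Ab[k] = Ab[k] / Ab[k, k]                 # normalize the pivot row
        for i in range(n):
            if i != k:
                Ab[i] = Ab[i] - Ab[i, k] * Ab[k] # zero out column k elsewhere
    return Ab[:, -1]                             # [I | x]: last column is x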

Gauss-Jordan Elimination

Example (two-digit arithmetic):

x1 = 0.015 (vs. 0.016, et = 6.3%)
x2 = 0.041 (vs. 0.041, et = 0%)
x3 = 0.093 (vs. 0.091, et = 2.2%)

Gauss-Jordan Matrix Inversion

The solution of [A]{x} = {b} is {x} = [A]^-1{b}, where [A]^-1 is the inverse matrix of [A]. Consider [A][A]^-1 = [I]:

1) Create the augmented matrix [A | I].
2) Apply Gauss-Jordan elimination, producing [I | A^-1].
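
A sketch of the [A | I] to [I | A^-1] reduction in Python (NumPy assumed; gj_inverse is a hypothetical name, again without pivoting):

import numpy as np

def gj_inverse(A):
    # Reduce [A | I] to [I | A^-1] by Gauss-Jordan elimination.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    AI = np.hstack([A, np.eye(n)])
    for k in range(n):
        AI[k] = AI[k] / AI[k, k]
        for i in range(n):
            if i != k:
                AI[i] = AI[i] - AI[i, k] * AI[k]
    return AI[:, n:]                 # right-hand block is A^-1

# Check, as on the slide after next: A @ A^-1 should be the identity.
A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(np.allclose(A @ gj_inverse(A), np.eye(2)))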

Gauss-Jordan Matrix Inversion

Gauss-Jordan matrix inversion (with 2-digit arithmetic). [Figure: worked reduction of [A | I], ending with the matrix inverse [A^-1].]

Gauss-Jordan Matrix Inversion

CHECK: [A][A]^-1 = [I], and [A]^-1{b} = {x} reproduces the Gaussian elimination solution.

LU decomposition

The LU decomposition is a method that uses elimination techniques to transform the matrix A into a product of triangular matrices. It is especially useful for solving systems with different right-hand-side vectors b, because the same decomposition of A can be reused to evaluate all cases efficiently by forward and backward substitution.

LU decomposition

Initial system: [A]{x} = {b}
Decomposition: [A] = [L][U]
Transformed system 1: [L]{d} = {b}, solved for {d} by forward substitution.
Transformed system 2: [U]{x} = {d}, solved for {x} by backward substitution.

LU decomposition

LU decomposition is closely related to the Gauss method, because the upper triangular matrix U is exactly what Gauss elimination produces; only the lower triangular matrix L remains to be found. Surprisingly, the Gauss elimination procedure already produces this matrix L, though one is usually not aware of the fact: the factors used to get zeroes below the main diagonal are precisely the elements of L.
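
A sketch showing exactly this, assuming NumPy: a Doolittle-style factorization obtained by storing the Gauss elimination factors (lu_doolittle is a hypothetical name):

import numpy as np

def lu_doolittle(A):
    # Run Gauss elimination; the elimination factors ARE the entries of L.
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # the factor used to zero U[i, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

A = np.array([[50.0, 1.0, 2.0], [1.0, 40.0, 3.0], [2.0, 6.0, 30.0]])
L, U = lu_doolittle(A)
print(np.allclose(L @ U, A))   # True: the factors reassemble A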

LU decomposition (Complexity)

Basic approach: consider [A]{x} = {b}.

a) Gauss-type "decomposition" of [A] into [L][U]: n^3/3 flops. [A]{x} = {b} becomes [L][U]{x} = {b}; let [U]{x} = {d}.
b) First solve [L]{d} = {b} for {d} by forward substitution: n^2/2 flops.
c) Then solve [U]{x} = {d} for {x} by back substitution: n^2/2 flops.
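
A sketch of the reuse pattern with SciPy's LU routines: lu_factor pays the n^3/3 cost once, and lu_solve handles each new right-hand side at n^2 cost (the matrix here is the same hypothetical example as above):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[50.0, 1.0, 2.0], [1.0, 40.0, 3.0], [2.0, 6.0, 30.0]])
lu, piv = lu_factor(A)             # step (a): ~n^3/3 flops, paid once
for b in (np.array([1.0, 1.0, 3.0]), np.array([0.0, 1.0, 0.0])):
    x = lu_solve((lu, piv), b)     # steps (b)+(c): ~n^2 flops per b
    print(x, np.allclose(A @ x, b))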

LU Decomposition: notation

[A] = [L] + [U0]
[A] = [L0] + [U]
[A] = [L0] + [U0] + [D]
[A] = [L1][U]
[A] = [L][U1]

Here a subscript 0 marks a triangular matrix with a null diagonal, a subscript 1 marks a triangular matrix with a unit diagonal, and [D] is the diagonal part of [A] (consistent with the variations on the next slide).

LU decomposition

LU Decomposition Variations:

Doolittle: [L1][U], general [A]
Crout: [L][U1], general [A]
Cholesky: [L][L]^T, positive definite symmetric [A]

Cholesky works only for positive definite symmetric matrices.

Doolittle versus Crout:
- Doolittle just stores the Gaussian elimination factors, whereas Crout uses a different series of calculations (see C&C 10.1.4).
- Both decompose [A] into [L] and [U] in n^3/3 FLOPs.
- They differ in the location of the diagonal of 1's.
- Crout uses each element of [A] only once, so the same array can be used for [A] and [L\U], saving computer memory!

LU decomposition: Matrix Inversion

Definition of a matrix inverse: [A][A]^-1 = [I], so that [A]{x} = {b} gives {x} = [A]^-1{b}.

First rule of matrix inversion: don't do it. (It is a numerically unstable calculation.)

LU decomposition: Matrix Inversion

If you really must:
1) Gaussian elimination: [A | I] -> [U | B'] ==> A^-1
2) Gauss-Jordan: [A | I] ==> [I | A^-1]

Inversion will take n^3 + O(n^2) flops if one is careful about where the zeros are (taking advantage of the sparseness of the matrix). Naive applications (without optimization) take 4n^3/3 + O(n^2) flops. For example, LU decomposition requires n^3/3 + O(n^2) flops, and back solving twice with the n unit vectors ei costs 2n(n^2/2) = n^3 flops. Altogether: n^3/3 + n^3 = 4n^3/3 + O(n^2) flops.
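
A sketch of the LU-based route just described, assuming SciPy (inverse_via_lu is a hypothetical name): factor once, then solve against each unit vector e_i to build A^-1 column by column, i.e. the naive 4n^3/3 accounting above:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_via_lu(A):
    # Factor once (~n^3/3 flops), then back-solve per unit vector (~n^2 each).
    A = np.array(A, dtype=float)
    n = A.shape[0]
    lu, piv = lu_factor(A)
    cols = [lu_solve((lu, piv), e) for e in np.eye(n)]
    return np.column_stack(cols)

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(np.allclose(A @ inverse_via_lu(A), np.eye(2)))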