Part 3 Chapter 9 Gauss Elimination PowerPoints organized by Dr. Michael R. Gustafson II, Duke University All images copyright © The McGraw-Hill Companies, Inc. Permission required for reproduction or display.

Chapter Objectives
– Knowing how to solve small sets of linear equations with the graphical method and Cramer’s rule.
– Understanding how to implement forward elimination and back substitution as in Gauss elimination.
– Understanding how to count flops to evaluate the efficiency of an algorithm.
– Understanding the concepts of singularity and ill-conditioning.
– Understanding how partial pivoting is implemented and how it differs from complete pivoting.
– Recognizing how the banded structure of a tridiagonal system can be exploited to obtain extremely efficient solutions.

Graphical Method
For small sets of simultaneous equations, graphing them and determining the location of their point of intersection provides a solution.

Graphical Method (cont)
Graphing the equations can also show systems where:
a) No solution exists
b) Infinite solutions exist
c) The system is ill-conditioned

Overview
A matrix consists of a rectangular array of elements represented by a single symbol (example: [A]). An individual entry of a matrix is an element (example: a23).

Representing Linear Equations
Matrices provide a concise notation for representing and solving simultaneous linear equations. A system of n linear equations in n unknowns can be written in matrix form as [A]{x} = {b}, where [A] is the matrix of coefficients, {x} is the column vector of unknowns, and {b} is the column vector of constants.
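As a quick sketch (the pair of equations here is invented purely for illustration), a small system can be entered in MATLAB as a coefficient matrix and a right-hand-side vector:

```matlab
% The system   3x1 + 2x2 = 18
%             -1x1 + 2x2 =  2
% written in the form [A]{x} = {b}:
A = [3 2; -1 2];
b = [18; 2];
x = A\b     % solves the system; x = [4; 3]
```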

Matrix Multiplication
The elements in the matrix [C] that results from multiplying matrices [A] and [B] are calculated using:
cij = ai1*b1j + ai2*b2j + … + ain*bnj
where n is the number of columns of [A], which must equal the number of rows of [B].
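A sketch of this summation written out as explicit loops in MATLAB (the matrices are arbitrary test data); in practice the built-in * operator performs the same computation far more efficiently:

```matlab
A = [1 2; 3 4];
B = [5 6; 7 8];
[m,p] = size(A);
[~,n] = size(B);
C = zeros(m,n);
for i = 1:m
    for j = 1:n
        for k = 1:p
            % c_ij accumulates a_ik * b_kj over k
            C(i,j) = C(i,j) + A(i,k)*B(k,j);
        end
    end
end
% C matches A*B = [19 22; 43 50]
```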

Special Matrices Matrices where m = n are called square matrices. Special forms of square matrices:

Matrix Inverse and Transpose
The inverse of a square, nonsingular matrix [A] is that matrix which, when multiplied by [A], yields the identity matrix: [A][A]^-1 = [A]^-1[A] = [I]. The transpose of a matrix involves transforming its rows into columns and its columns into rows: the (i,j) element of [A]^T is aji.

Determinants
The determinant D = |A| of a matrix is formed from the coefficients of [A]. For small matrices the determinants are:
– 1 x 1: D = a11
– 2 x 2: D = a11*a22 - a12*a21
– 3 x 3: D = a11*(a22*a33 - a23*a32) - a12*(a21*a33 - a23*a31) + a13*(a21*a32 - a22*a31)
Computing determinants for matrices larger than 3 x 3 by hand can be very tedious.

Cramer’s Rule
Cramer’s Rule states that each unknown in a system of linear algebraic equations may be expressed as a fraction of two determinants, with denominator D and with the numerator obtained from D by replacing the column of coefficients of the unknown in question by the constants b1, b2, …, bn.

Cramer’s Rule Example
Find x2 in the following system of equations:
– Find the determinant D
– Find determinant D2 by replacing D’s second column with b
– Divide: x2 = D2/D
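The same steps can be sketched with MATLAB's det function. The slide's original system was shown as an image, so the system below is an illustrative substitute chosen so that x2 works out to -29.5:

```matlab
% Illustrative system (not necessarily the slide's):
%   0.3x1 + 0.52x2 +    x3 = -0.01
%   0.5x1 +    x2  + 1.9x3 =  0.67
%   0.1x1 + 0.3x2  + 0.5x3 = -0.44
A = [0.3 0.52 1; 0.5 1 1.9; 0.1 0.3 0.5];
b = [-0.01; 0.67; -0.44];
D  = det(A);           % denominator determinant
A2 = A; A2(:,2) = b;   % replace the second column with b
x2 = det(A2)/D         % x2 = D2/D = -29.5
```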

Solving With MATLAB
MATLAB provides two direct ways to solve systems of linear algebraic equations [A]{x}={b}:
– Left division: x = A\b
– Matrix inversion: x = inv(A)*b
The matrix inverse is less efficient than left division and also only works for square, nonsingular systems.
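For instance (the system here is invented for illustration), both approaches give the same answer to within roundoff, though left division is preferred:

```matlab
A = [10 2 -1; -3 -6 2; 1 1 5];
b = [27; -61.5; -21.5];
x1 = A\b;         % left division (preferred); x1 is approximately [0.5; 8; -6]
x2 = inv(A)*b;    % matrix inversion (less efficient)
disp(max(abs(x1 - x2)))   % agreement to within roundoff
```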

Naïve Gauss Elimination For larger systems, Cramer’s Rule can become unwieldy. Instead, a sequential process of removing unknowns from equations using forward elimination followed by back substitution may be used - this is Gauss elimination. “Naïve” Gauss elimination simply means the process does not check for potential problems resulting from division by zero.

Naïve Gauss Elimination (cont) Forward elimination –Starting with the first row, add or subtract multiples of that row to eliminate the first coefficient from the second row and beyond. –Continue this process with the second row to remove the second coefficient from the third row and beyond. –Stop when an upper triangular matrix remains. Back substitution –Starting with the last row, solve for the unknown, then substitute that value into the next highest row. –Because of the upper-triangular nature of the matrix, each row will contain only one more unknown.

Naïve Gauss Elimination Program

function x = GaussNaive(A,b)
% GaussNaive: naive Gauss elimination
%   x = GaussNaive(A,b): Gauss elimination without pivoting.
% input:
%   A = coefficient matrix
%   b = right hand side vector
% output:
%   x = solution vector
[m,n] = size(A);
if m~=n, error('Matrix A must be square'); end
nb = n+1;
Aug = [A b];
% forward elimination
for k = 1:n-1
  for i = k+1:n
    factor = Aug(i,k)/Aug(k,k);
    Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb);
  end
end
% back substitution
x = zeros(n,1);
x(n) = Aug(n,nb)/Aug(n,n);
for i = n-1:-1:1
  x(i) = (Aug(i,nb) - Aug(i,i+1:n)*x(i+1:n))/Aug(i,i);
end
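A usage sketch, assuming the GaussNaive function above is saved as GaussNaive.m on the MATLAB path; the test system is chosen so the exact solution is known in advance:

```matlab
A = [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];
b = [7.85; -19.3; 71.4];
x = GaussNaive(A,b)   % x is approximately [3; -2.5; 7]
```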


Gauss Program Efficiency
The execution time of Gauss elimination depends on the number of floating-point operations (flops) it performs. The flop count for an n x n system is 2n^3/3 + O(n^2). Conclusions:
– As the system gets larger, the computation time increases greatly (roughly as n^3).
– Most of the effort is incurred in the elimination step.

QUESTION: I think there is a mistake in the book on page 240. It says for k=1, i=2, the +/- flops are (n-1)*(n). The program says:

for i = k+1:n
  factor = Aug(i,k)/Aug(k,k);
  Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb);
end

How many subtractions are there for k=1? Since nb = n+1, if k=1 then i starts at 2, and assuming n=4, nb=5, so we have:

Aug(2,1) = Aug(2,1) - factor*Aug(1,1)
Aug(2,2) = Aug(2,2) - factor*Aug(1,2)
Aug(2,3) = Aug(2,3) - factor*Aug(1,3)
Aug(2,4) = Aug(2,4) - factor*Aug(1,4)
Aug(2,5) = Aug(2,5) - factor*Aug(1,5)

The number of subtractions is clearly 5, or (n+1), and the interior loop runs from 2:n, so the number of subtraction flops for k=1 is (n+1)*(n-1); the book says (n-1)*n in the table on page 240.

ANSWER: Yes, you are right; the formulas in the table on page 240 do not correspond precisely to the program in Figure 9.4. All second factors in the add/sub column should have 1 added to them, and therefore so should all corresponding entries in the mult/div column, so the (n+1) should be (n+2) for k=1. This certainly modifies equations 9.16 onward. How can a book go wrong? Well, perhaps the book used another program, slightly more difficult to understand but doing one less operation per row, when it calculates:

Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb)

The first term, Aug(i,k), will become zero anyway, because factor = Aug(i,k)/Aug(k,k), and substituting:

Aug(i,k) - factor*Aug(k,k) = Aug(i,k) - (Aug(i,k)/Aug(k,k))*Aug(k,k) = 0

Thus we do not need to start the calculations at column k but at k+1; therefore, if you change in the program the line

Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb)

to

Aug(i,k+1:nb) = Aug(i,k+1:nb) - factor*Aug(k,k+1:nb)

you will have one less operation per row, and all formulas in the book are then correct, but only with the above change to the program. The results (solutions) will not change, since we are simply not calculating the zero elements that the forward elimination cancels anyway.

Pivoting
Problems arise with naïve Gauss elimination if a coefficient along the diagonal is 0 (problem: division by zero) or close to 0 (problem: round-off error). One way to combat these issues is to determine the coefficient with the largest absolute value in the column below the pivot element. The rows can then be switched so that the largest element becomes the pivot element; this is called partial pivoting. If the columns to the right of the pivot element are also searched for the largest element, and both rows and columns are switched, this is called complete pivoting.
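A minimal sketch of the problem (the system is invented for illustration): the first pivot is zero, so naive elimination would divide by zero, while a single row swap fixes it:

```matlab
A = [0 2; 1 1];
b = [4; 3];
% Naive elimination: factor = A(2,1)/A(1,1) = 1/0 -> Inf,
% and the computed solution comes out as NaN.
% Partial pivoting: swap rows so the largest pivot is on top.
Ap = A([2 1],:);  bp = b([2 1]);
factor  = Ap(2,1)/Ap(1,1);            % = 0, no division by zero
Ap(2,:) = Ap(2,:) - factor*Ap(1,:);
bp(2)   = bp(2)   - factor*bp(1);
x2 = bp(2)/Ap(2,2);                   % = 2
x1 = (bp(1) - Ap(1,2)*x2)/Ap(1,1);    % = 1
```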

Partial Pivoting Program

function x = GaussPivot(A,b)
% GaussPivot: Gauss elimination with partial pivoting
%   x = GaussPivot(A,b): Gauss elimination with partial pivoting.
% input:
%   A = coefficient matrix
%   b = right hand side vector
% output:
%   x = solution vector
[m,n] = size(A);
if m~=n, error('Matrix A must be square'); end
nb = n+1;
Aug = [A b];
% forward elimination
for k = 1:n-1
  % partial pivoting
  [big,i] = max(abs(Aug(k:n,k)));
  ipr = i+k-1;
  if ipr~=k
    Aug([k,ipr],:) = Aug([ipr,k],:);
  end
  for i = k+1:n
    factor = Aug(i,k)/Aug(k,k);
    Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb);
  end
end
% back substitution
x = zeros(n,1);
x(n) = Aug(n,nb)/Aug(n,n);
for i = n-1:-1:1
  x(i) = (Aug(i,nb) - Aug(i,i+1:n)*x(i+1:n))/Aug(i,i);
end
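A usage sketch, assuming the GaussPivot function above is saved as GaussPivot.m on the path; the system (invented for illustration) has a zero in the (1,1) position, which would defeat GaussNaive:

```matlab
A = [0 2 3; 4 6 7; 2 -3 6];
b = [8; -3; 5];
x = GaussPivot(A,b);
disp(norm(A*x - b))   % residual should be near zero
```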

Tridiagonal Systems
A tridiagonal system is a banded system with a bandwidth of 3: its only nonzero elements lie on the subdiagonal, the main diagonal, and the superdiagonal. Tridiagonal systems can be solved using the same method as Gauss elimination, but with much less effort because most of the matrix elements are already zero.

Tridiagonal System Solver

function x = Tridiag(e,f,g,r)
% Tridiag: tridiagonal equation solver for a banded system
%   x = Tridiag(e,f,g,r): Tridiagonal system solver.
% input:
%   e = subdiagonal vector
%   f = diagonal vector
%   g = superdiagonal vector
%   r = right hand side vector
% output:
%   x = solution vector
n = length(f);
% forward elimination
for k = 2:n
  factor = e(k)/f(k-1);
  f(k) = f(k) - factor*g(k-1);
  r(k) = r(k) - factor*r(k-1);
end
% back substitution
x(n) = r(n)/f(n);
for k = n-1:-1:1
  x(k) = (r(k) - g(k)*x(k+1))/f(k);
end
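A usage sketch, assuming the Tridiag function above is saved as Tridiag.m; the banded system below (invented for illustration) has all ones as its exact solution, and e(1) and g(n) are never referenced:

```matlab
e = [ 0 -1 -1 -1];    % subdiagonal (e(1) unused)
f = [ 2  2  2  2];    % main diagonal
g = [-1 -1 -1  0];    % superdiagonal (g(4) unused)
r = [ 1  0  0  1];    % right hand side
x = Tridiag(e,f,g,r)  % x = [1 1 1 1]
```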
