Part 3, Chapter 9
Gauss Elimination
PowerPoints organized by Dr. Michael R. Gustafson II, Duke University
All images copyright © The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
Chapter Objectives
– Knowing how to solve small sets of linear equations with the graphical method and Cramer’s rule.
– Understanding how to implement forward elimination and back substitution as in Gauss elimination.
– Understanding how to count flops to evaluate the efficiency of an algorithm.
– Understanding the concepts of singularity and ill-conditioning.
– Understanding how partial pivoting is implemented and how it differs from complete pivoting.
– Recognizing how the banded structure of a tridiagonal system can be exploited to obtain extremely efficient solutions.
Graphical Method
For small sets of simultaneous equations, graphing them and locating their point of intersection provides a solution.
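A minimal MATLAB sketch of the idea, using an illustrative 2 x 2 system (the numbers are made up, not necessarily the slide's example):

% Plot each equation as a line in the (x1,x2) plane and read off the crossing
% Illustrative system: 3*x1 + 2*x2 = 18  and  -x1 + 2*x2 = 2
x1 = linspace(0, 8);
x2a = (18 - 3*x1)/2;     % first equation solved for x2
x2b = (2 + x1)/2;        % second equation solved for x2
plot(x1, x2a, x1, x2b), grid on
xlabel('x_1'), ylabel('x_2')
% The lines intersect at x1 = 4, x2 = 3, which is the solution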
Graphical Method (cont)
Graphing the equations can also reveal systems where:
a) no solution exists
b) infinitely many solutions exist
c) the system is ill-conditioned
Overview
A matrix consists of a rectangular array of elements represented by a single symbol (example: [A]). An individual entry of a matrix is an element (example: a_23).
Representing Linear Equations
Matrices provide a concise notation for representing and solving simultaneous linear equations: [A]{x} = {b}, where [A] holds the coefficients, {x} the unknowns, and {b} the right-hand-side constants.
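For example, the illustrative system from the graphical-method slide written in that notation in MATLAB:

% 3*x1 + 2*x2 = 18  and  -x1 + 2*x2 = 2  as [A]{x} = {b}
A = [3 2; -1 2];   % coefficient matrix [A]
b = [18; 2];       % right-hand-side vector {b}
% the unknown vector {x} = [x1; x2] is what we solve for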
Matrix Multiplication
The elements in the matrix [C] that results from multiplying matrices [A] and [B] are calculated as c_ij = sum over k of a_ik*b_kj (k = 1 to n), which requires the number of columns n of [A] to equal the number of rows of [B].
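As a sketch, the defining triple loop in MATLAB (equivalent to the built-in product A*B):

[m,n]  = size(A);              % [A] is m x n
[n2,p] = size(B);              % [B] must be n x p
if n ~= n2, error('inner dimensions must agree'); end
C = zeros(m,p);
for i = 1:m
    for j = 1:p
        for k = 1:n
            C(i,j) = C(i,j) + A(i,k)*B(k,j);   % c_ij = sum of a_ik*b_kj
        end
    end
end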
Special Matrices
Matrices where m = n are called square matrices. Special forms of square matrices include symmetric, diagonal, identity, upper triangular, lower triangular, and banded matrices.
Matrix Inverse and Transpose
The inverse of a square, nonsingular matrix [A] is that matrix which, when multiplied by [A], yields the identity matrix: [A][A]^-1 = [A]^-1[A] = [I]
The transpose of a matrix involves transforming its rows into columns and its columns into rows: (a_ij)^T = a_ji
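In MATLAB both operations are built in; a quick numerical check:

Ainv = inv(A);      % inverse of a square, nonsingular A
A*Ainv              % should equal eye(size(A)) up to round-off
At = A';            % transpose (rows become columns)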
Determinants
The determinant D = |A| of a matrix is formed from the coefficients of [A]. Determinants for small matrices are:
– 1 x 1: D = a_11
– 2 x 2: D = a_11*a_22 - a_12*a_21
– 3 x 3: expand by cofactors along a row, e.g. D = a_11*(a_22*a_33 - a_23*a_32) - a_12*(a_21*a_33 - a_23*a_31) + a_13*(a_21*a_32 - a_22*a_31)
Determinants for matrices larger than 3 x 3 are very tedious to evaluate by hand.
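MATLAB's det function evaluates determinants of any size; for a 2 x 2 matrix it matches the hand formula (the matrix here is illustrative):

A = [3 2; -1 2];
D_hand   = A(1,1)*A(2,2) - A(1,2)*A(2,1)   % 3*2 - 2*(-1) = 8
D_matlab = det(A)                          % also 8, up to round-off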
Cramer’s Rule
Cramer’s Rule states that each unknown in a system of linear algebraic equations may be expressed as the ratio of two determinants, with denominator D and with the numerator obtained from D by replacing the column of coefficients of the unknown in question by the constants b_1, b_2, …, b_n.
Cramer’s Rule Example
To find x_2 in a system of equations:
– Find the determinant D
– Find determinant D_2 by replacing D’s second column with b
– Divide: x_2 = D_2/D
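Since the slide's original system was an image, here is a sketch of the same three steps in MATLAB on the illustrative 2 x 2 system used earlier:

A = [3 2; -1 2];  b = [18; 2];   % illustrative system
D  = det(A);                     % step 1: denominator determinant
A2 = A;  A2(:,2) = b;            % step 2: replace 2nd column with b
x2 = det(A2)/D                   % step 3: divide; gives x2 = 24/8 = 3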
Solving With MATLAB
MATLAB provides two direct ways to solve systems of linear algebraic equations [A]{x} = {b}:
– Left-division: x = A\b
– Matrix inversion: x = inv(A)*b
The matrix inverse is less efficient than left-division and also works only for square, nonsingular systems.
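Illustrative usage on the same made-up system:

A = [3 2; -1 2];  b = [18; 2];
x = A\b          % preferred: left-division
x = inv(A)*b     % same answer here, but slower and limited to square, nonsingular A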
Naïve Gauss Elimination
For larger systems, Cramer’s Rule can become unwieldy. Instead, a sequential process of removing unknowns from equations using forward elimination followed by back substitution may be used; this is Gauss elimination.
“Naïve” Gauss elimination simply means the process does not check for potential problems resulting from division by zero.
Naïve Gauss Elimination (cont)
Forward elimination:
– Starting with the first row, add or subtract multiples of that row to eliminate the first coefficient from the second row and beyond.
– Continue this process with the second row to remove the second coefficient from the third row and beyond.
– Stop when an upper triangular matrix remains.
Back substitution:
– Starting with the last row, solve for the unknown, then substitute that value into the next highest row.
– Because of the upper-triangular nature of the matrix, each row will contain only one more unknown.
Naïve Gauss Elimination Program

function x = GaussNaive(A,b)
% GaussNaive: naive Gauss elimination
%   x = GaussNaive(A,b): Gauss elimination without pivoting.
% input:
%   A = coefficient matrix
%   b = right hand side vector
% output:
%   x = solution vector
[m,n] = size(A);
if m~=n, error('Matrix A must be square'); end
nb = n+1;
Aug = [A b];
% forward elimination
for k = 1:n-1
    for i = k+1:n
        factor = Aug(i,k)/Aug(k,k);
        Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb);
    end
end
% back substitution
x = zeros(n,1);
x(n) = Aug(n,nb)/Aug(n,n);
for i = n-1:-1:1
    x(i) = (Aug(i,nb) - Aug(i,i+1:n)*x(i+1:n))/Aug(i,i);
end
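A hypothetical test of the function against MATLAB's left-division (the matrix is made up):

A = [1 2 3; 4 5 6; 7 8 10];   % nonsingular test matrix
b = [6; 15; 25];
x = GaussNaive(A,b)           % returns [1; 1; 1]
A\b                           % should agree to round-off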
Gauss Program Efficiency
The execution time of Gauss elimination depends on the number of floating-point operations (flops) involved. For an n x n system, the total count is 2n^3/3 + O(n^2) flops: forward elimination requires 2n^3/3 + O(n^2), while back substitution needs only n^2 + O(n).
Conclusions:
– As the system gets larger, the computation time increases greatly (roughly eightfold for each doubling of n).
– Most of the effort is incurred in the elimination step.
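A rough timing sketch that makes the cubic growth visible (the matrix sizes and construction are arbitrary choices, and GaussNaive from the earlier slide is assumed to be on the path):

for n = [100 200 400 800]
    A = rand(n) + n*eye(n);   % random but safely nonsingular test matrix
    b = rand(n,1);
    tic, GaussNaive(A,b); t = toc;
    fprintf('n = %4d   time = %.4f s\n', n, t)
end
% for large n, doubling n should multiply the time by roughly 2^3 = 8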
QUESTION: I think there is a mistake in the book on page 240. It says for k = 1, i = 2, and +/- flops = (n-1)*(n). The program says:

for i = k+1:n
    factor = Aug(i,k)/Aug(k,k);
    Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb);
end

How many subtractions are there for k = 1? Since nb = n+1, if k = 1 then i starts at 2, and assuming n = 4 we have nb = 5, so the vectorized statement Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb) expands to:

Aug(2,1) = Aug(2,1) - factor*Aug(1,1)
Aug(2,2) = Aug(2,2) - factor*Aug(1,2)
Aug(2,3) = Aug(2,3) - factor*Aug(1,3)
Aug(2,4) = Aug(2,4) - factor*Aug(1,4)
Aug(2,5) = Aug(2,5) - factor*Aug(1,5)

The number of subtractions per row is clearly 5, or (n+1), and the interior loop runs from 2 to n, i.e. (n-1) times, so the number of subtraction flops for k = 1 is (n+1)*(n-1). The book says (n-1)*n in the table on page 240.
ANSWER: Yes, you are right; the formulas in the table on page 240 do not correspond exactly to the program in Figure 9.4. All second factors in the add/sub column should be increased by 1, and therefore all corresponding factors in the mult/div column as well, so the (n+1) should be (n+2) for k = 1. This would modify equations 9.16 through 9.19.

How can a book go wrong? Perhaps the book counted flops for a slightly different program, one that is a little harder to read but does one less operation per row. When the program computes

Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb)

the first updated entry, Aug(i,k), becomes zero anyway, because factor = Aug(i,k)/Aug(k,k) and, substituting,

Aug(i,k) = Aug(i,k) - factor*Aug(k,k) = Aug(i,k) - (Aug(i,k)/Aug(k,k))*Aug(k,k) = 0

Thus we do not need to start the calculation at column k but at column k+1; if you change the statement

Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb)

to

Aug(i,k+1:nb) = Aug(i,k+1:nb) - factor*Aug(k,k+1:nb)

you save one operation per row, and all the formulas in the book are then correct, but only with this change to the program. The results (solutions) will not change, because we are simply not calculating the zero elements that are cancelled by the forward elimination.
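In context, the forward-elimination loop of GaussNaive with that change applied:

% forward elimination, skipping the entries that are known to become zero
for k = 1:n-1
    for i = k+1:n
        factor = Aug(i,k)/Aug(k,k);
        Aug(i,k+1:nb) = Aug(i,k+1:nb) - factor*Aug(k,k+1:nb);
    end
end
% Aug(i,k) is left untouched instead of being zeroed, but back
% substitution never reads the subdiagonal entries, so x is unchanged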
Pivoting
Problems arise with naïve Gauss elimination if a coefficient along the diagonal is 0 (problem: division by 0) or close to 0 (problem: round-off error).
One way to combat these issues is to determine the coefficient with the largest absolute value in the column below the pivot element. The rows can then be switched so that the largest element becomes the pivot element. This is called partial pivoting.
If the columns to the right of the pivot element are also searched and columns are switched as well, this is called complete pivoting.
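A small made-up system that shows why the check matters:

A = [0 2 3; 4 6 7; 2 -3 6];   % a11 = 0, so the first naive pivot divides by zero
b = [8; -3; 5];
% GaussNaive(A,b) produces NaN/Inf values; a pivoting routine first swaps
% row 1 with the row holding the largest |entry| in column 1 and succeeds.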
Partial Pivoting Program

function x = GaussPivot(A,b)
% GaussPivot: Gauss elimination with partial pivoting
%   x = GaussPivot(A,b): Gauss elimination with pivoting.
% input:
%   A = coefficient matrix
%   b = right hand side vector
% output:
%   x = solution vector
[m,n] = size(A);
if m~=n, error('Matrix A must be square'); end
nb = n+1;
Aug = [A b];
% forward elimination
for k = 1:n-1
    % partial pivoting: largest entry in column k, on or below the diagonal
    [big,i] = max(abs(Aug(k:n,k)));
    ipr = i+k-1;
    if ipr ~= k
        Aug([k,ipr],:) = Aug([ipr,k],:);   % swap rows k and ipr
    end
    for i = k+1:n
        factor = Aug(i,k)/Aug(k,k);
        Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb);
    end
end
% back substitution
x = zeros(n,1);
x(n) = Aug(n,nb)/Aug(n,n);
for i = n-1:-1:1
    x(i) = (Aug(i,nb) - Aug(i,i+1:n)*x(i+1:n))/Aug(i,i);
end
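Illustrative usage, reusing the zero-pivot system from the pivoting slide:

A = [0 2 3; 4 6 7; 2 -3 6];  b = [8; -3; 5];
x = GaussPivot(A,b)   % agrees with A\b, while GaussNaive fails on this input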
Tridiagonal Systems
A tridiagonal system is a banded system with a bandwidth of 3: the only nonzero elements lie on the main diagonal and on the diagonals immediately above and below it.
Tridiagonal systems can be solved using the same method as Gauss elimination, but with much less effort, because most of the matrix elements are already 0.
Tridiagonal System Solver

function x = Tridiag(e,f,g,r)
% Tridiag: tridiagonal equation solver for a banded system
%   x = Tridiag(e,f,g,r): Tridiagonal system solver.
% input:
%   e = subdiagonal vector
%   f = diagonal vector
%   g = superdiagonal vector
%   r = right hand side vector
% output:
%   x = solution vector
n = length(f);
% forward elimination
for k = 2:n
    factor = e(k)/f(k-1);
    f(k) = f(k) - factor*g(k-1);
    r(k) = r(k) - factor*r(k-1);
end
% back substitution
x(n) = r(n)/f(n);
for k = n-1:-1:1
    x(k) = (r(k) - g(k)*x(k+1))/f(k);
end
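Illustrative usage on a made-up, diagonally dominant 4 x 4 tridiagonal system (e(1) and g(n) are never referenced by the function, so they are padded with 0):

e = [0 -1 -1 -1];              % subdiagonal
f = [2.04 2.04 2.04 2.04];     % main diagonal
g = [-1 -1 -1 0];              % superdiagonal
r = [40.8 0.8 0.8 200.8];      % right-hand side
x = Tridiag(e,f,g,r)           % matches Gauss elimination on the full matrix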