Gauss Elimination



2 Gauss Elimination

3 Small Matrices  For small numbers of equations, solvable by hand  Graphical method  Cramer's rule  Elimination

4 Graphical Method  2x1 – x2 = 3 and x1 + x2 = 3: one solution (the lines intersect at a single point)

5 Graphical Method  2x1 – x2 = 3 and 2x1 – x2 = –1: no solution (parallel lines)

6 Graphical Method  6x1 – 3x2 = 9 and 2x1 – x2 = 3: infinitely many solutions (the two equations describe the same line)

7 Graphical Method  2x1 – x2 = 3 and 2.1x1 – x2 = 3: ill conditioned (the slopes are so close that the intersection point cannot be determined visually)

8 Cramer's Rule: Determinant  Compute the determinant D  2 x 2 matrix: D = a11·a22 – a12·a21  3 x 3 matrix: expand along the first row using 2 x 2 minors

9 Cramer's Rule  To find xk for the system [A]{x} = {b}  Replace the k-th column of a's with the b's (i.e., aik → bi), then xk is that determinant divided by D

10 Cramer's Rule: Example  3 x 3 matrix
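The column-replacement recipe above is easy to sketch in code. The following Python/NumPy version is not from the slides (the deck uses MATLAB), and the 3 x 3 system is illustrative, since the slide's actual numbers are not recoverable from this transcript.

```python
import numpy as np

def cramer_solve(A, b):
    """Cramer's rule: x_k = det(A_k) / det(A), where A_k is A with
    its k-th column replaced by the right-hand-side vector b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.linalg.det(A)
    if abs(D) < 1e-12:
        raise ValueError("determinant is (near) zero: singular or ill-conditioned")
    x = np.empty(len(b))
    for k in range(len(b)):
        Ak = A.copy()
        Ak[:, k] = b                    # replace k-th column with b
        x[k] = np.linalg.det(Ak) / D
    return x

# Illustrative 3 x 3 system (hypothetical numbers, not the slide's)
A = [[0.3, 0.52, 1.0],
     [0.5, 1.0,  1.9],
     [0.1, 0.3,  0.5]]
b = [-0.01, 0.67, -0.44]
print(cramer_solve(A, b))
```

For more than a handful of equations this costs far more than elimination (one determinant per unknown), which is exactly the deck's motivation for Gauss elimination.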

11 Ill-Conditioned Systems  What happens if the determinant D is very small or zero?  Division by zero (linearly dependent system)  Division by a small number: round-off error  Loss of significant digits

12 Elimination Method  Eliminate x2 between the two equations  Subtract to solve for x1  Not practical for large numbers (> 4) of equations

13 MATLAB's Methods  Forward slash ( / )  Back-slash ( \ )  Multiplication by the inverse of the quantity under the slash
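As an aside not in the slides: the same left-division idea exists in Python, where numpy.linalg.solve plays the role of MATLAB's backslash. The 2 x 2 system here is the one from the deck's graphical-method slide.

```python
import numpy as np

# MATLAB's  x = A\b  (left division) conceptually multiplies {b} by the
# inverse of A; numpy.linalg.solve is the Python analogue and, like
# backslash, uses a factorization rather than an explicit inverse.
A = np.array([[2.0, -1.0],
              [1.0,  1.0]])
b = np.array([3.0, 3.0])

x = np.linalg.solve(A, b)   # solves 2x1 - x2 = 3, x1 + x2 = 3
print(x)                    # -> approximately [2, 1]
```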

14 Gauss Elimination  Manipulate equations to remove one of the unknowns  Develop an algorithm to do this repeatedly  The goal is to set up an upper triangular matrix  Back substitution to find the solution (roots)

15 Basic Gauss Elimination  Direct method (no iteration required)  Forward elimination  Column-by-column elimination of the below-diagonal elements  Reduce to an upper triangular matrix  Back substitution

16 Naive Gauss Elimination  Begin with the full system of n equations  Multiply the first equation by a21 / a11 and subtract it from the second equation

17 Forward Elimination  This reduces the system to one with a zero below a11 in the second row  Repeat the forward elimination to zero the rest of the first column

18 Forward Elimination  The first equation is the pivot equation  a11 is the pivot element  Now multiply the second equation by a'32 / a'22 and subtract it from the third equation

19 Forward Elimination  Repeat the elimination of a'i2 for the remaining rows  Continue column by column until the system is upper triangular

20 Back Substitution  Now we can perform back substitution to get {x}  xn follows by simple division from the last equation  Substitute it into the (n-1)th equation  Solve for xn-1  Repeat the process to solve for xn-2, xn-3, …, x2, x1

21 Back Substitution (Naive Gauss Elimination)  Back substitution starts with xn = bn / ann (using the coefficients left after elimination)  Then, for i = n-1, n-2, …, 1: xi = ( bi – Σj=i+1..n aij·xj ) / aii  This solves for xn-1, xn-2, …, x3, x2, x1 in turn
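The forward-elimination and back-substitution steps above can be sketched as one routine. This Python version (NumPy assumed, function name mine) is a port of the idea; the deck's own implementation is the MATLAB GaussNaive function shown later. The test system is the 4 x 4 example traced later in the deck, as best it can be recovered from the transcript.

```python
import numpy as np

def gauss_naive(A, b):
    """Naive Gauss elimination (no pivoting): forward elimination to an
    upper triangular system, then back substitution from x_n down to x_1."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    # forward elimination, column by column
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]   # fails if the pivot A[k,k] is zero
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    # back substitution
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# The 4 x 4 system traced in the deck's GaussNaive example
A = [[1, 0, 2, 3], [-1, 2, 2, -3], [0, 1, 1, 4], [6, 2, 2, 4]]
b = [1, -1, 2, 1]
print(gauss_naive(A, b))   # -> approximately [-0.1857, 0.2286, -0.1143, 0.4714]
```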

22 Elimination of First Column

23 Elimination of Second Column

24 Elimination of Third Column (yields the upper triangular matrix)

25 Back Substitution (on the upper triangular matrix)

26 Example

27 Forward Elimination

28 Upper Triangular Matrix

29 Back Substitution

30
function x = GaussNaive(A,b)
% x = GaussNaive(A,b):
%   Gauss elimination without pivoting.
% input:
%   A = coefficient matrix
%   b = right hand side vector
% output:
%   x = solution vector
[m,n] = size(A);
if m~=n, error('Matrix A must be square'); end
nb = n+1;
Aug = [A b];
% forward elimination
for k = 1:n-1
  for i = k+1:n
    factor = Aug(i,k)/Aug(k,k);
    Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb);
  end
  disp(Aug);
end
% back substitution
x = zeros(n,1);
x(n) = Aug(n,nb)/Aug(n,n);
for i = n-1:-1:1
  x(i) = (Aug(i,nb) - Aug(i,i+1:n)*x(i+1:n))/Aug(i,i);
end

31 GaussNaive Trace (print every factor and Aug; do not suppress output)

>> format short
>> x = GaussNaive(A,b)

Aug = [A, b]:
     1     0     2     3     1
    -1     2     2    -3    -1
     0     1     1     4     2
     6     2     2     4     1

Eliminate first column (factor = -1, 0, 6):
     1     0     2     3     1
     0     2     4     0     0
     0     1     1     4     2
     0     2   -10   -14    -5

Eliminate second column (factor = 0.5000, 1):
     1     0     2     3     1
     0     2     4     0     0
     0     0    -1     4     2
     0     0   -14   -14    -5

Eliminate third column (factor = 14):
     1     0     2     3     1
     0     2     4     0     0
     0     0    -1     4     2
     0     0     0   -70   -33

Back substitution yields x4, x3, x2, x1 in turn:
x =
   -0.1857
    0.2286
   -0.1143
    0.4714

32 Gauss Elimination: Algorithm
 Forward elimination
   for each equation j, j = 1 to n-1
     for all equations k greater than j
       (a) multiply equation j by akj / ajj
       (b) subtract the result from equation k
 This leads to an upper triangular matrix
 Back substitution
   (a) determine xn from the last equation
   (b) put xn into the (n-1)th equation, solve for xn-1
   (c) repeat from (b), moving back to n-2, n-3, etc., until all equations are solved

33 Operation Count for Gauss Elimination  The elimination routine requires on the order of n^3/3 multiply-subtract operations  Total operation count for the elimination stage = 2n^3/3 + O(n^2) flops  Back substitution requires on the order of n^2/2 operations  Total operation count for the back-substitution stage = n^2 + O(n) flops

34 Operation Count for Gauss Elimination  #flops (floating-point operations) for naive Gauss elimination  Computation time increases rapidly with n  Most of the effort is incurred in the elimination step  Efficiency can be improved by reducing the elimination effort

35 Partial Pivoting  Problems with Gauss elimination  division by zero  round-off errors  ill-conditioned systems  Use "pivoting" to avoid these  Find the row with the largest absolute coefficient at or below the pivot element  Switch rows ("partial pivoting")  Complete pivoting also switches columns (rarely used)

36 Round-off Errors  A lot of chopping occurs with more than n^3/3 operations  More importantly, the error is propagated  For large systems (more than 100 equations), round-off error is important (and machine dependent)  Ill-conditioned systems: small changes in the coefficients lead to large changes in the solution  Round-off errors are especially important for ill-conditioned systems

37 Ill-Conditioned System  2x1 – x2 = 3  2.1x1 – x2 = 3

38 Division by a Small Number  Consider the ill-conditioned system above  Since the slopes are almost equal, elimination divides by a very small number, magnifying round-off error
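To make this concrete, here is a small Python/NumPy experiment (not from the slides) on the deck's ill-conditioned pair 2x1 - x2 = 3 and 2.1x1 - x2 = 3: the determinant is tiny, and a 0.1 perturbation in one right-hand-side entry moves the solution from about (0, -3) to about (1, -1).

```python
import numpy as np

# The deck's ill-conditioned pair: nearly parallel lines.
A = np.array([[2.0, -1.0],
              [2.1, -1.0]])
print(np.linalg.det(A))              # small determinant, about 0.1

x1 = np.linalg.solve(A, [3.0, 3.0])  # -> approximately (0, -3)
x2 = np.linalg.solve(A, [3.0, 3.1])  # perturb one entry of b by 0.1
print(x1, x2)                        # x2 is approximately (1, -1)
```

A small determinant does not by itself prove ill-conditioning (it depends on scaling), but for this normalized pair it signals exactly the sensitivity shown here.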

39 Determinant  Calculate the determinant using Gauss elimination: det[A] is the product of the diagonal elements of the upper triangular matrix, with the sign flipped once for each row interchange
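A Python sketch of this determinant calculation (NumPy assumed; the slides give no code for this step). It runs the same forward elimination with partial pivoting, tracks the sign changes from row swaps, and multiplies the pivots.

```python
import numpy as np

def det_by_elimination(A):
    """det(A) = (-1)^(number of row swaps) * product of the pivots
    left on the diagonal after Gauss elimination with partial pivoting."""
    U = np.asarray(A, dtype=float).copy()
    n = U.shape[0]
    sign = 1.0
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))   # partial pivoting
        if p != k:
            U[[k, p]] = U[[p, k]]
            sign = -sign                      # each swap flips the sign
        if U[k, k] == 0.0:
            return 0.0                        # singular matrix
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
    return sign * np.prod(np.diag(U))

# The 4 x 4 matrix from the deck's elimination example
A = [[1, 0, 2, 3], [-1, 2, 2, -3], [0, 1, 1, 4], [6, 2, 2, 4]]
print(det_by_elimination(A))   # -> approximately 140
```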

40 Gauss Elimination with Partial Pivoting
 Forward elimination
   for each equation j, j = 1 to n-1
     first scale each equation k greater than j, then pivot (switch rows)
     now perform the elimination:
       (a) multiply equation j by akj / ajj
       (b) subtract the result from equation k

41 Partial (Row) Pivoting

42 Forward Elimination (interchange rows 1 & 4)

43 Forward Elimination (no interchange required)

44 Back Substitution (after interchanging rows 3 & 4)

45
function x = GaussPivot(A,b)
% x = GaussPivot(A,b):
%   Gauss elimination with partial pivoting.
[m,n] = size(A);
if m~=n, error('Matrix A must be square'); end
nb = n+1;
Aug = [A b];
% forward elimination
for k = 1:n-1
  % partial pivoting (switch rows):
  % [big,i] = max(x) returns the largest element of {x} and its index
  [big,i] = max(abs(Aug(k:n,k)));
  ipr = i+k-1;
  if ipr~=k
    Aug([k,ipr],:) = Aug([ipr,k],:);
  end
  for i = k+1:n
    factor = Aug(i,k)/Aug(k,k);
    Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb);
  end
  disp(Aug);
end
% back substitution
x = zeros(n,1);
x(n) = Aug(n,nb)/Aug(n,n);
for i = n-1:-1:1
  x(i) = (Aug(i,nb) - Aug(i,i+1:n)*x(i+1:n))/Aug(i,i);
end

46 GaussPivot Trace (first column)

>> format short
>> x = GaussPivot(A,b)

Aug = [A b]:
     1     0     2     3     1
    -1     2     2    -3    -1
     0     1     1     4     2
     6     2     2     4     1

First pivot element and its index: big = 6, i = 4, ipr = 4
Interchange rows 1 and 4:
     6     2     2     4     1
    -1     2     2    -3    -1
     0     1     1     4     2
     1     0     2     3     1

Eliminate first column (factor = -0.1667, 0, 0.1667):
    6.0000    2.0000    2.0000    4.0000    1.0000
         0    2.3333    2.3333   -2.3333   -0.8333
         0    1.0000    1.0000    4.0000    2.0000
         0   -0.3333    1.6667    2.3333    0.8333

47 GaussPivot Trace (continued)

Second pivot element and index: big = 2.3333, i = 1, ipr = 2 (no interchange needed)
Eliminate second column (factor = 0.4286, -0.1429):
    6.0000    2.0000    2.0000    4.0000    1.0000
         0    2.3333    2.3333   -2.3333   -0.8333
         0         0         0    5.0000    2.3571
         0         0    2.0000    2.0000    0.7143

Third pivot element and index: big = 2, i = 2, ipr = 4
Interchange rows 3 and 4:
    6.0000    2.0000    2.0000    4.0000    1.0000
         0    2.3333    2.3333   -2.3333   -0.8333
         0         0    2.0000    2.0000    0.7143
         0         0         0    5.0000    2.3571

Back substitution yields x4, x3, x2, x1 in turn:
x =
   -0.1857
    0.2286
   -0.1143
    0.4714

(Save the factors fij at each step for LU decomposition.)

48 Banded Matrix  HBW: half bandwidth

49 Banded Matrix  ai,j = 0 if j > i + HB or j < i – HB  HB: half bandwidth  B: bandwidth, B = 2·HB + 1  In this example HB = 1 and B = 3

50 Tridiagonal Matrix  Only three nonzero elements in each equation (3n stored elements instead of n^2)  Subdiagonal, diagonal, superdiagonal  Solve by Gauss elimination

51 Tridiagonal Matrix: Example  Special case of a banded matrix with bandwidth B = 3  Saves storage: 3·n elements instead of n·n

52 Tridiagonal Matrix  Forward elimination: use factor = ek / fk-1 to eliminate the subdiagonal element, applying the same operations to the right-hand side  Back substitution

53 Tridiagonal Matrix: Hand Calculations  (a) Forward elimination  (b) Back substitution

54
function x = Tridiag(e,f,g,r)
% x = Tridiag(e,f,g,r):
%   Tridiagonal system solver.
% input:
%   e = subdiagonal vector
%   f = diagonal vector
%   g = superdiagonal vector
%   r = right hand side vector
% output:
%   x = solution vector
n = length(f);
% forward elimination
for k = 2:n
  factor = e(k)/f(k-1);
  f(k) = f(k) - factor*g(k-1);
  r(k) = r(k) - factor*r(k-1);
end
% back substitution
x(n) = r(n)/f(n);
for k = n-1:-1:1
  x(k) = (r(k) - g(k)*x(k+1))/f(k);
end

55 Tridiagonal Matrix: Worked Example

function [e,f,g,r] = example
e = [ 0    -2     4  -0.5   1.5    -3];
f = [ 1     6     9  3.25  1.75    13];
g = [-2     4  -0.5   1.5    -3     0];
r = [-3    22  35.5 -7.75     4   -33];

Note: e(1) = 0 and g(n) = 0.

>> [e,f,g,r] = example;
>> x = Tridiag(e,f,g,r)
x =
     1     2     3    -1    -2    -3
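For cross-checking, here is a Python port of the tridiagonal (Thomas) solver above, run on the same example vectors; NumPy is assumed and the lowercase function name is mine.

```python
import numpy as np

def tridiag(e, f, g, r):
    """Thomas algorithm: e = subdiagonal (e[0] unused), f = diagonal,
    g = superdiagonal (g[-1] unused), r = right-hand-side vector."""
    f = np.asarray(f, dtype=float).copy()
    r = np.asarray(r, dtype=float).copy()
    n = len(f)
    # forward elimination of the subdiagonal
    for k in range(1, n):
        factor = e[k] / f[k - 1]
        f[k] -= factor * g[k - 1]
        r[k] -= factor * r[k - 1]
    # back substitution
    x = np.empty(n)
    x[-1] = r[-1] / f[-1]
    for k in range(n - 2, -1, -1):
        x[k] = (r[k] - g[k] * x[k + 1]) / f[k]
    return x

# The deck's example vectors
e = [0, -2, 4, -0.5, 1.5, -3]
f = [1, 6, 9, 3.25, 1.75, 13]
g = [-2, 4, -0.5, 1.5, -3, 0]
r = [-3, 22, 35.5, -7.75, 4, -33]
print(tridiag(e, f, g, r))   # -> approximately [1, 2, 3, -1, -2, -3]
```

Only the three diagonals are ever touched, so the whole solve is O(n) instead of the O(n^3) of full Gauss elimination.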

56 Supplement: Big-O

57 Concept of Order of Growth  We say fA(n) = 30n+8 is (at most) order n, or O(n): it is, at most, roughly proportional to n  fB(n) = n^2+1 is order n^2, or O(n^2): it is (at most) roughly proportional to n^2  Any function whose exact (tightest) order is O(n^2) is faster-growing than any O(n) function  Later we will introduce Θ for expressing exact order

58 Definition: O(g) (Asymptotic Upper Bound)
 Let g be any function R → R
 Define "at most order g", written O(g), to be: {f: R → R | ∃c,k ∀x > k: f(x) ≤ c·g(x)}
 "Beyond some point k, function f is at most a constant c times g (i.e., proportional to g)"
 "f is at most order g", "f is O(g)", and "f = O(g)" all just mean that f ∈ O(g)

59 About the Definition of O(g)

60 Big-O Examples (1/3)

61 Big-O Examples (2/3)

62 Big-O Examples (3/3)
 Show that 30n+8 is O(n)
   To show: ∃c,k ∀n > k: 30n+8 ≤ cn
   Let c = 31, k = 8. Assume n > k = 8. Then cn = 31n = 30n + n > 30n+8, so 30n+8 < cn
 Show that n^2+1 is O(n^2)
   To show: ∃c,k ∀n > k: n^2+1 ≤ cn^2
   Let c = 2, k = 1. Assume n > 1. Then cn^2 = 2n^2 = n^2 + n^2 > n^2+1, so n^2+1 < cn^2
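The witnesses chosen in these proofs can also be checked numerically over a finite range; this small Python script is not from the slides and is only a sanity check, not a proof.

```python
# Check the Big-O witnesses from the proofs above:
# with c = 31, k = 8:  30n + 8 <= 31n   for every n > 8
# with c = 2,  k = 1:  n^2 + 1 <= 2n^2  for every n > 1
def holds_30n8(c=31, k=8, upto=10_000):
    return all(30*n + 8 <= c*n for n in range(k + 1, upto))

def holds_n2(c=2, k=1, upto=10_000):
    return all(n*n + 1 <= c*n*n for n in range(k + 1, upto))

print(holds_30n8(), holds_n2())   # both bounds hold on the checked range
```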

63 Big-O Example (plot of n, 30n+8, and cn = 31n versus increasing n)
 Note that 30n+8 isn't less than n anywhere (n > 0)
 It isn't even less than 31n everywhere
 But it is less than 31n everywhere to the right of n = 8 (n > k = 8)
 Hence 30n+8 ∈ O(n)

64

65 Theorems You Should Know

66

67 Examples of Applying the Theorems

68 Summary
 For any g: R → R, "at most order g": O(g) ≡ {f: R → R | ∃c,k ∀x > k: |f(x)| ≤ |c·g(x)|}
 Often we deal only with positive functions and can ignore the absolute value symbols
 "f ∈ O(g)" is often written "f is O(g)" or "f = O(g)"
 The latter form is an instance of a more general convention...

69 Definition of Ω(g) (Asymptotic Lower Bound)
 Ω(g) ≡ {f: R → R | ∃c,k ∀x > k: |f(x)| ≥ |c·g(x)|}
 Remarks
 O(·) gives worst-case guarantees (good news), while Ω gives a lower bound (bad news)

70 Definition: Θ(g), Exactly Order g (Asymptotic Tight Bound)
 If f ∈ O(g) and g ∈ O(f), then we say "g and f are of the same order" or "f is (exactly) order g" and write f ∈ Θ(g)
 Another equivalent definition: Θ(g) ≡ {f: R → R | ∃c1,c2,k > 0 ∀x > k: |c1·g(x)| ≤ |f(x)| ≤ |c2·g(x)|}
 "Everywhere beyond some point k, f(x) lies between two multiples of g(x)"

71

