Scientific Computing Linear Systems – LU Factorization.

LU Factorization Today we will show that the Gaussian Elimination method can be used to factor (or decompose) a square, invertible matrix A into two parts, A = LU, where: – L is a lower triangular matrix with 1's on the diagonal, and – U is an upper triangular matrix.

LU Factorization A = LU We call this an LU factorization of A.

LU Factorization Example Consider the augmented matrix for Ax=b: Gaussian Elimination (with no row swaps) yields:

LU Factorization Example Let's consider the first step in the process – that of creating 0's in the first column. This can be achieved by multiplying the original matrix [A|b] by a matrix M_1:

LU Factorization Example Thus, M_1[A|b] = In the next step of Gaussian Elimination, we pivot on a_22 to get: We can view this as multiplying M_1[A|b] by a matrix M_2:

LU Factorization Example Thus, M_2 M_1[A|b] = In the last step of Gaussian Elimination, we pivot on a_33 to get: We can view this as multiplying M_2 M_1[A|b] by a matrix M_3:

LU Factorization Example Thus, M_3 M_2 M_1[A|b] = So, the original problem of solving Ax = b can now be thought of as solving M_3 M_2 M_1 A x = M_3 M_2 M_1 b.
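
The matrices on these slides are images that did not survive the transcript. As an illustration, here is the same three-step construction in Python/NumPy on a hypothetical 4×4 matrix (the entries below are made up, not the slides' example):

```python
import numpy as np

# Hypothetical 4x4 matrix (the slides' actual entries are not in the transcript).
A = np.array([[2.0, 1.0, 1.0, 0.0],
              [4.0, 3.0, 3.0, 1.0],
              [8.0, 7.0, 9.0, 5.0],
              [6.0, 7.0, 9.0, 8.0]])

def elimination_matrix(B, k):
    # Identity matrix with the negated multipliers below pivot B[k, k].
    M = np.eye(B.shape[0])
    M[k+1:, k] = -B[k+1:, k] / B[k, k]
    return M

M1 = elimination_matrix(A, 0)            # zeroes column 1 below a_11
M2 = elimination_matrix(M1 @ A, 1)       # zeroes column 2 below a_22
M3 = elimination_matrix(M2 @ M1 @ A, 2)  # zeroes column 3 below a_33

U = M3 @ M2 @ M1 @ A
print(np.allclose(U, np.triu(U)))  # True: the product is upper triangular
```

Each M_k touches only the column below the k-th pivot, which is why applying them in sequence reproduces the row operations of Gaussian Elimination.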

LU Factorization Example Note that M_3 M_2 M_1 A is upper triangular; call it U. Thus, A = (M_3 M_2 M_1)^-1 U = LU, where L = (M_3 M_2 M_1)^-1. We hope that L is lower triangular.

LU Factorization Example Recall: In Exercise 3.6 you proved that the inverse of an elimination matrix of this form is the same matrix with the signs of its below-diagonal entries flipped.
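
A quick numerical sketch of this fact (the entries below are hypothetical, not the matrix from Exercise 3.6): negating the below-diagonal entries of an elimination matrix gives its inverse.

```python
import numpy as np

# Hypothetical elimination matrix: identity plus multipliers below the
# diagonal in one column (values made up for illustration).
M = np.eye(4)
M[1:, 0] = [-2.0, -4.0, -3.0]

# Claimed inverse: the same matrix with the signs of those entries flipped.
M_inv = np.eye(4)
M_inv[1:, 0] = [2.0, 4.0, 3.0]

print(np.allclose(M @ M_inv, np.eye(4)))  # True
```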

LU Factorization Example Thus, L = M_1^-1 M_2^-1 M_3^-1

LU Factorization Example So, A can be factored as A=LU

Solving Ax=b using LU Factorization To solve Ax=b we can do the following: – Factor A = LU – Solve the two equations: Lz = b Ux = z – This is the same as L(Ux) = b, or Ax = b.
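
The two triangular solves can be sketched as follows (in Python/NumPy rather than the course's Matlab; the name solve_with_lu is ours, not the lu_solve handout routine, and the factored system at the bottom is hypothetical):

```python
import numpy as np

def solve_with_lu(L, U, b):
    """Solve Ax = b given A = LU: forward substitution for Lz = b,
    then back substitution for Ux = z."""
    n = len(b)
    z = np.zeros(n)
    for i in range(n):                      # forward substitution
        z[i] = b[i] - L[i, :i] @ z[:i]      # L[i, i] == 1, so no division
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # back substitution
        x[i] = (z[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Hypothetical factored system:
L = np.array([[1.0, 0, 0], [2, 1, 0], [4, 3, 1]])
U = np.array([[2.0, 1, 1], [0, 1, 1], [0, 0, 2]])
b = np.array([1.0, 2, 3])
x = solve_with_lu(L, U, b)
```

Checking (L @ U) @ x against b confirms the chain L(Ux) = b.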

LU Factorization Example Our example was Ax=b: Factor:

LU Factorization Example Solve Lz=b using forward substitution: We get:

LU Factorization Example Solve Ux=z using back substitution: We get the solution vector for x: (Note the typo in Pav, page 36, bottom of the page: "z" should be "x".)

LU Factorization Theorem Theorem 3.3. If A is an n×n matrix, and Gaussian Elimination does not encounter a zero pivot (no row swaps), then the algorithm described in the example above generates an LU factorization of A, where L is a lower triangular matrix (with 1's on the diagonal), and U is an upper triangular matrix. Proof: (omitted)

LU Factorization Matlab Function (1 of 2)

function [ L U ] = lu_gauss(A)
% This function computes the LU factorization of A
% Assumes A is not singular and that Gauss Elimination requires no row swaps
[n,m] = size(A);  % n = # rows, m = # columns
if n ~= m; error('A is not a square matrix'); end
for k = 1:n-1  % for each row (except last)
    if A(k,k) == 0, error('Null diagonal element'); end
    for i = k+1:n  % for row i below row k
        m = A(i,k)/A(k,k);  % m = scalar for row i
        A(i,k) = m;  % Put scalar for row i in (i,k) position in A.
                     % We do this to store L values. Since the lower
                     % triangular part of A gets zeroed out in GE, we
                     % can use it as a storage place for values of L.

LU Factorization Matlab Function (2 of 2)

        for j = k+1:n  % Subtract m*row k from row i -> row i
            % We only need to do this for columns k+1 to n
            % since the values below A(k,k) will be zero.
            A(i,j) = A(i,j) - m*A(k,j);
        end
    end
end
% At this point, A should be a matrix where the upper triangular
% part of A is the matrix U and the rest of A below the diagonal
% is L (but missing the 1's on the diagonal).
L = eye(n) + tril(A, -1);  % eye(n) is a matrix with 1's on diagonal
U = triu(A);
end
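
For readers following along outside Matlab, here is a rough Python/NumPy transliteration of lu_gauss, including the trick of storing the multipliers in the zeroed-out lower triangle (a sketch, not the course's code; the demo matrix at the bottom is made up):

```python
import numpy as np

def lu_gauss(A):
    """LU factorization by Gaussian elimination, no pivoting.
    Multipliers are stored in the zeroed-out lower triangle of a working
    copy of A, mirroring the Matlab function above."""
    A = np.array(A, dtype=float)
    n, m = A.shape
    if n != m:
        raise ValueError('A is not a square matrix')
    for k in range(n - 1):
        if A[k, k] == 0:
            raise ZeroDivisionError('Null diagonal element')
        for i in range(k + 1, n):
            mult = A[i, k] / A[k, k]
            A[i, k] = mult                    # store multiplier where the zero goes
            A[i, k+1:] -= mult * A[k, k+1:]   # update only columns right of the pivot
    L = np.eye(n) + np.tril(A, -1)            # 1's on the diagonal, multipliers below
    U = np.triu(A)
    return L, U

# Small demo on a hypothetical matrix:
L, U = lu_gauss([[2.0, 1.0], [6.0, 4.0]])  # L = [[1,0],[3,1]], U = [[2,1],[0,1]]
```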

LU Factorization Matlab

>> A = [ ; ; ; ]

A =

>> [l u] = lu_gauss(A);

LU Factorization Matlab

>> l

l =

>> u

u =

LU Factorization Matlab Solve Function Class Discussion: What must we do to modify the function lu_gauss so that we can compute the solution to Ax=b using the LU factorization? New function: lu_solve on the handouts. Discuss, and consider the Matlab output on the next slide.

LU Factorization Matlab

>> b = [ ]'
>> [x z] = lu_solve(A, b);
>> z

z =

>> x

x =

Linear Algebra Review Now that we have covered some examples of solving linear systems, there are several important questions: – How many numerical operations does Gaussian Elimination take? That is, how fast is it? – Why do we use LU factorization? In what cases does it speed up calculation? – How close is our solution to the exact solution? – Are there faster solution methods?

Operation Count for Gaussian Elimination How many floating point operations (+, -, *, /) are used by the Gaussian Elimination algorithm? Definition: Flop = floating point operation. We will consider a division to be equivalent to a multiplication, and a subtraction equivalent to an addition. Thus, 2/3 = 2*(1/3) will be considered a multiplication, and 2-3 = 2 + (-3) will be considered an addition.

Operation Count for Gaussian Elimination In Gaussian Elimination we use row operations to reduce the augmented matrix [A|b] to upper triangular form.

Operation Count for Gaussian Elimination Consider the number of flops needed to zero out the entries below the first pivot a_11.

Operation Count for Gaussian Elimination First a multiplier is computed for each row below the first row. This requires (n-1) multiplies. m = A(i,k)/A(k,k); Then in each row below row 1 the algorithm performs n multiplies and n adds. (A(i,j) = A(i,j) - m*A(k,j);) Thus, there is a total of (n-1) + (n-1)*2*n flops for this step of Gaussian Elimination. So for k = 1 the algorithm uses 2n^2 - n - 1 flops.

Operation Count for Gaussian Elimination For k = 2, we zero out the column below a_22. There are (n-2) rows below this pivot, so this takes 2(n-1)^2 - (n-1) - 1 flops. For k = 3, we would have 2(n-2)^2 - (n-2) - 1 flops, and so on. To complete Gaussian Elimination, it will take I_n flops, where I_n is the sum of (2j^2 - j - 1) for j = 1 to n (the j = 1 term is 0).

Operation Count for Gaussian Elimination Now, 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6 and 1 + 2 + ... + n = n(n+1)/2. So,
I_n = (2/6)n(n+1)(2n+1) - (1/2)n(n+1) - n
    = [(1/3)(2n+1) - (1/2)] * n(n+1) - n
    = [(2/3)n - (1/6)] * n(n+1) - n
    = (2/3)n^3 + (lower power terms in n)
Thus, the number of flops for Gaussian Elimination is O(n^3).
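
The algebra above can be sanity-checked numerically. This sketch compares the direct sum of the per-step costs against the closed form from the slide:

```python
def I_direct(n):
    # Sum the per-step costs: zeroing the column below pivot k costs
    # 2j^2 - j - 1 flops, where j = n - k + 1 (the j = 1 term is 0).
    return sum(2*j*j - j - 1 for j in range(1, n + 1))

def I_closed(n):
    # Closed form from the slide: (2/6)n(n+1)(2n+1) - (1/2)n(n+1) - n.
    return 2*n*(n+1)*(2*n+1)//6 - n*(n+1)//2 - n

print(all(I_direct(n) == I_closed(n) for n in range(1, 50)))  # True
print(I_closed(100) / ((2/3) * 100**3))  # close to 1: the (2/3)n^3 term dominates
```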

Operation Count for LU Factorization In the algorithm for LU Factorization, we only do the calculations described above to compute L and U. This is because we save the multipliers (m) and store them to create L. So, the number of flops to create L and U is O(n^3).

Operation Count for using LU to solve Ax = b Once we have factored A into LU, we do the following to solve Ax = b: Solve the two equations: Lz = b Ux = z How many flops are needed to do this?

Operation Count for using LU to solve Ax = b To solve Lz=b we use forward substitution:
z_1 = b_1, so we use 0 flops to find z_1.
z_2 = b_2 - l_21 * z_1, so we use 2 flops to find z_2.
z_3 = b_3 - l_31 * z_1 - l_32 * z_2, so we use 4 flops to find z_3, and so on.

Operation Count for using LU to solve Ax = b In total, forward substitution uses 0 + 2 + 4 + ... + 2*(n-1) = 2*(1 + 2 + ... + (n-1)) = 2*(1/2)*(n-1)*n = n^2 - n flops. So, the number of flops for forward substitution is O(n^2).
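
A one-line tally confirms the count: row i of forward substitution costs 2*(i - 1) flops, and the total is n^2 - n.

```python
def forward_sub_flops(n):
    # Row i (1-indexed) needs (i - 1) multiplies and (i - 1) additions.
    return sum(2 * (i - 1) for i in range(1, n + 1))

print(forward_sub_flops(10) == 10**2 - 10)  # True
```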

Operation Count for using LU to solve Ax = b To solve Ux=z we use backward substitution. A similar analysis to that of forward substitution shows that the number of flops for backward substitution is also O(n^2). Thus, once A is factored, the number of flops for using LU to solve Ax=b is O(n^2).

Summary of Two Methods Gaussian Elimination requires O(n^3) flops to solve the linear system Ax = b. To factor A = LU requires O(n^3) flops. Once we have factored A = LU, then using L and U to solve Ax = b requires O(n^2) flops. Suppose we have to solve Ax = b for a given matrix A, but for many different b vectors. What is the most efficient way to do this?

Summary of Two Methods Suppose we have to solve Ax = b for a given matrix A, but for many different b vectors. What is the most efficient way to do this? The most efficient approach is to use the LU decomposition: factor once, then solve Lz = b, Ux = z for each b. Computing LU is O(n^3), but every time we solve Lz=b, Ux=z we use only O(n^2) flops!
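
This factor-once, solve-many pattern can be sketched with SciPy's built-in routines (assuming SciPy is available; note that lu_factor also performs the row swaps of partial pivoting, which the lecture's no-swap algorithm omits, and the random test matrix below is an assumption for illustration):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 50
A = rng.random((n, n)) + n * np.eye(n)   # diagonally dominant, safely invertible

lu_piv = lu_factor(A)          # O(n^3) factorization, paid once

for _ in range(20):            # each additional solve costs only O(n^2)
    b = rng.random(n)
    x = lu_solve(lu_piv, b)
    assert np.allclose(A @ x, b)
```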