ECE 530 – Analysis Techniques for Large-Scale Electrical Systems
Lecture 5: Gaussian Elimination
Prof. Hao Zhu
Dept. of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign

Solving Linear Systems

Gaussian Elimination
The best known and most widely used method for solving linear systems of algebraic equations is attributed to Gauss. Gaussian elimination avoids having to explicitly determine the inverse of A. It leverages the fact that scaling a linear equation does not change its solution, nor does adding one linear equation to another.

Gaussian Elimination, cont.
Gaussian elimination is the elementary procedure in which we use the first equation to eliminate the first variable from the last n-1 equations, then use the new second equation to eliminate the second variable from the last n-2 equations, and so on. After performing n-1 such eliminations we end up with a triangular system, which is easily solved in a backward direction.
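
As a concrete sketch of the whole procedure, here is a minimal Python version (not from the original slides: the function name and the 3-by-3 test matrix are illustrative assumptions, and no pivoting is done):

    import numpy as np

    def gaussian_elimination(A, b):
        # Reduce A x = b to upper triangular form, then back substitute.
        # Assumes A is nonsingular with nonzero pivots (no pivoting here).
        A = A.astype(float)
        b = b.astype(float)
        n = len(b)
        for k in range(n - 1):               # forward elimination
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]        # multiplier for row i
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        x = np.zeros(n)                      # back substitution
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x

    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])         # illustrative values only
    b = np.array([2.0, 4.0, 10.0])
    print(gaussian_elimination(A, b))        # [1. 2. 3.], same as np.linalg.solve(A, b)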

Example 1
We need to solve for x in the system Ax = b. The three elimination steps are given on the next slides; for simplicity, we have appended the r.h.s. vector to the matrix. The first step is to set the diagonal element of row 1 to 1 (i.e., normalize it).

Example 1, Step 1
Eliminate x_1 by subtracting row 1 from all the rows below it.

Example 1, Step 2
Eliminate x_2 by subtracting row 2 from all the rows below it.

Example 1, Step 3
Eliminate x_3 from rows 3 and 4.

Example 1 Solution
Then we solve for x by “going backwards,” i.e., using back substitution.

Triangular Decomposition
In this example, we have:
– triangularized the original matrix by Gaussian elimination using column elimination
– then used back substitution to solve the triangularized system
The following slides present a general scheme for triangular decomposition by Gaussian elimination. The assumption is that A is a nonsingular matrix (hence its inverse exists). Gaussian elimination also requires the diagonals to be nonzero; this can be achieved through re-ordering of rows.

Triangular Decomposition
We form the augmented matrix A^a = [A | b], i.e., we append b to A as an extra column with a_{i,n+1} = b_i, and show the steps of the triangularization scheme.

Triangular Decomposition, Step 1
Step 1: normalize the first equation: a_{1j}^{(1)} = a_{1j} / a_{11}, j = 2, ..., n+1.

Triangular Decomposition, Step 2
Step 2: a) eliminate x_1 from row 2: a_{2j}^{(1)} = a_{2j} - a_{21} a_{1j}^{(1)}, j = 2, ..., n+1
Step 2: b) normalize the second equation: a_{2j}^{(2)} = a_{2j}^{(1)} / a_{22}^{(1)}, j = 3, ..., n+1

Triangular Decomposition, Step 2
At the end of step 2, the first two rows of matrix A^a are in their final triangularized (normalized) form.

Triangular Decomposition, Step 3
Step 3: a) eliminate x_1 and x_2 from row 3: a_{3j}^{(1)} = a_{3j} - a_{31} a_{1j}^{(1)}, j = 2, ..., n+1; then a_{3j}^{(2)} = a_{3j}^{(1)} - a_{32}^{(1)} a_{2j}^{(2)}, j = 3, ..., n+1
Step 3: b) normalize the third equation: a_{3j}^{(3)} = a_{3j}^{(2)} / a_{33}^{(2)}, j = 4, ..., n+1

At the end of step 3, the first three rows of matrix A^a are in their final triangularized form.

Triangular Decomposition, Step k
In general, we have for step k:
a) eliminate x_1, ..., x_{k-1} from row k: a_{kj}^{(m)} = a_{kj}^{(m-1)} - a_{km}^{(m-1)} a_{mj}^{(m)}, j = m+1, ..., n+1, for m = 1, ..., k-1 (with a_{kj}^{(0)} = a_{kj}),
where the superscript (m) denotes the order of the derived system: for row k, a_{kj}^{(m)} is the value of the element in column j after the elements in columns 1, 2, ..., m have become 0, i.e., after eliminating the variables x_1, ..., x_m from the k-th equation.

Triangular Decomposition, Step k
At step k: b) normalize the k-th equation: a_{kj}^{(k)} = a_{kj}^{(k-1)} / a_{kk}^{(k-1)}, j = k+1, ..., n+1.

Triangular Decomposition: Upper Triangular Matrix
We proceed in this manner until we obtain a fully upper triangular matrix (the n-th derived system), with 1's on the diagonal and zeros below it.
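
The scheme above, in a short Python sketch (my own illustrative transcription, assuming nonzero pivots; the names are hypothetical). It overwrites the augmented matrix with the unit-diagonal upper triangular system and also records the eliminated elements, which become the L factor a few slides below:

    import numpy as np

    def triangularize(Aa):
        # Row-wise triangular decomposition of the augmented matrix
        # Aa = [A | b], an n x (n+1) array. Assumes nonzero pivots.
        Aa = Aa.astype(float)
        n = Aa.shape[0]
        L = np.zeros((n, n))
        for k in range(n):
            for m in range(k):                   # a) eliminate x_1..x_{k-1} from row k
                L[k, m] = Aa[k, m]               # a_{km}^{(m-1)}, saved before zeroing
                Aa[k, m+1:] -= Aa[k, m] * Aa[m, m+1:]
                Aa[k, m] = 0.0
            L[k, k] = Aa[k, k]                   # the normalization divisor a_{kk}^{(k-1)}
            Aa[k, k+1:] /= Aa[k, k]              # b) normalize the k-th equation
            Aa[k, k] = 1.0
        return Aa, L

After the call, columns 1..n of Aa hold the unit-diagonal matrix U discussed next, and column n+1 holds the transformed right-hand side.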

Triangular Decomposition
Note that in the general scheme presented, unlike in the example, we triangularly decompose the system by eliminating row-wise rather than column-wise:
– in successive rows we eliminate (reduce to 0) each element to the left of the diagonal rather than those below the diagonal
– row-wise operations will work better for power system sparse matrices

Solving for x
To compute x we perform back substitution: x_n = a_{n,n+1}^{(n)}, and x_k = a_{k,n+1}^{(k)} - \sum_{j=k+1}^{n} a_{kj}^{(k)} x_j for k = n-1, ..., 1.

Upper Triangular Matrix
The triangular decomposition scheme applied to the matrix A also results in an upper triangular matrix U with the elements u_{kk} = 1, u_{kj} = a_{kj}^{(k)} for j > k, and u_{kj} = 0 for j < k. The matrix U plays a key role in the so-termed LU decomposition/factorization of A.

LU Decomposition Theorem
Any nonsingular matrix A has the factorization A = LU, where U is the upper triangular matrix given on the last slide (with 1's on its diagonal) and L is the lower triangular matrix defined by l_{kj} = a_{kj}^{(j-1)} for j <= k (so that l_{kk} = a_{kk}^{(k-1)} is the k-th normalization divisor) and l_{kj} = 0 for j > k. Sketch of the proof on the board.

One Corollary

LU Decomposition Application
As a result of this theorem we can rewrite Ax = b as LUx = b. The factorization is not unique: there exist multiple (L, U) pairs, since for instance U could instead be allowed non-unity diagonals. Once A has been factored, we can solve for x by first solving Ly = b for y, a process known as forward substitution, then solving Ux = y for x using back substitution. In the previous example we can think of L as a record of the forward operations performed on b.
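
A minimal sketch of the two substitution passes, under the convention of these slides (pivots on the diagonal of L, unit diagonal on U); the function name is illustrative. Note that the algorithm slides later in this lecture use the opposite convention (1's on the diagonal of L), which moves the division into the back substitution pass:

    import numpy as np

    def lu_solve(L, U, b):
        # Solve A x = b given A = L U, with unit-diagonal U.
        n = len(b)
        y = np.zeros(n)
        for i in range(n):                   # forward substitution: L y = b
            y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):       # back substitution: U x = y
            x[i] = y[i] - U[i, i+1:] @ x[i+1:]   # no division: u_{ii} = 1
        return x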

LU Composite Matrix
We can store the L and U triangular factors in a composite n by n matrix denoted by [L\U]: the elements of L go on and below the diagonal, and the off-diagonal elements of U go above it.

LU Composite Matrix, cont'd
The diagonal terms of U are not stored, since they are known to be 1's from the steps of the Gaussian elimination. Using the composite matrix [L\U], there is no need to include the vector b during the process of Gaussian elimination, since [L\U] has all the required information to compute A^{-1} b for any b.

LDU Decomposition
In the previous case we required that the diagonals of U be unity, while there was no such restriction on the diagonals of L. An alternative decomposition is A = LDU, where D is the diagonal part of the previous L, and the lower triangular factor is modified to have unity on its diagonals.
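
As a sketch of how this works (assuming factors in the convention above; the helper name is hypothetical), D can be peeled off L so that both triangular factors end up with unit diagonals:

    import numpy as np

    def ldu_from_lu(L, U):
        # Split A = L U (unit-diagonal U) into A = L' D U with
        # unit-diagonal L' and D = diag(L).
        d = np.diag(L)
        L_unit = L / d        # divides column j of L by d[j] (broadcasting)
        return L_unit, np.diag(d), U

Since L[i, j] = L_unit[i, j] * d[j], we have L = L_unit D and hence A = L_unit D U.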

Symmetric Matrix Factorization
The LDU formulation is quite useful for the case of a symmetric matrix: if A = A^T, then U = L'^T (with L' the unit-diagonal lower factor), so A = L' D L'^T. Hence only the upper triangular elements and the diagonal elements need to be stored, reducing storage by almost a factor of 2.
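
A direct LDL^T factorization sketch for symmetric matrices (my own illustration, assuming no pivoting is needed; the matrix values are made up):

    import numpy as np

    def ldlt(A):
        # LDL^T factorization of a symmetric matrix A (no pivoting);
        # returns unit lower triangular L and the diagonal entries d.
        A = A.astype(float)
        n = A.shape[0]
        L = np.eye(n)
        d = np.zeros(n)
        for k in range(n):
            d[k] = A[k, k] - (L[k, :k] ** 2) @ d[:k]
            for i in range(k + 1, n):
                L[i, k] = (A[i, k] - (L[i, :k] * L[k, :k]) @ d[:k]) / d[k]
        return L, d

    A = np.array([[4.0, 1.0, 2.0],
                  [1.0, 3.0, 0.0],
                  [2.0, 0.0, 5.0]])               # illustrative symmetric matrix
    L, d = ldlt(A)
    print(np.allclose(L @ np.diag(d) @ L.T, A))   # True

Only L's strictly lower triangle and d are computed, which is the storage saving mentioned above.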

Pivoting
An immediate problem that can occur with Gaussian elimination is zeros on the diagonal; for example, the matrix [0 1; 1 0] is nonsingular yet its first pivot is zero. This problem can be solved by a process known as "pivoting," which involves the interchange of either both rows and columns (full pivoting) or just the rows (partial pivoting).
– Partial pivoting is much easier to implement, and actually can be shown to work quite well

Pivoting, cont.
In the previous example the (partial) pivot would just be to interchange the two rows. For LU decomposition, we need to keep track of the interchanged rows! Partial pivoting can be helpful in improving numerical stability even when the diagonals are not zero:
– when factoring row k, interchange rows so the new diagonal is the largest-magnitude element in column k for rows j >= k
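
A sketch of the factorization with partial pivoting (illustrative code, not the course's reference implementation; the permutation is recorded in a vector so that b can be reordered the same way before solving):

    import numpy as np

    def lu_partial_pivot(A):
        # In-place LU with partial pivoting (1's assumed on L's diagonal).
        # Returns the composite [L\U] matrix and the row permutation.
        A = A.astype(float)
        n = A.shape[0]
        perm = np.arange(n)                      # records the row interchanges
        for k in range(n - 1):
            p = k + np.argmax(np.abs(A[k:, k]))  # largest element in column k, rows >= k
            if p != k:
                A[[k, p], :] = A[[p, k], :]      # interchange rows k and p
                perm[[k, p]] = perm[[p, k]]
            for i in range(k + 1, n):
                A[i, k] /= A[k, k]               # multiplier, stored in L's position
                A[i, k+1:] -= A[i, k] * A[k, k+1:]
        return A, perm

To solve Ax = b afterwards, apply the same interchanges to b (i.e., use b[perm]) before the substitution passes.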

LU Algorithm Without Pivoting, Processing by Row
We will use the more common approach of having ones on the diagonals of L. Also, in the common diagonally dominant power system problems, pivoting is not needed. The algorithm below is in row form (useful with sparsity!):

    For i := 2 to n Do Begin           // This is the row being processed
      For j := 1 to i-1 Do Begin       // Rows subtracted from row i
        A[i,j] = A[i,j]/A[j,j]         // This is the scaling
        For k := j+1 to n Do Begin     // Go through each column in row i
          A[i,k] = A[i,k] - A[i,j]*A[j,k]
        End;
      End;
    End;
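
The same algorithm as a runnable Python transcription (a sketch; works on a NumPy array or a list of lists, which it overwrites with the factored values):

    def lu_in_place(A):
        # Row-form LU without pivoting; A is overwritten with the
        # composite [L\U] factors (1's on L's diagonal are implicit).
        n = len(A)
        for i in range(1, n):                # row being processed
            for j in range(i):               # rows subtracted from row i
                A[i][j] = A[i][j] / A[j][j]  # the scaling (multiplier)
                for k in range(j + 1, n):    # remaining columns of row i
                    A[i][k] = A[i][k] - A[i][j] * A[j][k]
        return A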

LU Example
Starting matrix: the first row is unchanged; we start with i=2. Result with i=2, j=1: done with row 2.

LU Example, cont.
Result with i=3, j=1. Result with i=3, j=2: done with row 3; done!

LU Example, cont.
The original matrix is used to hold L and U. With this approach the original A matrix has been replaced by the factored values!

Forward Substitution
Forward substitution solves Ly = b, with the values in b being overwritten (replaced by the y values):

    For i := 2 to n Do Begin           // This is the row being processed
      For j := 1 to i-1 Do Begin
        b[i] = b[i] - A[i,j]*b[j]      // This is just using the L matrix
      End;
    End;
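
The same pass in Python (a sketch matching the pseudocode; L's unit diagonal means no division is needed):

    def forward_substitute(A, b):
        # Solve L y = b using the L part of the composite [L\U] matrix;
        # b is overwritten with y.
        n = len(b)
        for i in range(1, n):
            for j in range(i):
                b[i] = b[i] - A[i][j] * b[j]
        return b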

Forward Substitution Example

Backward Substitution
Backward substitution solves Ux = y (with the values of y contained in the b vector as a result of the forward substitution):

    For i := n DownTo 1 Do Begin       // This is the row being processed
      For j := i+1 to n Do Begin
        b[i] = b[i] - A[i,j]*b[j]      // This is just using the U matrix
      End;
      b[i] = b[i]/A[i,i]               // The A[i,i] values are ≠ 0 if A is nonsingular
    End;
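
And its Python counterpart (a sketch; b holds y on entry and x on return):

    def backward_substitute(A, b):
        # Solve U x = y using the U part of the composite [L\U] matrix.
        n = len(b)
        for i in range(n - 1, -1, -1):
            for j in range(i + 1, n):
                b[i] = b[i] - A[i][j] * b[j]
            b[i] = b[i] / A[i][i]      # A[i,i] != 0 since A is nonsingular
        return b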

Backward Substitution Example

Computational Complexity
Computational complexity indicates how the number of numerical operations scales with the size of the problem. Computational complexity is expressed using "Big O" notation; assume a problem of size n:
– summing the elements of a vector is O(n)
– adding two n by n full matrices is O(n^2)
– multiplying two n by n full matrices is O(n^3)
– inverting an n by n full matrix, or doing Gaussian elimination, is O(n^3)
– solving the traveling salesman problem by brute-force search is O(n!)

Computational Complexity
Knowing the computational complexity of a problem can help to determine whether it can be solved (at least using a particular method).
– Scaling factors do not affect the computational complexity: an algorithm that takes n^3/2 operations has the same computational complexity as one that takes n^3/10 operations (though obviously the second one is faster!)
With O(n^3) scaling, factoring a full matrix quickly becomes computationally intractable!
– A 100 by 100 matrix takes a million operations (give or take)
– a 1000 by 1000 matrix takes a billion operations
– a 10,000 by 10,000 matrix takes a trillion operations!
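
The arithmetic behind those last three bullets, as a two-line sketch:

    for n in (100, 1000, 10_000):
        print(f"n = {n:>6}: on the order of n**3 = {n**3:.0e} operations")
    # n =    100: on the order of n**3 = 1e+06 operations
    # n =   1000: on the order of n**3 = 1e+09 operations
    # n =  10000: on the order of n**3 = 1e+12 operations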