Chapter 6 Direct Methods for Solving Linear Systems -- Pivoting Strategies, Matrix Factorization, and Special Types of Matrices

 6.2 Pivoting Strategies

Example: Solve the linear system using 4-digit arithmetic with rounding, and compare with the exact solution. Applying Gaussian elimination directly, the computed answer is badly wrong, and the trouble maker is the first pivot: a small pivot element may cause trouble, because dividing by it produces a huge multiplier, and the subsequent subtractions swamp the other entries and destroy their significant digits.
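The slide's matrix and numbers were images and did not survive transcription; the sketch below uses the classic small-pivot system 0.003000 x1 + 59.14 x2 = 59.17, 5.291 x1 − 6.130 x2 = 46.78 (exact solution x1 = 10.00, x2 = 1.000) as an assumed stand-in, and simulates 4-digit rounding arithmetic:

```python
from math import floor, log10

def fl(x, digits=4):
    """Round x to `digits` significant digits (simulated 4-digit arithmetic)."""
    if x == 0.0:
        return 0.0
    return round(x, digits - 1 - floor(log10(abs(x))))

def solve_2x2_no_pivot(a11, a12, b1, a21, a22, b2):
    """Naive 2x2 Gaussian elimination, rounding after every operation."""
    m = fl(a21 / a11)              # multiplier: huge when a11 is tiny
    a22p = fl(a22 - fl(m * a12))   # the large products swamp a22 and b2,
    b2p = fl(b2 - fl(m * b1))      # wiping out their significant digits
    x2 = fl(b2p / a22p)
    x1 = fl(fl(b1 - fl(a12 * x2)) / a11)
    return x1, x2

# Assumed example system; exact solution is x1 = 10.00, x2 = 1.000,
# but the tiny pivot 0.003 drives the computed x1 to roughly -10.
x1, x2 = solve_2x2_no_pivot(0.003000, 59.14, 59.17, 5.291, -6.130, 46.78)
```

Interchanging the two rows first (so the pivot is 5.291) and repeating the same 4-digit arithmetic recovers x1 = 10.00, x2 = 1.000.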

 Partial Pivoting (or maximal column pivoting) -- Determine the smallest p ≥ k such that

  |a_pk^(k)| = max_{k ≤ i ≤ n} |a_ik^(k)|,

and interchange the pth and the kth rows.

Example: Solve the linear system using 4-digit arithmetic with rounding. Partial pivoting can still fail when the chosen pivot, though largest in its column, is small relative to the other entries in its row.
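A minimal sketch of the selection rule, in standard double precision rather than the slide's 4-digit arithmetic (`max` with a key returns the first maximizer, i.e. the smallest such p):

```python
def gauss_partial_pivot(A, b):
    """Gaussian elimination with partial pivoting: at step k, pick the
    smallest p >= k maximizing |A[p][k]| and interchange rows p and k."""
    n = len(A)
    A = [row[:] for row in A]      # work on copies
    b = b[:]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0.0:
            raise ValueError("matrix is singular")
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```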

 Scaled Partial Pivoting (or scaled-column pivoting) -- Place in the pivot position the element that is largest relative to the entries in its row.

Step 1: Define a scale factor s_i for each row as s_i = max_{1 ≤ j ≤ n} |a_ij|.

Step 2: Determine the smallest p ≥ k such that |a_pk| / s_p = max_{k ≤ i ≤ n} |a_ik| / s_i, and interchange the pth and the kth rows (interchanging the scale factors as well).

Note: The scale factors s_i are computed only once, from the original matrix; if they were recomputed at every step, the method would be too slow.
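A sketch of the scaled rule; note that when rows are interchanged, the corresponding scale factors travel with them:

```python
def gauss_scaled_pivot(A, b):
    """Scaled partial pivoting: scale factors s_i = max_j |a_ij| are computed
    once from the original rows; at step k choose the smallest p >= k
    maximizing |A[p][k]| / s_p."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    s = [max(abs(v) for v in row) for row in A]
    if min(s) == 0.0:
        raise ValueError("matrix has a zero row")
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]) / s[i])
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
            s[k], s[p] = s[p], s[k]   # swap the scale factors too
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x
```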

 Complete Pivoting (or maximal pivoting) -- Search all the entries a_ij for i, j = k, …, n to find the entry with the largest magnitude. Both row and column interchanges are performed to bring this entry to the pivot position.

[Figure: results of solving 3-by-3 linear systems with direct Gaussian elimination vs. with complete pivoting.]
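Column interchanges permute the unknowns, so a sketch of complete pivoting must record the column permutation and undo it at the end:

```python
def gauss_complete_pivot(A, b):
    """Complete (maximal) pivoting: at step k search all entries A[i][j] with
    i, j >= k for the one of largest magnitude; interchange both rows and
    columns to bring it to the pivot position."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    perm = list(range(n))              # perm[j] = original index of column j
    for k in range(n - 1):
        p, q = max(((i, j) for i in range(k, n) for j in range(k, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        if A[p][q] == 0.0:
            raise ValueError("matrix is singular")
        A[k], A[p] = A[p], A[k]        # row interchange
        b[k], b[p] = b[p], b[k]
        for row in A:                  # column interchange
            row[k], row[q] = row[q], row[k]
        perm[k], perm[q] = perm[q], perm[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    y = [0.0] * n
    for i in range(n - 1, -1, -1):
        y[i] = (b[i] - sum(A[i][j] * y[j] for j in range(i + 1, n))) / A[i][i]
    x = [0.0] * n
    for j in range(n):
        x[perm[j]] = y[j]              # undo the column permutation
    return x
```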

 Amount of Computation

 Partial Pivoting: requires about O(n^2) additional comparisons.
 Scaled Partial Pivoting: requires about O(n^2) additional comparisons and O(n^2) divisions.
 Complete Pivoting: requires about O(n^3/3) additional comparisons.

Note: If new scale factors were determined each time a row-interchange decision was to be made, scaled partial pivoting would instead add O(n^3/3) comparisons in addition to the O(n^2) divisions.

 6.5 Matrix Factorization

 Matrix Form of Gaussian Elimination

Step 1: Let L_1 be the identity matrix with the negated multipliers −m_i1 = −a_i1 / a_11 (i = 2, …, n) placed in the first column below the diagonal; then L_1 A^(1) = A^(2). Steps 2 through n−2 proceed in the same way.

Step n−1: L_{n−1} L_{n−2} … L_1 A = A^(n) = U, where L_k is the identity matrix with −m_ik (i = k+1, …, n) in the kth column below the diagonal.

Let L = (L_{n−1} … L_1)^(−1), a unit ("unitary") lower-triangular matrix, and let U = A^(n). Then A = LU is the LU factorization of A.

-- Hey, hasn't GE given me enough headache? Why do I have to know its matrix form??!
-- It pays off when you have to solve the system for different right-hand sides b with a fixed A.
-- Could you be more specific, please?
-- Factorize A first; then for every b you only have to solve two simple triangular systems, Ly = b and Ux = y.
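A sketch of that "factor once, reuse for every b" idea: the factorization costs O(n^3) once, and each subsequent right-hand side costs only the two O(n^2) triangular solves (no pivoting here, so zero pivots are assumed not to occur):

```python
def lu_factor(A):
    """Doolittle-style LU factorization without pivoting: A = L U with
    unit lower-triangular L. Returns (L, U); assumes no zero pivots arise."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m                # record the multiplier
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Solve A x = b from stored factors: forward-substitute L y = b,
    then back-substitute U x = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                 # L has unit diagonal: no division
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```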

Theorem: If Gaussian elimination can be performed on the linear system Ax = b without row interchanges, then the matrix A can be factored into the product of a lower-triangular matrix L and an upper-triangular matrix U. If L is required to be unit ("unitary") lower-triangular, then the factorization is unique.

Proof (of uniqueness): If the factorization were not unique, there would exist L_1, U_1, L_2, and U_2 such that A = L_1 U_1 = L_2 U_2. Then L_2^(−1) L_1 = U_2 U_1^(−1) (U_1 is invertible because elimination produced nonzero pivots). The left-hand side is lower-triangular with diagonal entries 1, while the right-hand side is upper-triangular; hence both equal the identity, so L_1 = L_2 and U_1 = U_2.

Note: The factorization with U unit ("unitary") upper-triangular is called Crout's factorization. It can be obtained from the LU factorization of A^t: find A^t = LU; then A = U^t L^t is the Crout factorization of A.

 Doolittle Factorization -- a compact form of LU factorization. Running full Gaussian elimination just to read off L and U involves repeated computations -- what a waste! The compact form computes each entry of L and U directly.

Fix i. Since l_ii = 1, expanding a_ij = Σ_{k=1}^{i} l_ik u_kj for j = i, i+1, …, n gives

  u_ij = a_ij − Σ_{k=1}^{i−1} l_ik u_kj.

Interchanging the roles of i and j, for j = i+1, …, n we have

  l_ji = ( a_ji − Σ_{k=1}^{i−1} l_jk u_ki ) / u_ii.

Algorithm: Doolittle Factorization
Step 1: u_1j = a_1j; l_j1 = a_j1 / u_11; ( j = 1, …, n )
Step 2: compute u_ij and l_ji by the formulas above for i = 2, …, n−1;
Step 3: u_nn = a_nn − Σ_{k=1}^{n−1} l_nk u_kn.

HW: p.397 #7
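A sketch of the compact scheme: for each i, the ith row of U and then the ith column of L are each filled in from a single inner-product formula, with no intermediate elimination tableau:

```python
def doolittle(A):
    """Compact Doolittle factorization A = L U with unit lower-triangular L:
    each entry of U (row by row) and L (column by column) comes directly
    from one inner-product formula."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):      # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U
```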

 6.6 Special Types of Matrices

 Strictly Diagonally Dominant Matrix: |a_ii| > Σ_{j ≠ i} |a_ij| for each i = 1, …, n.

Theorem: A strictly diagonally dominant matrix A is nonsingular. Moreover, Gaussian elimination can be performed without row or column interchanges, and the computations will be stable with respect to the growth of roundoff errors.

Proof sketch:
 A is nonsingular -- proof by contradiction.
 Gaussian elimination can be performed without row or column interchanges -- proof by induction: each of the matrices A^(2), A^(3), …, A^(n) generated by the elimination is again strictly diagonally dominant.
 Stability -- omitted.
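The defining inequality translates directly into a one-line check:

```python
def is_strictly_diagonally_dominant(A):
    """True iff |a_ii| > sum of |a_ij| over j != i, for every row i."""
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))
```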

 Choleski's Method for Positive Definite Matrices

Definition: A matrix A is positive definite if it is symmetric and if x^t A x > 0 for every n-dimensional vector x ≠ 0.

 Review: if A is positive definite, then
-- A^(−1) is positive definite as well, and a_ii > 0;
-- max |a_ij| ≤ max_k |a_kk|, and (a_ij)^2 < a_ii a_jj for each i ≠ j;
-- each of A's leading principal submatrices A_k has a positive determinant.

HW: Read the proofs on p.

Consider the LU factorization of a positive definite matrix A, with L unit lower-triangular: A = LU. Factor the diagonal out of U, writing U = D Ũ, where D = diag(u_11, u_22, …, u_nn) and Ũ is unit upper-triangular. Why is each u_ii > 0? Because det(A_k) = u_11 u_22 ··· u_kk > 0 for every leading principal submatrix A_k. Since A is symmetric, Ũ = L^t, and therefore:

 A can be factored in the form L D L^t, where L is a unit ("unitary") lower-triangular matrix and D is a diagonal matrix with positive diagonal entries.
 Letting D^(1/2) = diag(√u_11, …, √u_nn), the product L D^(1/2) is still a lower-triangular matrix, so A can also be factored in the form L L^t, where L is lower-triangular with nonzero diagonal entries.
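A sketch of the square-root-free variant, computing D and unit lower-triangular L directly from A = L D L^t:

```python
def ldlt(A):
    """LDL^t factorization of a symmetric positive definite matrix:
    A = L D L^t with unit lower-triangular L and positive diagonal D.
    Returns (L, d) with d the list of diagonal entries of D."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n):
        # diagonal entry: a_jj = sum_k l_jk^2 d_k  (k < j) + d_j
        d[j] = A[j][j] - sum(L[j][k] ** 2 * d[k] for k in range(j))
        for i in range(j + 1, n):
            # below-diagonal entry: a_ij = sum_k l_ik l_jk d_k + l_ij d_j
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * d[k]
                                     for k in range(j))) / d[j]
    return L, d
```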

Algorithm: Choleski's Method

To factor the symmetric positive definite n × n matrix A into L L^t, where L is lower-triangular.
Input: the dimension n; entries a_ij for 1 ≤ i, j ≤ n of A.
Output: the entries l_ij for 1 ≤ j ≤ i and 1 ≤ i ≤ n of L.

Step 1  Set l_11 = √a_11;
Step 2  For j = 2, …, n, set l_j1 = a_j1 / l_11;
Step 3  For i = 2, …, n−1, do Steps 4 and 5:
Step 4    Set l_ii = ( a_ii − Σ_{k=1}^{i−1} l_ik^2 )^(1/2);
Step 5    For j = i+1, …, n, set l_ji = ( a_ji − Σ_{k=1}^{i−1} l_jk l_ik ) / l_ii;
Step 6  Set l_nn = ( a_nn − Σ_{k=1}^{n−1} l_nk^2 )^(1/2);
Step 7  Output ( l_ij for j = 1, …, i and i = 1, …, n ); STOP.

Note: The L D L^t factorization is faster (it avoids the square roots), but the solution procedure for Ax = b must be modified accordingly.
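A compact sketch of the same computation (the double loop condenses Steps 1 through 6: the i = j branch is the square-root step, the i > j branch the division step):

```python
from math import sqrt

def cholesky(A):
    """Choleski factorization A = L L^t for a symmetric positive definite A;
    L is lower-triangular with positive diagonal entries."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = sqrt(A[i][i] - s)   # positive when A is SPD
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L
```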

 Crout Reduction for Tridiagonal Linear Systems

Step 1: Find the Crout factorization A = LU of the tridiagonal matrix A, with L lower-bidiagonal and U unit upper-bidiagonal.
Step 2: Solve Lz = b by forward substitution.
Step 3: Solve Ux = z by back substitution.

The process cannot continue if some pivot α_i (a diagonal entry of L) is 0. Hence not every tridiagonal linear system can be solved by this method.

Theorem: Suppose A is tridiagonal, with sub-diagonal entries a_i, diagonal entries b_i, and super-diagonal entries c_i, and is diagonally dominant: |b_i| ≥ |a_i| + |c_i|. Moreover, suppose |b_1| > |c_1|, |b_n| > |a_n|, and a_i ≠ 0, c_i ≠ 0. Then A is nonsingular, and the linear system can be solved by Crout reduction.

Notes:
 If A is strictly diagonally dominant, then it is not necessary to have all the entries a_i, b_i, and c_i nonzero.
 The method is stable, in the sense that all the values obtained during the process are bounded by the values of the original entries.
 The amount of computation is O(n).

HW: p.412 #17
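A sketch of the O(n) procedure (often called the Thomas algorithm), in the a_i / b_i / c_i notation above; the forward sweep is Steps 1-2 of the Crout reduction and the backward sweep is Step 3:

```python
def solve_tridiagonal(a, b, c, d):
    """Crout reduction for a tridiagonal system in O(n).
    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
    (c[n-1] unused), d: right-hand side. Returns the solution x."""
    n = len(b)
    cp = [0.0] * n                 # modified super-diagonal
    dp = [0.0] * n                 # modified right-hand side
    beta = b[0]                    # first pivot alpha_1
    if beta == 0.0:
        raise ZeroDivisionError("zero pivot: Crout reduction breaks down")
    cp[0] = c[0] / beta if n > 1 else 0.0
    dp[0] = d[0] / beta
    for i in range(1, n):
        beta = b[i] - a[i] * cp[i - 1]    # pivot alpha_i
        if beta == 0.0:
            raise ZeroDivisionError("zero pivot: Crout reduction breaks down")
        cp[i] = c[i] / beta if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / beta
    x = [0.0] * n
    x[n - 1] = dp[n - 1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```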

Lab 03. There Is No Free Lunch
Time limit: 1 second; Points: 4

One day, CYJJ found an interesting advertisement in the newspaper: the Cyber-restaurant was offering a "Lunch Special" that claimed you could "buy one, get two free". That is, if you buy one of the dishes on their menu, denoted by d_i with price p_i, you get the two neighboring dishes d_{i−1} and d_{i+1} for free! If you pick d_1, you get d_2 and the last dish d_n for free, and if you choose the last dish d_n, you get d_{n−1} and d_1 for free. However, after some investigation CYJJ realized that there was no free lunch at all: the price p_i of the ith dish was actually calculated by adding twice the cost c_i of the dish and half of the costs of the two "free" dishes. Now, given all the prices on the menu, you are asked to help CYJJ find the cost of each of the dishes.
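Reading the pricing rule as p_i = 2 c_i + (c_{i−1} + c_{i+1}) / 2 with cyclic neighbors gives a linear system whose matrix is tridiagonal except for two corner entries, so the plain Crout reduction above would need modification. A sketch (dense Gaussian elimination with partial pivoting rather than an O(n) method, so only a starting point for the lab):

```python
def dish_costs(p):
    """Recover costs c from prices p under the assumed cyclic model
    p[i] = 2*c[i] + 0.5*(c[i-1] + c[i+1]); solves the n x n system by
    Gaussian elimination with partial pivoting."""
    n = len(p)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 2.0
        A[i][(i - 1) % n] += 0.5    # += handles the small-n wraparound
        A[i][(i + 1) % n] += 0.5
    b = list(p)
    for k in range(n - 1):
        q = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[q] = A[q], A[k]
        b[k], b[q] = b[q], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    c = [0.0] * n
    for i in range(n - 1, -1, -1):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, n))) / A[i][i]
    return c
```

The matrix is strictly diagonally dominant (2 > 0.5 + 0.5), so by the theorem of Section 6.6 it is nonsingular and the elimination is stable.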