Engineering Computation, Lecture 7
E. T. S. I. Caminos, Canales y Puertos

Slide 2: Errors in Solutions to Systems of Linear Equations

Objective: solve [A]{x} = {b}.
Problem: round-off errors may accumulate and even be exaggerated by the solution procedure. Errors are often exaggerated if the system is ill-conditioned.

Possible remedies to minimize this effect:
1. Partial or complete pivoting
2. Work in double precision
3. Transform the problem into an equivalent system of linear equations by scaling or equilibrating

Slide 3: Ill-conditioning

A system of equations is singular if det[A] = 0.
If a system of equations is nearly singular, it is ill-conditioned.
Ill-conditioned systems are extremely sensitive to small changes in the coefficients of [A] and {b}, and are therefore inherently sensitive to round-off errors.

Question: can we develop a means for detecting these situations?

Slide 4: Ill-conditioning of [A]{x} = {b}

Consider the graphical interpretation for a 2-equation system. We can plot the two linear equations on a graph of x_1 vs. x_2:

a_11 x_1 + a_12 x_2 = b_1
a_21 x_1 + a_22 x_2 = b_2

[Figure: the two lines on x_1-x_2 axes, with intercepts b_1/a_11, b_1/a_12, b_2/a_21, b_2/a_22; the solution is the intersection of the lines.]

Slide 5: Ill-conditioning of [A]{x} = {b} (continued)

[Figure: two panels on x_1-x_2 axes. Well-conditioned: the lines intersect at a wide angle, and the uncertainty in x_2 is small. Ill-conditioned: the lines are nearly parallel, and the uncertainty in x_2 is large.]
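To make slide 5 concrete, here is a minimal NumPy sketch (the 2x2 matrices are made-up examples, not from the lecture): the same small perturbation of {b} barely moves the solution of the well-conditioned system, but moves the nearly singular one by order one.

```python
import numpy as np

A_good = np.array([[1.0, 1.0],
                   [1.0, -1.0]])    # lines cross at a wide angle
A_bad = np.array([[1.0, 1.0],
                  [1.0, 1.0001]])   # lines nearly parallel
b = np.array([2.0, 2.0])
db = np.array([0.0, 1e-4])          # small uncertainty in b_2

for A in (A_good, A_bad):
    x = np.linalg.solve(A, b)
    dx = np.linalg.solve(A, b + db) - x
    print(f"cond = {np.linalg.cond(A):9.2e}   max|dx| = {np.abs(dx).max():.1e}")
# the ill-conditioned system turns a 1e-4 change in b into a change of order 1 in x
```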

Slide 6: Ways to detect ill-conditioning

1. Calculate {x}, make a small change in [A] or {b}, and determine the change in the solution {x}.
2. After forward elimination, examine the diagonal of the upper triangular matrix. If a_ii << a_jj, i.e. there is a relatively small value on the diagonal, this may indicate ill-conditioning.
3. Compare {x} computed in single precision with {x} computed in double precision.
4. Estimate the "condition number" of A.

Substituting the calculated {x} into [A]{x} and checking this against {b} will not always work!

Slide 7: Ways to detect ill-conditioning (continued)

If det[A] = 0, the matrix is singular ==> the determinant may be an indicator of conditioning.
But if det[A] is near zero, is the matrix necessarily ill-conditioned? Not by itself: scaling the equations changes the determinant without changing the conditioning of the system. [The slide shows an example matrix before and after scaling.]

==> det[A] will provide an estimate of conditioning only if it is normalized by the "magnitude" of the matrix.
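A quick NumPy illustration of why the raw determinant is a poor conditioning indicator (made-up matrix, not from the lecture): scaling the identity matrix drives det[A] toward zero while the conditioning stays perfect.

```python
import numpy as np

A = np.eye(3)                 # det = 1,     cond = 1
A_scaled = 1e-5 * np.eye(3)   # det = 1e-15, cond still = 1

print(np.linalg.det(A), np.linalg.cond(A))
print(np.linalg.det(A_scaled), np.linalg.cond(A_scaled))
# a tiny determinant by itself does not imply ill-conditioning
```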

Slide 8: Norms and the Condition Number

We need a quantitative measure of ill-conditioning. This measure will then directly reflect the possible magnitude of round-off effects. To do this we need to understand norms.

Norm: a scalar measure of the magnitude of a matrix or vector ("how big" it is). Not to be confused with the dimension of a matrix.

Slide 9: Vector Norms

A vector norm is a scalar measure of the magnitude of a vector. For an n x 1 vector {x} with typical element x_i, each of the norms below is a special case of the general p-norm:

||x||_p = ( sum over i of |x_i|^p )^(1/p)

1. Sum of the magnitudes (p = 1): ||x||_1 = sum over i of |x_i|
2. Magnitude of the largest element (p -> infinity; the infinity norm): ||x||_inf = max over i of |x_i|
3. Length or Euclidean norm (p = 2): ||x||_2 = ( sum over i of x_i^2 )^(1/2)
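These three vector norms can be computed directly; a minimal NumPy sketch with a made-up vector:

```python
import numpy as np

x = np.array([3.0, -4.0, 12.0])

norm_1 = np.sum(np.abs(x))         # sum of magnitudes: 19
norm_inf = np.max(np.abs(x))       # largest magnitude: 12
norm_2 = np.sqrt(np.sum(x**2))     # Euclidean length:  13

# NumPy's built-in p-norms agree
assert np.isclose(norm_1, np.linalg.norm(x, 1))
assert np.isclose(norm_inf, np.linalg.norm(x, np.inf))
assert np.isclose(norm_2, np.linalg.norm(x, 2))
```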

Slide 10: Vector Norms (continued)

Required properties of a vector norm:
1. ||x|| >= 0, and ||x|| = 0 if and only if {x} = 0
2. ||k x|| = |k| ||x|| for any scalar k
3. ||x + y|| <= ||x|| + ||y|| (triangle inequality)

For the Euclidean vector norm we also have
4. |x . y| <= ||x|| ||y||, because the dot (inner) product satisfies |x . y| = ||x|| ||y|| |cos(theta)| <= ||x|| ||y||.

Slide 11: Matrix Norms

A matrix norm is a scalar measure of the magnitude of a matrix. Matrix norms corresponding to the vector norms above are defined by the general relationship

||A||_p = max over {x} != 0 of ( ||A x||_p / ||x||_p )

1. Largest column sum (column-sum norm): ||A||_1 = max over j of ( sum over i of |a_ij| )
2. Largest row sum (row-sum norm; infinity norm): ||A||_inf = max over i of ( sum over j of |a_ij| )

Slide 12: Matrix Norms (continued)

3. Spectral norm: ||A||_2 = (mu_max)^(1/2), where mu_max is the largest eigenvalue of [A]^T [A].
If [A] is symmetric, (mu_max)^(1/2) = |lambda_max|, the magnitude of the largest eigenvalue of [A].

(Note: the spectral norm is not the same as the Euclidean or Frobenius norm, ||A||_F = ( sum over i,j of a_ij^2 )^(1/2), which is seldom used.)
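The matrix norms of slides 11-12, computed on a made-up 2x2 matrix (a minimal NumPy sketch):

```python
import numpy as np

A = np.array([[1.0, -7.0],
              [2.0,  3.0]])

col_sum = np.max(np.sum(np.abs(A), axis=0))    # ||A||_1   = 10 (column sums 3, 10)
row_sum = np.max(np.sum(np.abs(A), axis=1))    # ||A||_inf = 8  (row sums 8, 5)
mu_max = np.max(np.linalg.eigvalsh(A.T @ A))   # largest eigenvalue of A^T A
spectral = np.sqrt(mu_max)                     # ||A||_2
frobenius = np.sqrt(np.sum(A**2))              # Frobenius norm, for comparison

assert np.isclose(spectral, np.linalg.norm(A, 2))
print(col_sum, row_sum, spectral, frobenius)
```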

Slide 13: Matrix Norms (continued)

For matrix norms to be useful we require consistency with the vector norm:
0. ||A x|| <= ||A|| ||x||

General properties of any matrix norm:
1. ||A|| >= 0, and ||A|| = 0 if and only if [A] = 0
2. ||k A|| = |k| ||A|| for any scalar k
3. ||A + B|| <= ||A|| + ||B|| ("triangle inequality")
4. ||A B|| <= ||A|| ||B||

Why are norms important?
- Norms permit us to express the accuracy of the solution {x} in terms of ||x||.
- Norms allow us to bound the magnitude of the product [A]{x} and the associated errors.

Slide 14: Error Analysis

Forward and backward error analysis can estimate the effect of truncation and round-off errors on the precision of a result. The two approaches are alternative views:

1. Forward (a priori) error analysis tries to trace the accumulation of error through each step of the algorithm, comparing the calculated and exact values at every stage.
2. Backward (a posteriori) error analysis views the final solution as the exact solution to a perturbed problem, and asks how different the perturbed problem is from the original problem.

Here we use the condition number of a matrix [A] to specify the amount by which relative errors in [A] and/or {b} due to input, truncation, and rounding can be amplified by the linear system in the computation of {x}.

Slide 15: Backward Error Analysis of [A]{x} = {b} for errors in {b}

Suppose the coefficients {b} are not precisely represented. What might be the effect on the calculated value {x + dx}?

Lemma: [A]{x} = {b} yields ||A|| ||x|| >= ||b||, or ||x|| >= ||b|| / ||A||.

Now an error in {b} yields a corresponding error in {x}:
[A]{x + dx} = {b + db}
[A]{x} + [A]{dx} = {b} + {db}
Subtracting [A]{x} = {b} yields:
[A]{dx} = {db} ==> {dx} = [A]^-1 {db}

Slide 16: Backward Error Analysis for errors in {b} (continued)

Taking norms of {dx} = [A]^-1 {db} we have:
||dx|| <= ||A^-1|| ||db||

And using the lemma ||x|| >= ||b|| / ||A||, we then have:
||dx|| / ||x|| <= ||A^-1|| ||A|| ( ||db|| / ||b|| )

Define the condition number as k = cond[A] = ||A^-1|| ||A|| >= 1.
If k ~ 1, the system is well-conditioned; if k >> 1, the system is ill-conditioned.
Note that k can never be less than 1: 1 = ||I|| = ||A^-1 A|| <= ||A^-1|| ||A|| = k = cond(A).
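The bound on slide 16 can be checked numerically; a minimal NumPy sketch with a made-up, mildly ill-conditioned system (2-norms throughout):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 1.01]])        # nearly singular: det = 0.04
b = np.array([6.0, 3.01])

k = np.linalg.cond(A)              # cond(A) = ||A^-1|| ||A||
x = np.linalg.solve(A, b)

db = 1e-6 * np.array([1.0, -1.0])  # perturbation of {b}
dx = np.linalg.solve(A, b + db) - x

lhs = np.linalg.norm(dx) / np.linalg.norm(x)
rhs = k * np.linalg.norm(db) / np.linalg.norm(b)
print(lhs <= rhs)                  # True: ||dx||/||x|| <= cond(A) ||db||/||b||
```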

Slide 17: Backward Error Analysis of [A]{x} = {b} for errors in [A]

If the coefficients in [A] are not precisely represented, what might be the effect on the calculated value {x + dx}?

[A + dA]{x + dx} = {b}
[A]{x} + [A]{dx} + [dA]{x + dx} = {b}
Subtracting [A]{x} = {b} yields:
[A]{dx} = -[dA]{x + dx} or {dx} = -[A]^-1 [dA] {x + dx}

Taking norms and multiplying by ||A|| / ||A|| yields:
||dx|| / ||x + dx|| <= ||A^-1|| ||A|| ( ||dA|| / ||A|| ) = cond[A] ( ||dA|| / ||A|| )
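The same experiment for a perturbation of [A], again a minimal sketch with made-up data:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 1.01]])
b = np.array([6.0, 3.01])
x = np.linalg.solve(A, b)

dA = 1e-8 * np.random.default_rng(0).standard_normal((2, 2))
x_dx = np.linalg.solve(A + dA, b)  # this is {x + dx}
dx = x_dx - x

lhs = np.linalg.norm(dx) / np.linalg.norm(x_dx)
rhs = np.linalg.cond(A) * np.linalg.norm(dA, 2) / np.linalg.norm(A, 2)
print(lhs <= rhs)                  # True: ||dx||/||x+dx|| <= cond(A) ||dA||/||A||
```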

Slide 18: Estimate of Loss of Significance

Consider the possible impact of errors [dA] on the precision of {x}. The error analysis implies that if

||dA|| / ||A|| ~ 10^-p, then ||dx|| / ||x + dx|| <= k 10^-p = 10^-(p - log10(k)) = 10^-s

Or, taking log10 of both sides:

s >= p - log10(k)

log10(k) is the loss in decimal precision; i.e., we start with p decimal figures and end up with s decimal figures.

It is not always necessary to find [A]^-1 to estimate k = cond[A]. Instead, use an estimate based upon iteration of the inverse matrix using LU decomposition.
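A worked example of s ≈ p - log10(k), using the notoriously ill-conditioned Hilbert matrix (my own choice of test problem, not from the lecture):

```python
import numpy as np

n = 8
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert matrix
k = np.linalg.cond(H)

p = 16                     # roughly 16 decimal digits in IEEE double precision
s = p - np.log10(k)        # digits we can still trust in the computed solution
print(f"cond(H) = {k:.1e}, expect roughly {s:.0f} correct digits")

x_exact = np.ones(n)       # known solution
b = H @ x_exact
x = np.linalg.solve(H, b)
print(np.abs(x - x_exact).max())   # error is roughly 10**(-s)
```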

Slide 19: Iterative Solution Methods

Impetus for iterative schemes:
1. May be more rapid if the coefficient matrix is "sparse"
2. May be more economical with respect to memory
3. May also be applied to solve nonlinear systems

Disadvantages:
1. May not converge, or may converge slowly
2. Not appropriate for all systems

Error bounds apply to solutions obtained by both direct and iterative methods because they address the specification of [dA] and {db}.

Slide 20: Iterative Solution Methods – Basic Mechanics

Starting with:

a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 + ... + a_2n x_n = b_2
a_31 x_1 + a_32 x_2 + a_33 x_3 + ... + a_3n x_n = b_3
...
a_n1 x_1 + a_n2 x_2 + a_n3 x_3 + ... + a_nn x_n = b_n

solve each equation for one variable:

x_1 = [b_1 - (a_12 x_2 + a_13 x_3 + ... + a_1n x_n)] / a_11
x_2 = [b_2 - (a_21 x_1 + a_23 x_3 + ... + a_2n x_n)] / a_22
x_3 = [b_3 - (a_31 x_1 + a_32 x_2 + ... + a_3n x_n)] / a_33
...
x_n = [b_n - (a_n1 x_1 + a_n2 x_2 + ... + a_n,n-1 x_n-1)] / a_nn

Slide 21: Iterative Solution Methods (continued)

- Start with an initial estimate {x}^0.
- Substitute it into the right-hand side of all the equations.
- Generate a new approximation {x}^1.
- This is a multivariate one-point iteration: {x}^(j+1) = {g({x}^j)}
- Repeat the process until the maximum number of iterations is reached, or until:
  ||x^(j+1) - x^j|| <= d + e ||x^(j+1)||
  (d and e are absolute and relative tolerances)
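Putting slides 20 and 21 together, a minimal NumPy sketch of the element-wise (Jacobi) iteration; the function name, tolerance handling, and test system are my own:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Solve each equation for 'its' variable using the previous iterate."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            s = A[i, :] @ x - A[i, i] * x[i]      # sum of a_ik x_k, k != i
            x_new[i] = (b[i] - s) / A[i, i]
        # stopping rule ||x^(j+1) - x^j|| <= d + e ||x^(j+1)|| with d = e = tol
        if np.linalg.norm(x_new - x) <= tol * (1.0 + np.linalg.norm(x_new)):
            return x_new
        x = x_new
    return x

A = np.array([[10.0, -1.0, 2.0],      # diagonally dominant, so Jacobi converges
              [-1.0, 11.0, -1.0],
              [2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])
print(jacobi(A, b))                   # agrees with np.linalg.solve(A, b)
```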

Slide 22: Convergence

To solve [A]{x} = {b}, separate [A] into: [A] = [Lo] + [D] + [Uo]
[D] = diagonal (a_ii)
[Lo] = lower triangular with 0's on the diagonal
[Uo] = upper triangular with 0's on the diagonal

Rewrite the system:
[A]{x} = ([Lo] + [D] + [Uo]){x} = {b}
[D]{x} + ([Lo] + [Uo]){x} = {b}

Iterate:
[D]{x}^(j+1) = {b} - ([Lo] + [Uo]){x}^j
{x}^(j+1) = [D]^-1 {b} - [D]^-1 ([Lo] + [Uo]){x}^j

The iterations converge if ||[D]^-1 ([Lo] + [Uo])|| < 1 (this sufficient condition holds if the equations are diagonally dominant).
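The same iteration in the matrix-splitting form of slide 22, including the sufficient convergence check (made-up test matrix as before):

```python
import numpy as np

A = np.array([[10.0, -1.0, 2.0],
              [-1.0, 11.0, -1.0],
              [2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])

D = np.diag(np.diag(A))
LU = A - D                            # [Lo] + [Uo]

G = np.linalg.solve(D, LU)            # [D]^-1 ([Lo] + [Uo])
print(np.linalg.norm(G, np.inf))      # 0.3 < 1, so the iteration converges

x = np.zeros_like(b)
for _ in range(100):
    x = np.linalg.solve(D, b - LU @ x)   # [D]{x}^(j+1) = {b} - ([Lo]+[Uo]){x}^j
print(x)
```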

Slide 23: Iterative Solution Methods – the Jacobi Method

[The slide shows the Jacobi update formulas; componentwise, x_i^(j+1) = ( b_i - sum over k != i of a_ik x_k^j ) / a_ii.]

Slide 24: Iterative Solution Methods – Gauss-Seidel

In most cases, using the newest values on the right-hand side of the equations will provide better estimates of the next value. If this is done, we are using the Gauss-Seidel method:

([Lo] + [D]){x}^(j+1) = {b} - [Uo]{x}^j

or explicitly:

x_i^(j+1) = ( b_i - sum over k < i of a_ik x_k^(j+1) - sum over k > i of a_ik x_k^j ) / a_ii
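A minimal Gauss-Seidel sketch; it differs from the Jacobi code above only in updating x in place, so the newest values are used immediately (helper name and defaults are my own):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # newest values x_k^(j+1) for k < i, old values for k > i
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) <= tol * (1.0 + np.linalg.norm(x)):
            break
    return x
```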

Slide 25: Iterative Solution Methods – Gauss-Seidel (continued)

If either method is going to converge, Gauss-Seidel will converge faster than Jacobi. Why use Jacobi at all? Because the n equations can be separated into n independent tasks, Jacobi is very well suited to computers with parallel processors.

Slide 26: Convergence of Iterative Solution Methods

Rewrite the given system: [A]{x} = {[B] + [E]}{x} = {b}
where [B] is diagonal or triangular, so we can solve [B]{y} = {g} quickly.

Thus, [B]{x}^(j+1) = {b} - [E]{x}^j
which is effectively: {x}^(j+1) = [B]^-1 ({b} - [E]{x}^j)

The true solution {x}_c satisfies: {x}_c = [B]^-1 ({b} - [E]{x}_c)
Subtracting yields: {x}_c - {x}^(j+1) = -[B]^-1 [E] ({x}_c - {x}^j)
So ||{x}_c - {x}^(j+1)|| <= ||[B]^-1 [E]|| ||{x}_c - {x}^j||

Iterations converge linearly if ||[B]^-1 [E]|| < 1:
For Jacobi: ||[D]^-1 ([Lo] + [Uo])|| < 1
For Gauss-Seidel: ||([D] + [Lo])^-1 [Uo]|| < 1
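The two convergence conditions can be compared directly by forming the iteration matrices (same made-up test matrix; the spectral radius, i.e. the largest eigenvalue magnitude, governs the asymptotic rate):

```python
import numpy as np

A = np.array([[10.0, -1.0, 2.0],
              [-1.0, 11.0, -1.0],
              [2.0, -1.0, 10.0]])
D = np.diag(np.diag(A))
Lo = np.tril(A, -1)
Uo = np.triu(A, 1)

G_jacobi = np.linalg.solve(D, Lo + Uo)   # [D]^-1 ([Lo] + [Uo])
G_gs = np.linalg.solve(D + Lo, Uo)       # ([D] + [Lo])^-1 [Uo]

rho = lambda G: np.max(np.abs(np.linalg.eigvals(G)))
print(rho(G_jacobi), rho(G_gs))          # both < 1; Gauss-Seidel's is smaller here,
                                         # consistent with its faster convergence
```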

Slide 27: Convergence of Iterative Solution Methods (continued)

Iterative methods will not converge for all systems of equations, nor for all possible rearrangements. If the system is diagonally dominant, i.e.,

|a_ii| > sum over j != i of |a_ij|

then the ratios |a_ij| / |a_ii| are all < 1.0, i.e., the lines have small slopes, and the iteration converges.

Slide 28: Convergence of Iterative Solution Methods (continued)

A sufficient condition for convergence exists:

sum over j != i of ( |a_ij| / |a_ii| ) < 1 for each row i

Notes:
1. Even if the above does not hold, the iteration may still converge.
2. This looks similar to the infinity norm of [A].
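The sufficient condition is easy to test in code; a minimal sketch (hypothetical helper name):

```python
import numpy as np

def is_diagonally_dominant(A):
    """Check |a_ii| > sum over j != i of |a_ij| for every row."""
    d = np.abs(np.diag(A))
    off = np.sum(np.abs(A), axis=1) - d
    return bool(np.all(d > off))

A = np.array([[10.0, -1.0, 2.0],
              [-1.0, 11.0, -1.0],
              [2.0, -1.0, 10.0]])
print(is_diagonally_dominant(A))   # True: Jacobi and Gauss-Seidel both converge
```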

Slide 29: Improving the Rate of Convergence of G-S Iteration

Relaxation schemes:

x_i^(j+1) = lambda x_i^(new) + (1 - lambda) x_i^(j), where 0.0 < lambda < 2.0

(usually the value of lambda is close to 1).

Underrelaxation (0.0 < lambda < 1.0): more weight is placed on the previous value. Often used to make a non-convergent system convergent, or to expedite convergence by damping out oscillations.

Overrelaxation (1.0 < lambda < 2.0): more weight is placed on the new value. This assumes the new value is heading in the right direction, and hence pushes it closer to the true solution.

The choice of lambda is highly problem-dependent and empirical, so relaxation is usually used only for often-repeated calculations of a particular class.
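A relaxed Gauss-Seidel (SOR) sketch: compute the plain Gauss-Seidel value, then blend it with the previous iterate using lambda (helper name and default lambda are my own):

```python
import numpy as np

def sor(A, b, lam=1.25, tol=1e-10, max_iter=500):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x_gs = (b[i] - s) / A[i, i]                 # plain Gauss-Seidel value
            x[i] = lam * x_gs + (1.0 - lam) * x_old[i]  # relaxed update
        if np.linalg.norm(x - x_old) <= tol * (1.0 + np.linalg.norm(x)):
            break
    return x
# lam < 1 damps oscillations (underrelaxation); lam > 1 pushes harder toward
# the solution (overrelaxation); lam = 1 recovers plain Gauss-Seidel.
```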

Slide 30: Why Iterative Solutions?

We often need to solve [A]{x} = {b} where n is in the 1000's:
- description of a building or airframe,
- finite-difference approximations to PDEs.

Most of A's elements will be zero; a finite-difference approximation to Laplace's equation will have only five a_ij != 0 in each row of A.

Direct method (Gaussian elimination):
- requires n^3/3 flops (say n = 5000; n^3/3 ≈ 4 x 10^10 flops)
- fills in many of the n^2 - 5n zero elements of A

Iterative methods (Jacobi or Gauss-Seidel):
- never store [A] (say n = 5000; [A] would need 4n^2 bytes = 100 MB)
- only need to compute [A - B]{x} and to solve [B]{x}^(j+1) = {b}

Slide 31: Why Iterative Solutions? (continued)

Effort: suppose [B] is diagonal. Then, per iteration:
- solving [B]{v} = {b}: n flops
- computing [A - B]{x}: 4n flops
For m iterations: 5mn flops.
For n = m = 5000: 5mn = 1.25 x 10^8 flops, at worst O(n^2).