Solving Scalar Linear Systems: Iterative Approach. Lecture 15, MA/CS 471, Fall 2003

Some Matlab scripts to construct various types of random circuit loop matrices are available at the class website:

The Sparsity Pattern of a Loop Circuit Matrix for a Random Circuit (with 1000 closed loops)

gridcircuit.m: an array of current loops with random resistors (resistors on the circuit boundary not shown)

Matrix Due To Random Grid Circuit. Note the large amount of structure in the loop circuit matrix.

The Limits of Factorization. In the last class/lab we began to see that there are limits to the size of linear system solvable with matrix-factorization-based methods. The storage cost for the loop current matrix built on a Cartesian circuit, stored as a sparse NxN matrix, is ~5N. However, using LU (or Cholesky) with symmetric RCM reordering, the storage requirement is b*N, where b is the bandwidth of the reordered matrix; this is typically at least an order of magnitude larger than the storage required for the loop matrix itself. Also, memory spent on storing the factored matrices is memory we could have used for extra cells.

Alternative Approach. We are going to pursue iterative methods, which satisfy the equations approximately without an excessive amount of extra storage. There are a number of different classes of iterative methods; today we will discuss an example from the class of stationary methods, i.e. iterations of the fixed form x^(k+1) = G x^(k) + c, where G and c stay the same from iteration to iteration.

Jacobi Iteration. Example system: Ax = b. Initial guess: x^(0) = 0. Algorithm: x_i^(k+1) = ( b_i - sum_{j != i} a_ij x_j^(k) ) / a_ii, i.e. for the i-th equation, compute the i-th degree of freedom using the values computed in the previous iteration.

Cleaning The Scheme Up. Split A = D + Q, where D is the diagonal of A and Q is A with its diagonal zeroed. The whole update can then be written as x^(k+1) = inv(D) * ( b - Q x^(k) ).

A Couple of Iterations. 1st iteration: ... 2nd iteration: ...
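The example system worked on the original slide is not reproduced in this transcript. As an illustration only, here are the first two Jacobi iterations for a small hypothetical system (the 3-by-3 matrix below is not the one from class):

    A = [ 4 -1  0;
         -1  4 -1;
          0 -1  4];              % hypothetical, strictly diagonally dominant
    b = [3; 2; 3];               % exact solution is x = [1; 1; 1]
    d = diag(A);  Q = A - diag(d);
    x0 = zeros(3, 1);
    x1 = (b - Q*x0)./d;          % 1st iteration: [0.750; 0.500; 0.750]
    x2 = (b - Q*x1)./d;          % 2nd iteration: [0.875; 0.875; 0.875]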

Pseudo-Code For Jacobi Method
1) Build A and b
2) Build modified A with zeroed diagonal -> Q (and keep the diagonal -> D)
3) Set initial guess x = 0
4) do {
     a) compute: xnew = inv(D) * ( b - Q x )
     b) compute error: err = || xnew - x ||
     c) update x: x = xnew
   } while err > tol

Matlab To The Rescue
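The script itself appears as an image on the original slide and is not reproduced here. A minimal sketch of a Jacobi solver following the pseudo-code above, using the [x,residuals] = jacobisolve(A,b,tol) interface described on the next slides, might look like this (the body is a reconstruction, not the original class script):

    function [x, residuals] = jacobisolve(A, b, tol)
    % Jacobi iteration for Ax = b: returns the approximate solution and
    % the history of the iteration errors (sketch, not the class script).
      d = diag(A);                 % diagonal of A
      Q = A - diag(d);             % A with its diagonal zeroed
      x = zeros(size(b));          % initial guess x = 0
      residuals = [];
      err = 2*tol;
      while err > tol
        xnew = (b - Q*x)./d;       % x_i <- (b_i - sum_{j~=i} a_ij x_j)/a_ii
        err  = norm(xnew - x);     % change between successive iterates
        residuals(end+1) = err;
        x = xnew;
      end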

Running The Jacobi Iteration Script. I ran the script with the stopping tolerance (tol) set to 1e-8. Note that the error is of the order of 1e-8, i.e. the Jacobi iterative method computes an approximate solution to the system of equations!

Try The Larger Systems First. I made the script into a function which takes the right-hand-side vector, the system matrix, and the stopping tolerance as input. It returns the approximate solution and the residual history.

Run Time! First I built a circuit matrix using the gridcircuit.m script (with N=100). Then I built a random source vector for the right-hand side. Then I called the jacobisolve routine with: [x,residuals] = jacobisolve(mat,b,1e-4);
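Put together, the experiment is a few lines of Matlab. A sketch (the call gridcircuit(100) is an assumed interface for the class script, and the random right-hand side is illustrative):

    mat = gridcircuit(100);             % assumed interface for the class script
    b   = rand(size(mat, 1), 1);        % random source vector
    [x, residuals] = jacobisolve(mat, b, 1e-4);
    semilogy(residuals)                 % convergence history (next slide)
    xlabel('iteration'); ylabel('error');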

Convergence History (the stopping criterion is marked on the plot).

Increasing System Size. Notice how the number of iterations required grew with N.

Ok, so I kind of broke the rules. We set up the Jacobi iteration but did not ask the question "when will the Jacobi iteration converge, and how fast?" Definition: a matrix A is (strictly) diagonally dominant if |a_ii| > sum_{j != i} |a_ij| for every row i. Theorem: if the matrix A is strictly diagonally dominant then Ax = b has a unique solution x, and the Jacobi iteration produces a sequence which converges to x for any initial guess. Informally: the "more diagonally dominant" a matrix is, the faster it will converge... this holds some of the time.
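A quick Matlab check of strict diagonal dominance (a one-liner sketch, assuming the matrix A is in the workspace):

    % strict diagonal dominance: |a_ii| > sum_{j ~= i} |a_ij| in every row,
    % which is the same as 2*|a_ii| > sum over all j of |a_ij|
    isdd = all( 2*abs(diag(A)) > sum(abs(A), 2) );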

Going Back To The Circuit System. For the gridcircuit code I set up the resistors so that all the internal circuit loops share all their resistors with other loops. The current balance law then implies that for internal cells the total cell resistance equals the sum of the resistances shared with neighboring cells, i.e. the row sums for the internal cells' equations are zero. In other words the matrix is only weakly diagonally dominant, and does not strictly satisfy the convergence criterion for the Jacobi iterative scheme.

Slight Modification of the Circuit. I added an additional random resistor to each cell (i.e. increased the diagonal entry and did not change the off-diagonal entries). This modification ensures that the matrix is now strictly diagonally dominant.

Convergence history for the diagonally dominant system: notice the dramatic reduction in iteration count.

Gauss-Seidel. Example system: Ax = b. Initial guess: x^(0) = 0. Algorithm: x_i^(k+1) = ( b_i - sum_{j < i} a_ij x_j^(k+1) - sum_{j > i} a_ij x_j^(k) ) / a_ii, i.e. for the i-th equation compute the i-th degree of freedom using the values computed in the previous iteration and the new values just computed.

Cleaning The Scheme Up. Split A = L + D + U, where D is the diagonal of A, L its strictly lower triangular part, and U its strictly upper triangular part. The update can then be written as (D + L) x^(k+1) = b - U x^(k).

First Iteration. 1st iteration: as soon as the new (level 1) values are computed, we use them in the next equations.

Theorem First This Time! So we should first ask the questions: 1) when will the Gauss-Seidel iteration converge? 2) how fast will it converge? Definition: a matrix A is said to be positive definite if x^T A x > 0 for every nonzero vector x. Theorem: if A is symmetric and positive definite, then the Gauss-Seidel iteration converges for any initial guess for x. Unofficially: in some cases Gauss-Seidel will converge twice as fast as Jacobi.

Gauss-Seidel Algorithm. We iterate: x^(k+1) = inv(D + L) * ( b - U x^(k) ), sweeping through the equations in order and using each new value as soon as it is available.
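A minimal Matlab sketch of this sweep, with the same interface as jacobisolve (the function name gssolve and the loop body are illustrative, not the class code):

    function [x, residuals] = gssolve(A, b, tol)
    % Gauss-Seidel iteration for Ax = b (sketch).
      n = length(b);
      x = zeros(n, 1);
      residuals = [];
      err = 2*tol;
      while err > tol
        xold = x;
        for i = 1:n
          % new values x(1:i-1) are used immediately; x(i+1:n) are old
          x(i) = ( b(i) - A(i,1:i-1)*x(1:i-1) - A(i,i+1:n)*x(i+1:n) ) / A(i,i);
        end
        err = norm(x - xold);
        residuals(end+1) = err;
      end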

Comparing Jacobi and Gauss-Seidel. Same problem: Gauss-Seidel takes almost half the work.

The Catch. Ok, so it looks like one would always use Gauss-Seidel rather than Jacobi iteration. However, let us consider the parallel implementation: Jacobi updates every cell from old values only, so all the updates are independent, while a Gauss-Seidel sweep uses each new value as soon as it is computed, which serializes the updates.

Volunteer To Design A Parallel Version
1) Decide which cells go where.
2) Decide how much information each process needs to keep locally.
3) Decide what information needs to be communicated among processes.
4) Are there any intrinsic bottlenecks in Gauss-Seidel or Jacobi?
5) Can we devise a hybrid version of GS which avoids the bottlenecks? (One standard answer is sketched below.)
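For question 5, a common device for grid problems like this circuit is red-black (checkerboard) ordering: colour the cells so that no two neighbouring cells share a colour; then all cells of one colour can be updated simultaneously, followed by all cells of the other colour. A minimal Matlab sketch of one such sweep, assuming each loop couples only to its four grid neighbours (the colouring code and variable names are illustrative):

    % One red-black Gauss-Seidel sweep on an N-by-N grid of loop currents;
    % x is the current iterate (N^2-by-1, cells numbered column by column).
    [I, J] = ndgrid(1:N, 1:N);
    red    = find(mod(I + J, 2) == 0);    % checkerboard colouring
    black  = find(mod(I + J, 2) == 1);
    d = diag(A);  Q = A - diag(d);
    x(red)   = ( b(red)   - Q(red,:)   * x ) ./ d(red);    % all red cells at once
    x(black) = ( b(black) - Q(black,:) * x ) ./ d(black);  % then all black cells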

Project 3 (serial part). In C:
1) Build a sparse matrix based on one of the random circuits (design the storage for it yourself, or use someone else's sparse storage structure or class).
2) Write a sparse matrix times vector routine (a Matlab reference to check it against is sketched below).
3) Implement the Jacobi iterative scheme.
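One common storage choice is compressed row storage (CRS); the project leaves the format up to you, so the layout below is only one possibility. A Matlab reference implementation of the matrix-vector product in that format, useful for checking the C version against, might look like this:

    function y = crs_matvec(val, colind, rowptr, x)
    % Sparse matrix times vector in compressed row storage (CRS):
    %   val    - the nonzero entries, stored row by row
    %   colind - the column index of each entry of val
    %   rowptr - the entries of row i live in val(rowptr(i) : rowptr(i+1)-1)
      n = length(rowptr) - 1;
      y = zeros(n, 1);
      for i = 1:n
        for k = rowptr(i) : rowptr(i+1)-1
          y(i) = y(i) + val(k) * x(colind(k));
        end
      end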