1
Solving Scalar Linear Systems: Iterative Approach. Lecture 15, MA/CS 471, Fall 2003
2
Some MATLAB scripts to construct various types of random circuit loop matrices are available at the class website:
3
The Sparsity Pattern of a Loop Circuit Matrix for a Random Circuit (with 1000 closed loops)
4
gridcircuit.m: an array of current loops with random resistors (resistors on the circuit boundary not shown)
5
Matrix Due To Random Grid Circuit. Note the large amount of structure in the loop circuit matrix.
6
The Limits of Factorization. In the last class/lab we began to see that there are limits to the size of linear system solvable with matrix-factorization-based methods. The storage cost for the loop current matrix built on a Cartesian circuit, stored as a sparse NxN matrix, is about 5*N entries. However, using LU (or Cholesky) factorization with symmetric RCM reordering, the storage requirement grows to roughly b*N (with b the bandwidth after reordering), which is typically at least an order of magnitude larger than the storage required for the loop matrix itself. Also, memory spent on storing the factors is memory we could have used for extra cells.
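To make the fill-in concrete, here is a minimal MATLAB sketch (not from the lecture) comparing the storage of a sparse matrix with the storage of its Cholesky factor after symmetric RCM reordering. The matrix is a stand-in 2D Poisson matrix with roughly five nonzeros per row, standing in for the gridcircuit.m loop matrix:

    % Stand-in sparse SPD matrix with roughly 5 nonzeros per row; the loop
    % matrix produced by gridcircuit.m would be used in the same way.
    A = gallery('poisson', 100);   % 10000 x 10000 sparse matrix

    p = symrcm(A);                 % symmetric reverse Cuthill-McKee reordering
    R = chol(A(p, p));             % Cholesky factor of the reordered matrix

    fprintf('nnz(A) = %d, nnz(R) = %d, fill ratio = %.1f\n', ...
            nnz(A), nnz(R), nnz(R) / nnz(A));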
7
Alternative Approach. We are going to pursue iterative methods, which satisfy the equation approximately without an excessive amount of extra storage. There are a number of different classes of iterative methods; today we will discuss an example from the class of stationary methods. http://www.netlib.org/linalg/html_templates/node9.html
8
Jacobi Iteration. Example system, initial guess, and algorithm: for the i-th equation, compute the i-th degree of freedom using the values computed in the previous iteration.
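The formulas on this slide are not reproduced in the transcript; for a general N x N system Ax = b, the standard Jacobi update being described is

    x_i^{(k+1)} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j \ne i} a_{ij} x_j^{(k)} \Big),
    \qquad i = 1, \dots, N.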
9
Cleaning The Scheme Up
10
Couple of Iterations. 1st iteration: 2nd iteration:
11
Pseudo-Code For Jacobi Method
1) Build A, b
2) Build Q, a copy of A with its diagonal zeroed (keep D, the diagonal of A)
3) Set initial guess x = 0
4) do {
     a) compute xnew = D^{-1} (b - Q x)
     b) compute error err = ||xnew - x||
     c) update x = xnew
   } while err > tol
12
Matlab To The Rescue
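The script itself is not reproduced in the transcript. A minimal MATLAB sketch along the lines of the pseudo-code above, using a small stand-in system rather than the slide's example, might look like:

    % Stand-in 3x3 diagonally dominant system; the slide's own example
    % system is not reproduced in the transcript.
    A   = [4 -1 0; -1 4 -1; 0 -1 4];
    b   = [1; 2; 3];
    tol = 1e-8;

    d = diag(A);          % diagonal entries of A
    Q = A - diag(d);      % A with its diagonal zeroed
    x = zeros(size(b));   % initial guess

    err = inf;
    while err > tol
        xnew = (b - Q*x) ./ d;   % Jacobi update
        err  = norm(xnew - x);   % change between successive iterates
        x    = xnew;
    end
    disp(x)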
13
Running The Jacobi Iteration Script. I ran the script with the stopping tolerance (tol) set to 1e-8. Note that the error is of the order of 1e-8, i.e. the Jacobi iterative method computes an approximate solution to the system of equations!
14
Try The Larger Systems. First I made the script into a function which takes the system matrix, right-hand-side vector, and stopping tolerance as input. It returns the approximate solution and the residual history.
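The function body is not reproduced in the transcript; a minimal sketch consistent with the call shown on the next slide, [x,residuals] = jacobisolve(mat,b,1e-4), could be saved as jacobisolve.m:

    function [x, residuals] = jacobisolve(A, b, tol)
    % Jacobi iteration: returns the approximate solution and the history of
    % the iteration errors used as the stopping criterion.
    % Assumes all diagonal entries of A are nonzero.
        d = full(diag(A));           % diagonal entries of A
        Q = A - diag(diag(A));       % A with its diagonal zeroed
        x = zeros(size(b));          % initial guess
        residuals = [];
        err = inf;
        while err > tol
            xnew = (b - Q*x) ./ d;   % Jacobi update
            err  = norm(xnew - x);
            residuals(end+1) = err;  % record the convergence history
            x = xnew;
        end
    end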
15
Run Time! First I built a circuit matrix using the gridcircuit.m script (with N=100). Then I built a random source vector for the right-hand side. Then I called the jacobisolve routine with: [x,residuals] = jacobisolve(mat,b,1e-4);
16
Convergence History (stopping criterion indicated on the plot).
17
Increasing System Size. Notice how the number of iterations required grows with N.
18
OK, so I kind of broke the rules. We set up the Jacobi iteration but did not ask the question: when will the Jacobi iteration converge, and how fast? Definition: a matrix A is (strictly) diagonally dominant if |a_ii| > sum over j ≠ i of |a_ij| for every row i. Theorem: if the matrix A is strictly diagonally dominant, then Ax = b has a unique solution x and the Jacobi iteration produces a sequence which converges to x for any initial guess. Informally: the "more diagonally dominant" a matrix is, the faster the iteration will converge... this holds some of the time.
19
Going Back To The Circuit System. For the gridcircuit code I set up the resistors so that all the internal circuit loops share all their resistors with other loops. The current balance law then implies that, for an internal cell, the total cell resistance equals the sum of the resistances shared with neighboring cells, i.e. the row sums for the internal loop equations are zero (the diagonal entry exactly balances the off-diagonal entries). So the matrix is only weakly diagonally dominant and does not exactly satisfy the convergence criterion for the Jacobi iterative scheme.
20
Slight Modification of the Circuit. I added an additional random resistor to each cell (i.e. increased the diagonal entry and did not change the off-diagonal entries). This modification ensures that the matrix is now strictly diagonally dominant.
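A minimal MATLAB sketch of this modification (A here is a stand-in sparse matrix; the real loop matrix would come from gridcircuit.m):

    A = gallery('poisson', 10);              % stand-in for the loop circuit matrix
    N = size(A, 1);
    A = A + spdiags(rand(N, 1), 0, N, N);    % random positive boost on the diagonal only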
21
Convergence history for the diagonally dominant system: notice the dramatic reduction in iteration count.
22
Gauss-Seidel. Example system, initial guess, and algorithm: for the i-th equation, compute the i-th degree of freedom using the values computed in the previous iteration together with the new values just computed in the current iteration.
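As on the Jacobi slide, the formulas are not reproduced in the transcript; the standard Gauss-Seidel update being described is

    x_i^{(k+1)} = \frac{1}{a_{ii}} \Big( b_i
                  - \sum_{j < i} a_{ij} x_j^{(k+1)}
                  - \sum_{j > i} a_{ij} x_j^{(k)} \Big),
    \qquad i = 1, \dots, N.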
23
Cleaning The Scheme Up
24
First Iteration. 1st iteration: as soon as the new (level 1) values are computed, we use them in the next equations.
25
Theorem First This Time! So we should first ask the questions: 1) When will the Gauss-Seidel iteration converge? 2) How fast will it converge? Definition: a matrix A is said to be positive definite if x^T A x > 0 for every nonzero vector x. Theorem: if A is symmetric and positive definite, then the Gauss-Seidel iteration converges for any initial guess for x. Unofficially: Gauss-Seidel will converge roughly twice as fast as Jacobi in some cases.
26
Gauss-Seidel Algorithm. We iterate:
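The iteration formula on this slide is not in the transcript. A minimal MATLAB sketch of a Gauss-Seidel solver, written as a companion to jacobisolve (the function name gaussseidelsolve is made up for illustration), is:

    function [x, residuals] = gaussseidelsolve(A, b, tol)
    % Gauss-Seidel iteration: each sweep reuses the entries of x that have
    % already been updated in the current sweep.
    % Assumes all diagonal entries of A are nonzero.
        N = length(b);
        x = zeros(N, 1);
        residuals = [];
        err = inf;
        while err > tol
            xold = x;
            for i = 1:N
                x(i) = (b(i) - A(i, 1:i-1) * x(1:i-1) ...
                             - A(i, i+1:N) * x(i+1:N)) / A(i, i);
            end
            err = norm(x - xold);
            residuals(end+1) = err;   % record the convergence history
        end
    end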
27
Comparing Jacobi and Gauss-Seidel. Same problem: Gauss-Seidel takes almost half the work.
28
The Catch. OK, so it looks like one would always use Gauss-Seidel rather than Jacobi iteration. However, let us consider the parallel implementation.
29
Volunteer To Design A Parallel Version
1) Decide which cells go where.
2) Decide how much information each process needs to keep locally.
3) Decide what information needs to be communicated among processes.
4) Are there any intrinsic bottlenecks in Gauss-Seidel or Jacobi?
5) Can we devise a hybrid version of Gauss-Seidel which avoids the bottlenecks?
30
Project 3 (serial part). In C:
1) Build a sparse matrix based on one of the random circuits (design the storage for it yourself, or use someone else's sparse storage structure or class).
2) Write a sparse matrix-times-vector routine.
3) Implement the Jacobi iterative scheme.