Solving Linear Systems Ax=b

Presentation on theme: "Solving Linear Systems Ax=b"— Presentation transcript:

1 Solving Linear Systems Ax=b
7/3/2018

2 Linear Systems A x = b, where A is a known n by n matrix, b is a known n by 1 vector, and x is the unknown n by 1 vector.
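To make the setup concrete, here is a minimal sketch (not from the slides) of solving a small dense system with NumPy; the 3 by 3 matrix and right-hand side are made up purely for illustration.

```python
# A minimal sketch: solve Ax = b for the unknown x, given a known A and b.
import numpy as np

A = np.array([[ 4., -1.,  0.],
              [-1.,  4., -1.],
              [ 0., -1.,  4.]])   # known n-by-n matrix
b = np.array([1., 2., 3.])        # known n-by-1 right-hand side

x = np.linalg.solve(A, b)         # the unknown n-by-1 vector
print(x)
assert np.allclose(A @ x, b)      # the residual is (numerically) zero
```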

3 Linear Systems in The Real World

4 Circuit Voltage Problem
Given a resistive network and the net current flow at each terminal, find the voltage at each node. A node with an external connection is a terminal; otherwise it's a junction. Conductance is the reciprocal of resistance; its unit is siemens (S). (Figure: a four-node network with conductances 3S, 2S, 1S, 1S and external currents 2A, 2A, 4A.)

5 Kirchhoff's Law of Current
At each node, net current flow = 0. Each conductance carries current I = VC (voltage difference times conductance). Consider v1: the 2A external current plus the currents through v1's conductances must sum to zero, e.g. 2 + 3(v2 - v1) + … = 0, which after regrouping yields 3(v1 - v2) + … = 2, a linear equation in the node voltages v1, v2, v3, v4.

6 Summing Up Here A represents the conductances, v represents the voltages at the nodes, and Av is the net current flow at each node. (Figure: the same four-node network, nodes v1, v2, v3, v4.)

7 Solving Here A represents the conductances, v represents the voltages at the nodes, and Av is the net current flow at each node. (Figure: the solved network; the node voltages come out to 2V, 2V, 3V and 0V.)

8 Sparse Matrix An n by n matrix is sparse when there are O(n) nonzeros.
A reasonably-sized power grid has way more junctions, and each junction has only a couple of neighbors. (Figure: the four-node example again, with edge weights 3, 2, 1, 1.)

9 Sparse Matrix Representation
A simple scheme: an array of columns, where each column Aj is a linked list of tuples (i, x), one per nonzero entry.
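A minimal sketch of this scheme in Python (the names are my own, and an ordinary Python list stands in for the linked list): each column holds (i, x) tuples, and a matrix-vector product only touches the stored nonzeros.

```python
# Sparse matrix as an array of columns; cols[j] lists the (row, value) pairs of column j,
# so an n-by-n matrix with O(n) nonzeros takes O(n) space.
class SparseMatrix:
    def __init__(self, n):
        self.n = n
        self.cols = [[] for _ in range(n)]   # cols[j] is a list of (i, x) tuples

    def add(self, i, j, x):
        self.cols[j].append((i, x))          # record nonzero A[i][j] = x

    def matvec(self, v):
        """Return A @ v, touching only the stored nonzeros."""
        out = [0.0] * self.n
        for j, col in enumerate(self.cols):
            for i, x in col:
                out[i] += x * v[j]
        return out

# Usage: a 3x3 matrix with 4 nonzeros.
A = SparseMatrix(3)
A.add(0, 0, 2.0); A.add(1, 1, 3.0); A.add(2, 2, 1.0); A.add(0, 2, -1.0)
print(A.matvec([1.0, 1.0, 1.0]))   # [1.0, 3.0, 1.0]
```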

10 Modeling a Physical System
Let G(x, y) and g(x, y) be continuous functions defined on R and S respectively, where R is the region and S is the boundary of the unit square (as in the figure).

11 Modeling a Physical System
We seek a function u(x, y) that satisfies Poisson's equation ∂²u/∂x² + ∂²u/∂y² = G(x, y) in R and the boundary condition u(x, y) = g(x, y) on S. Example: g(x, y) specifies a fixed temperature at each point of the boundary, G(x, y) = 0, and u(x, y) is the temperature at each interior point.

12 Example Temperature on three sides of a square plate is 0; on the fourth side it is held at some given nonzero value.

13 Discretization Imagine a uniform grid over the unit square with a small spacing h.

14 Finite Difference Method
Replace the partial derivatives by difference quotients:
∂²u/∂x² ≈ [u(x+h, y) - 2u(x, y) + u(x-h, y)] / h²
∂²u/∂y² ≈ [u(x, y+h) - 2u(x, y) + u(x, y-h)] / h²
Poisson's equation now becomes
u(x+h, y) + u(x-h, y) + u(x, y+h) + u(x, y-h) - 4u(x, y) = h²G(x, y).
Exercise: derive this 5-point difference equation from first principles (the limit definition of the derivative).

15 For each point in R
The 5-point equation u(x+h, y) + u(x-h, y) + u(x, y+h) + u(x, y-h) - 4u(x, y) = h²G(x, y) holds at every interior grid point, so the total number of equations is about (1/h)², one per interior point. Writing them in matrix form, we've got one BIG linear system to solve!
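Here is a hedged sketch of assembling that system, using the sign convention reconstructed above (neighbors minus 4u(x, y) equals h²G) and moving known boundary values to the right-hand side. The function name, the grid indexing, and the caller-supplied G and g are my own choices, not the slides'.

```python
# Assemble the 5-point finite-difference system on an m-by-m grid of interior
# points of the unit square, with spacing h = 1/(m+1).
import numpy as np

def assemble_poisson(m, G, g):
    h = 1.0 / (m + 1)
    n = m * m                                  # one unknown per interior grid point
    A = np.zeros((n, n))
    b = np.zeros(n)
    idx = lambda i, j: (j - 1) * m + (i - 1)   # interior point (i, j) -> equation index
    for j in range(1, m + 1):
        for i in range(1, m + 1):
            r = idx(i, j)
            A[r, r] = -4.0
            b[r] = h * h * G(i * h, j * h)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 1 <= ni <= m and 1 <= nj <= m:
                    A[r, idx(ni, nj)] = 1.0    # interior neighbour: stays on the left
                else:
                    b[r] -= g(ni * h, nj * h)  # known boundary value: moves to the right
    return A, b

# Usage, echoing the plate example of slide 12: zero source term, temperature 0 on
# three sides and a fixed value 1 on the fourth side (y = 1).
A, b = assemble_poisson(3, lambda x, y: 0.0, lambda x, y: 1.0 if y == 1.0 else 0.0)
u = np.linalg.solve(A, b)
print(u.reshape(3, 3))
```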

16 An Example
Consider u3,1, a grid point adjacent to the boundary. Its 5-point equation u2,1 + u4,1 + u3,2 + u3,0 - 4u3,1 = h²G3,1 involves the boundary value u3,0, which is known from g, so it can be rearranged as u2,1 + u4,1 + u3,2 - 4u3,1 = h²G3,1 - u3,0.

17 An Example
In the resulting coefficient matrix, each row and column can have a maximum of 5 nonzeros.

18 How to Solve? A x = b. Find A⁻¹? Direct method: Gaussian elimination.
Iterative method: guess x repeatedly until we guess a solution.

19 Inverse Of Sparse Matrices
…are not necessarily sparse! (Figure: a sparse A next to its dense A⁻¹.)
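A quick numerical illustration of this point (mine, not the slides'): a tridiagonal matrix has O(n) nonzeros, yet its inverse has no zero entries at all.

```python
# A 1-D Laplacian is tridiagonal (3n - 2 nonzeros), but its inverse is dense.
import numpy as np

n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiagonal, O(n) nonzeros
Ainv = np.linalg.inv(A)

print(np.count_nonzero(A))                       # 3n - 2 = 22 nonzeros
print(np.count_nonzero(np.abs(Ainv) > 1e-12))    # all n*n = 64 entries are nonzero
```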

20 Direct Methods

21 Solution by LU Decomposition

22 Transform A to U through Gaussian Elimination

23 L Is Specified by the Multiples of the Pivot Rows
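The two slides above compress the whole idea of LU. Here is a minimal sketch, assuming no pivoting is needed, showing that Gaussian elimination turns A into an upper-triangular U while the multiples of the pivot rows that get subtracted are exactly the entries of L.

```python
# LU without pivoting: eliminate below each pivot, record the multipliers in L.
import numpy as np

def lu_no_pivot(A):
    A = A.astype(float)
    n = A.shape[0]
    L = np.eye(n)
    for k in range(n):                      # k-th pivot
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]           # multiple of the pivot row to subtract
            L[i, k] = m                     # ...which is exactly the entry L[i, k]
            A[i, k:] -= m * A[k, k:]        # eliminate entry (i, k)
    return L, np.triu(A)                    # A has been transformed into U

A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)
```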

24 Pivoting Creates Fill

25

26 Reordering rows and columns doesn’t change solution

27 … but might reduce fill

28 Graph Model of Matrix

29 Graph View of Partitioning

30 Disclaimer

31 Min-Degree Heuristic Repeatedly eliminate the vertex of smallest degree. Alternatively (the min-fill heuristic), eliminate the vertex whose elimination creates the smallest fill.
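A minimal sketch of the graph model behind these slides (my own code): eliminating a vertex turns its remaining neighbors into a clique, and every edge added this way is fill. The greedy loop below uses the minimum-degree rule; swapping the key to `fill_of` gives the min-fill variant mentioned on this slide.

```python
# Symbolic elimination on an adjacency-set graph, with a greedy min-degree ordering.
def fill_of(adj, v):
    """Number of fill edges created if v is eliminated now."""
    nbrs = list(adj[v])
    return sum(1 for i in range(len(nbrs)) for j in range(i + 1, len(nbrs))
               if nbrs[j] not in adj[nbrs[i]])

def min_degree_order(adj):
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    order, total_fill = [], 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # vertex of smallest degree
        total_fill += fill_of(adj, v)
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= (nbrs - {u})                # clique the neighbours: fill edges
        order.append(v)
    return order, total_fill

# Usage: a star graph. Eliminating the hub first would create a clique on the leaves
# (lots of fill); min-degree eliminates the leaves first and creates no fill at all.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(min_degree_order(star))   # ([1, 2, 3, 4, 0], 0)
```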

32 Separators

33 Nested Dissection

34 Nested Dissection Example
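As an illustration of the example, here is a sketch (my own, with arbitrary base-case and splitting choices) that generates a nested-dissection elimination order for a square grid: order each half recursively, and number the separator last so that eliminating either half never creates fill across it.

```python
# Nested-dissection ordering of the cells of a rows x cols grid.
def nested_dissection(rows, cols):
    """Return the grid cells in elimination order."""
    if len(rows) <= 2 or len(cols) <= 2:
        return [(r, c) for r in rows for c in cols]        # base case: order directly
    if len(rows) >= len(cols):                             # split along the longer side
        mid = len(rows) // 2
        top, sep, bottom = rows[:mid], rows[mid], rows[mid + 1:]
        return (nested_dissection(top, cols) +
                nested_dissection(bottom, cols) +
                [(sep, c) for c in cols])                  # the separator row goes last
    else:
        mid = len(cols) // 2
        left, sep, right = cols[:mid], cols[mid], cols[mid + 1:]
        return (nested_dissection(rows, left) +
                nested_dissection(rows, right) +
                [(r, sep) for r in rows])                  # the separator column goes last

# Usage: elimination order for a 7x7 grid of unknowns.
order = nested_dissection(list(range(7)), list(range(7)))
print(len(order), order[-7:])   # 49 cells; the last 7 form the top-level separator
```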

35 Analysis of Nested Dissection

36 Results Concerning Fill
Finding an elimination order with minimal fill is hopeless (NP-complete): Garey and Johnson [GT46]; Yannakakis, SIAM J. Alg. Disc. Meth., 1981.
O(log n) approximation: Sudipto Guha, "Nested Graph Dissection and Approximation Algorithms", FOCS 2000.
Ω(n log n) lower bound on fill (Maverick still has not dug up the paper…).

37 Iterative Methods

38 Algorithms Apply to Laplacians
Given a positively weighted, undirected graph G = (V, E), we can represent it as a Laplacian matrix. (Figure: the four-node example with edge weights 3, 2, 1, 1.)

39 Laplacians Laplacians have many interesting properties, such as:
Diagonals ≥ 0: each diagonal entry is the total weight incident to that vertex.
Off-diagonals ≤ 0: each nonzero off-diagonal entry is the negated weight of an individual edge.
Row sums and column sums are 0.
Symmetric.
(Figure: the four-node example with edge weights 3, 2, 1, 1.)
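A minimal sketch that builds a Laplacian from a weighted edge list and checks the properties listed above; the 4-vertex edge list is only illustrative and is not claimed to match the circuit in the figures.

```python
# Build the Laplacian of a positively weighted, undirected graph and verify its properties.
import numpy as np

def laplacian(n, edges):
    """edges: iterable of (u, v, weight) with 0-based vertex indices."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w      # diagonal: total incident weight
        L[u, v] -= w; L[v, u] -= w      # off-diagonal: minus the edge weight
    return L

L = laplacian(4, [(0, 1, 3.0), (0, 3, 2.0), (0, 2, 1.0), (1, 2, 1.0)])
assert np.allclose(L, L.T)               # symmetric
assert np.allclose(L.sum(axis=0), 0)     # column sums are 0
assert np.allclose(L.sum(axis=1), 0)     # row sums are 0
assert np.all(np.diag(L) >= 0)           # diagonals >= 0
```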

40 Laplacian??? Earlier I showed you a system that is not quite Laplacian: boundary points don't have 4 neighbors, so those rows do not sum to zero.

41 Making It Laplacian We add a dummy variable (an extra grounded node that absorbs the boundary terms) and force it to zero, so every row sums to zero again.

42 The Basic Idea Start off with a guess x(0).
Use x(i) to compute x(i+1), until convergence. We hope that the process converges in a small number of iterations, and that each iteration is efficient.

43 The RF Method [Richardson, 1910]
The residual of the current guess is b - Ax(i); correct the guess by it: x(i+1) = x(i) + (b - Ax(i)). (Figure: A maps the domain of x to the range of b; test Ax(i) against b, then correct to get x(i+1).) But the units are all wrong! x(i) is a voltage, while the residual is a current!
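A minimal sketch of the RF iteration as reconstructed above: correct the guess by the residual, x(i+1) = x(i) + (b - Ax(i)). The test matrix and the stopping tolerance are my own choices, picked so that the method converges (see the next slide).

```python
# RF (Richardson) iteration: add the current residual to the current guess.
import numpy as np

def rf_iteration(A, b, x0=None, tol=1e-10, max_iters=10_000):
    x = np.zeros_like(b) if x0 is None else x0.astype(float)
    for i in range(max_iters):
        r = b - A @ x                 # residual of the current guess
        if np.linalg.norm(r) < tol:
            return x, i
        x = x + r                     # the correction step
    return x, max_iters

A = np.array([[0.7, 0.2], [0.1, 0.6]])   # chosen so that the iteration converges
b = np.array([1.0, 1.0])
x, iters = rf_iteration(A, b)
print(x, iters, A @ x - b)
```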

44 Why should it converge at all?
x(i+1) = x(i) + (b - Ax(i)). (Figure: the same test-and-correct diagram.)

45 It only converges when…
Theorem: a first-order stationary iterative method x(i+1) = Gx(i) + k converges iff ρ(G) < 1, where ρ(G) is the maximum absolute eigenvalue of G. For the RF method, G = I - A.
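A small numerical check of this theorem for the RF method, whose iteration matrix is G = I - A; the two matrices are made-up examples, one convergent and one not.

```python
# Spectral radius of the iteration matrix decides convergence of the RF method.
import numpy as np

def spectral_radius(G):
    return max(abs(np.linalg.eigvals(G)))

good = np.array([[0.7, 0.2], [0.1, 0.6]])   # rho(I - A) = 0.5 < 1  -> RF converges
bad  = np.array([[3.0, 0.0], [0.0, 3.0]])   # rho(I - A) = 2.0 > 1  -> RF diverges
for A in (good, bad):
    print(spectral_radius(np.eye(2) - A))
```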

46 Fate? A x = b Once we are given the system, we have no control over A and b. How do we even guarantee convergence?

47 Preconditioning B⁻¹Ax = B⁻¹b
Instead of dealing with A and b, we now deal with B⁻¹A and B⁻¹b. The word "preconditioning" originated with Turing in 1948, but his idea was slightly different.

48 Preconditioned RF x(i+1) = x(i) + B⁻¹b - B⁻¹Ax(i)
Since we may precompute B⁻¹b by solving By = b, each iteration is dominated by computing B⁻¹Ax(i), which is a multiplication step Ax(i) and a direct-solve step Bz = Ax(i). Hence a preconditioned iterative method is in fact a hybrid of an iterative and a direct method.
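A minimal sketch of the preconditioned iteration. It folds the precomputed B⁻¹b and the per-iteration B⁻¹Ax(i) into a single solve against the residual, which is algebraically the same update; `solve_B` stands for whatever fast solver comes with the preconditioner, here just a dense solve for illustration.

```python
# Preconditioned RF: each iteration does one multiplication by A and one solve with B.
import numpy as np

def preconditioned_rf(A, solve_B, b, tol=1e-10, max_iters=10_000):
    x = np.zeros_like(b)
    for i in range(max_iters):
        r = b - A @ x                  # multiplication step: A x(i)
        if np.linalg.norm(r) < tol:
            return x, i
        x = x + solve_B(r)             # direct-solve step with B applied to the residual
    return x, max_iters

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
B = np.diag(np.diag(A))                # one simple choice of B (the Jacobi choice below)
x, iters = preconditioned_rf(A, lambda r: np.linalg.solve(B, r), b)
print(x, iters)
```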

49 The Art of Preconditioning
We have a lot of flexibility in choosing B: solving Bz = Ax(i) must be fast, and B should approximate A well for a low iteration count. (Figure: a spectrum of choices from B = I, labeled "Trivial", to B = A, labeled "What's the point?".)

50 Classics x(i+1) = x(i) + B⁻¹b - B⁻¹Ax(i)
Jacobi: let D be the diagonal sub-matrix of A; pick B = D.
Gauss-Seidel: let L be the lower-triangular part of A with zero diagonal; pick B = L + D.
Conjugate gradient: move in a conjugate (A-orthogonal) direction in each iteration.
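A small comparison (my own) of the first two classic choices of B, measured by the spectral radius of the iteration matrix G = I - B⁻¹A from slide 45: the smaller ρ(G), the faster the preconditioned iteration converges.

```python
# Compare the Jacobi and Gauss-Seidel choices of B on a small test matrix.
import numpy as np

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])

D = np.diag(np.diag(A))          # diagonal sub-matrix of A
L = np.tril(A, k=-1)             # lower-triangular part of A with zero diagonal
B_jacobi = D                     # Jacobi:       B = D
B_gauss_seidel = L + D           # Gauss-Seidel: B = L + D

for name, B in [("Jacobi", B_jacobi), ("Gauss-Seidel", B_gauss_seidel)]:
    G = np.eye(3) - np.linalg.solve(B, A)      # G = I - B^{-1} A
    rho = max(abs(np.linalg.eigvals(G)))
    print(name, rho)   # Gauss-Seidel gives the smaller spectral radius here
```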

