Iterative Methods and Combinatorial Preconditioners
4/29/2019
This talk is not about…
Credits
Solving Symmetric Diagonally-Dominant Systems By Preconditioning: Bruce Maggs, Gary Miller, Ojas Parekh, R. Ravi, mw
Combinatorial Preconditioners for Large, Sparse, Symmetric, Diagonally-Dominant Linear Systems: Keith Gremban (CMU PhD 1996)
Linear Systems
Ax = b, where A is a known n by n matrix, x is an unknown n by 1 vector, and b is a known n by 1 vector.
A useful way to do matrix algebra in your head: matrix-vector multiplication = linear combination of matrix columns.
Matrix-Vector Multiplication
If A has columns A1, A2, A3, then A(a, b, c)ᵀ = a·A1 + b·A2 + c·A3.
Using BᵀAᵀ = (AB)ᵀ, xᵀA should also be interpreted as a linear combination of the rows of A.
How to Solve?
Ax = b
- Find A⁻¹
- Guess x repeatedly until we guess a solution
- Gaussian Elimination
(Strassen had a faster method to find A⁻¹.)
Large Linear Systems in The Real World
Circuit Voltage Problem
Given a resistive network and the net current flow at each terminal, find the voltage at each node.
A node with an external connection is a terminal; otherwise it's a junction.
Conductance is the reciprocal of resistance. Its unit is the siemens (S).
(Figure: example circuit with conductances 3S, 2S, 1S, 1S and terminal currents 2A, 2A, 4A.)
Kirchhoff's Law of Current
At each node, the net current flow is 0. Consider v1. Using I = VC, we have 2(v1 − v4) + 3(v1 − v2) = 2, which after regrouping yields 5v1 − 3v2 − 2v4 = 2.

Summing Up
(Figure: the equations at all four nodes of the example circuit, collected into one linear system.)

Solving
(Figure: the example circuit solved; the node voltages are 2V, 2V, 3V, 0V.)
Did I say "LARGE"? Imagine this being the power grid of America.
Laplacians
Given a weighted, undirected graph G = (V, E), we can represent it as a Laplacian matrix.
(Figure: the example graph with edge weights 3, 2, 1, 1.)
Laplacians
Laplacians have many interesting properties:
- Diagonal entries ≥ 0: each is the total weight incident on a node
- Off-diagonal entries ≤ 0: each is the negation of an individual edge weight
- Every row sums to 0
- Symmetric
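These properties can be checked numerically. A minimal sketch, assuming the 4-node example graph has edges v1-v2 of weight 3, v1-v4 of weight 2, and v2-v3, v3-v4 of weight 1 (the exact edge list is an assumption read off the figure):

```python
import numpy as np

def laplacian(n, edges):
    """Build the Laplacian of a weighted, undirected graph.
    edges: list of (u, v, weight) with 0-based vertex indices."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w          # diagonal: total incident weight
        L[v, v] += w
        L[u, v] -= w          # off-diagonal: negated edge weight
        L[v, u] -= w
    return L

# Assumed edge list for the 4-node example (weights 3, 2, 1, 1).
L = laplacian(4, [(0, 1, 3.0), (0, 3, 2.0), (1, 2, 1.0), (2, 3, 1.0)])
assert np.allclose(L, L.T)                    # symmetric
assert np.allclose(L.sum(axis=1), 0)          # row sums are zero
assert np.all(np.linalg.eigvalsh(L) > -1e-9)  # positive semidefinite
```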
Net Current Flow Lemma
Suppose an n by n matrix A is the Laplacian of a resistive network G with n nodes. If y is the n-vector specifying the voltage at each node of G, then Ay is the n-vector representing the net current flow at each node.

Power Dissipation Lemma
Suppose an n by n matrix A is the Laplacian of a resistive network G with n nodes. If y is the n-vector specifying the voltage at each node of G, then yᵀAy is the total power dissipated by G.
Sparsity
Laplacians arising in practice are usually sparse.
The i-th row has (d+1) nonzeros if vi has d neighbors.
Sparse Matrix
An n by n matrix is sparse when there are O(n) nonzeros.
A reasonably-sized power grid has way more junctions than terminals, and each junction has only a couple of neighbors.
Had Gauss owned a supercomputer…
(Would he really work on Gaussian Elimination?)
A Model Problem
Let G(x, y) and g(x, y) be continuous functions defined in R and S respectively, where R and S are respectively the region and the boundary of the unit square (as in the figure).
A Model Problem
We seek a function u(x, y) that satisfies Poisson's equation in R and the boundary condition in S:
∂²u/∂x² + ∂²u/∂y² = G(x, y) in R
u(x, y) = g(x, y) on S
If g(x, y) = 0, this is called a Dirichlet boundary condition.
Discretization
Imagine a uniform grid with a small spacing h.
Five-Point Difference
Replace the partial derivatives by difference quotients:
∂²u/∂x² ≈ [u(x+h, y) − 2u(x, y) + u(x−h, y)] / h²
∂²u/∂y² ≈ [u(x, y+h) − 2u(x, y) + u(x, y−h)] / h²
Poisson's equation now becomes
u(x+h, y) + u(x−h, y) + u(x, y+h) + u(x, y−h) − 4u(x, y) = h²G(x, y).
Exercise: Derive the five-point difference equation from first principles (limits).
For each point in R, we have
u(x+h, y) + u(x−h, y) + u(x, y+h) + u(x, y−h) − 4u(x, y) = h²G(x, y).
The total number of equations is (1/h − 1)². Now write them in matrix form: we've got one BIG linear system to solve!
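Assembling those equations for a k by k interior grid (k = 1/h − 1) gives the classic block-tridiagonal system. A sketch using the standard Kronecker-sum construction (signs flipped relative to the slide's equation so that A comes out positive definite):

```python
import numpy as np

def poisson_matrix(k):
    """5-point Laplacian on a k-by-k interior grid (Dirichlet boundary).
    Row i is the equation 4u - (sum of neighbor u's) = h^2 * G,
    i.e. the slide's equation multiplied by -1."""
    T = 2 * np.eye(k) - np.eye(k, k=1) - np.eye(k, k=-1)  # 1-D second difference
    I = np.eye(k)
    return np.kron(I, T) + np.kron(T, I)                  # 2-D Kronecker sum

A = poisson_matrix(3)            # 9 equations for a 3x3 interior grid (h = 1/4)
assert np.allclose(A, A.T)       # symmetric
assert np.all(np.diag(A) == 4)   # each row has the 4u(x, y) term
# Solving A u = b recovers the grid values:
rng = np.random.default_rng(0)
u = rng.standard_normal(9)
assert np.allclose(np.linalg.solve(A, A @ u), u)
```

Each row indeed has at most 5 nonzeros: the diagonal 4 and up to four −1 entries for the neighbors.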
An Example
Consider u3,1. We have
u2,1 + u4,1 + u3,0 + u3,2 − 4u3,1 = h²G3,1
which, since the boundary value u3,0 is known, can be rearranged to
u2,1 + u4,1 + u3,2 − 4u3,1 = h²G3,1 − u3,0.
An Example
Each row and column can have a maximum of 5 nonzeros.
Sparse Matrix Again Really, it’s rare to see large dense matrices arising from applications.
Laplacian??? I showed you a system that is not quite Laplacian.
We’ve got way too many boundary points in a 3x3 example.
Making It Laplacian We add a dummy variable and force it to zero. (How to force? Well, look at the rank of this matrix first…)
Sparse Matrix Representation
A simple scheme: an array of columns, where each column Aj is a linked list of tuples (i, x).
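A minimal sketch of that scheme in Python, with plain lists standing in for the linked lists (the class and method names are illustrative, not from the talk):

```python
class SparseMatrix:
    """Column-wise sparse storage: cols[j] is a list of (i, x) tuples,
    one per nonzero entry A[i][j]."""
    def __init__(self, n):
        self.n = n
        self.cols = [[] for _ in range(n)]

    def add(self, i, j, x):
        self.cols[j].append((i, x))

    def matvec(self, v):
        """Compute A @ v as a linear combination of the columns of A."""
        out = [0.0] * self.n
        for j, col in enumerate(self.cols):
            for i, x in col:
                out[i] += x * v[j]
        return out

A = SparseMatrix(2)
A.add(0, 0, 2.0)
A.add(1, 1, 3.0)
assert A.matvec([1.0, 1.0]) == [2.0, 3.0]
```

Note how matvec walks the columns, mirroring the "linear combination of matrix columns" view from earlier.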
Solving Sparse Systems
Gaussian Elimination again? Let’s look at one elimination step.
Solving Sparse Systems
Gaussian Elimination introduces fill.
Fill
Of course it depends on the elimination order.
Finding an elimination order with minimal fill is hopeless:
- NP-complete: Garey and Johnson (problem GT46); Yannakakis, SIAM J. Alg. Disc. Meth. 1981
- O(log n) approximation: Sudipto Guha, "Nested Graph Dissection and Approximation Algorithms", FOCS 2000
- Ω(n log n) lower bound on fill (Maverick still has not dug up the paper…)
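The effect of elimination order on fill shows up already on a star graph: eliminating the center first turns the leaves into a clique, while eliminating the leaves first creates no fill at all. A small sketch of symbolic elimination:

```python
def fill_count(adj, order):
    """Count fill edges created by symbolic Gaussian elimination.
    adj: dict mapping each vertex to its set of neighbors (undirected)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # local copy
    fill = 0
    for v in order:
        nbrs = list(adj[v])
        # Eliminating v turns its current neighborhood into a clique.
        for a in nbrs:
            for b in nbrs:
                if a < b and b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    fill += 1
        for a in nbrs:          # remove v from the graph
            adj[a].discard(v)
        del adj[v]
    return fill

# Star graph: center 0 joined to leaves 1..4.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
assert fill_count(star, [0, 1, 2, 3, 4]) == 6  # center first: clique on leaves
assert fill_count(star, [1, 2, 3, 4, 0]) == 0  # leaves first: no fill
```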
When Fill Matters…
Ax = b
- Find A⁻¹
- Guess x repeatedly until we guess a solution
- Gaussian Elimination
Inverse Of Sparse Matrices
…are not necessarily sparse either!
(Figure: a sparse matrix B whose inverse B⁻¹ is dense.)
And the winner is…
Ax = b
- Find A⁻¹
- Guess x repeatedly until we guess a solution
- Gaussian Elimination
Can we be so lucky?
Iterative Methods
Checkpoint:
- How large linear systems actually arise in practice
- Why Gaussian Elimination may not be the way to go
The Basic Idea
Start off with a guess x(0). Use x(i) to compute x(i+1) until convergence. We hope that
- the process converges in a small number of iterations
- each iteration is efficient
The RF Method [Richardson, 1910]
x(i+1) = x(i) + (b − Ax(i))
(Diagram: map the iterate x(i) from the domain to Ax(i) in the range, test it against b, and correct by the residual b − Ax(i).)
Why should it converge at all?
x(i+1) = x(i) + (b − Ax(i))
(Diagram: the same test-and-correct loop between the domain and range of A.)
It only converges when…
Theorem: A first-order stationary iterative method x(i+1) = Gx(i) + k converges iff ρ(G) < 1.
ρ(A) is the maximum absolute eigenvalue of A. (For RF, x(i+1) = (I − A)x(i) + b, so G = I − A.)
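A quick numerical check of the RF iteration; the 2 by 2 matrix here is an illustrative choice with ρ(I − A) = 0.2 < 1, so the theorem guarantees convergence:

```python
import numpy as np

A = np.array([[1.0, 0.2], [0.2, 1.0]])  # rho(I - A) = 0.2 < 1, so RF converges
b = np.array([1.0, 1.0])

x = np.zeros(2)                          # initial guess x(0)
for _ in range(50):
    x = x + (b - A @ x)                  # test Ax(i), correct by the residual

assert np.allclose(x, np.linalg.solve(A, b))
```

The error shrinks by a factor of ρ(I − A) = 0.2 per iteration; a matrix farther from the identity would make ρ(I − A) ≥ 1 and the same loop would diverge.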
Fate?
Ax = b
Once we are given the system, we do not have any control over A and b. How do we even guarantee convergence?
Preconditioning
B⁻¹Ax = B⁻¹b
Instead of dealing with A and b, we now deal with B⁻¹A and B⁻¹b.
(The word "preconditioning" originated with Turing in 1948, but his idea was slightly different.)
Preconditioned RF
x(i+1) = x(i) + B⁻¹(b − Ax(i))
Since we may precompute B⁻¹b by solving By = b, each iteration is dominated by computing B⁻¹Ax(i), which is a multiplication step Ax(i) and a direct-solve step Bz = Ax(i). Hence a preconditioned iterative method is in fact a hybrid.
The Art of Preconditioning
We have a lot of flexibility in choosing B:
- Solving Bz = Ax(i) must be fast
- B should approximate A well for a low iteration count
(Spectrum: B = I is trivial to solve but hardly helps; B = A preconditions perfectly, but then what's the point?)
Classics
x(i+1) = x(i) + B⁻¹(b − Ax(i))
Jacobi: Let D be the diagonal sub-matrix of A. Pick B = D.
Gauss-Seidel: Let L be the lower triangular part of A with zero diagonal. Pick B = L + D.
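Both classics drop straight into the preconditioned RF template above; a sketch on a small diagonally dominant system (the matrix and right-hand side are illustrative):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
D = np.diag(np.diag(A))   # Jacobi: B = D
LD = np.tril(A)           # Gauss-Seidel: B = L + D

for B in (D, LD):
    x = np.zeros(3)
    for _ in range(60):
        x = x + np.linalg.solve(B, b - A @ x)  # preconditioned RF step
    assert np.allclose(x, np.linalg.solve(A, b))
```

Solving with B = D is a division per entry and with B = L + D a triangular solve, so both keep the per-iteration cost low, as the previous slide demands.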
"Combinatorial"
We choose to measure how well B approximates A by comparing combinatorial properties of (the graphs represented by) A and B. Hence the term "combinatorial preconditioner".
Questions?
Graphs as Matrices
Edge-Vertex Incidence Matrix
Given an undirected graph G = (V, E), let Γ be a |E| × |V| matrix over {−1, 0, 1}. For each edge e = (u, v), set Γe,u to −1 and Γe,v to 1. All other entries are zeros.
Edge-Vertex Incidence Matrix
(Figure: the incidence matrix of the example graph, rows indexed by edges and columns by vertices a through j.)
Weighted Graphs
Let W be an |E| × |E| diagonal matrix where We,e is the weight of edge e.
(Figure: the example graph on vertices a through j with edge weights 3, 6, 2, 1, 8, 9, 4, 5.)
Laplacian
The Laplacian of G is defined to be ΓᵀWΓ.
(Figure: the example graph with edge weights 3, 6, 2, 1, 8, 9, 4, 5.)
Properties of Laplacians
Let L = ΓᵀWΓ. "Prove by example."
Properties of Laplacians
A matrix A is positive semidefinite if xᵀAx ≥ 0 for all x. Since L = ΓᵀWΓ, it's easy to see that L is PSD: for all x, xᵀLx = (Γx)ᵀW(Γx) = ‖W^(1/2)Γx‖² ≥ 0.
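The factorization and the PSD argument can be checked directly; a sketch on a toy triangle graph (the edge list and weights are illustrative, not the talk's 10-vertex example):

```python
import numpy as np

# Triangle on vertices 0, 1, 2 with edge weights 3, 6, 2 (illustrative).
edges = [(0, 1, 3.0), (1, 2, 6.0), (0, 2, 2.0)]
n = 3

Gamma = np.zeros((len(edges), n))       # edge-vertex incidence matrix
W = np.zeros((len(edges), len(edges)))  # diagonal weight matrix
for e, (u, v, w) in enumerate(edges):
    Gamma[e, u], Gamma[e, v] = -1.0, 1.0
    W[e, e] = w

L = Gamma.T @ W @ Gamma
assert np.allclose(L, L.T)            # symmetric
assert np.allclose(L.sum(axis=1), 0)  # rows sum to zero
x = np.array([1.0, -2.0, 0.5])
# x^T L x = sum over edges of w_e * (x_v - x_u)^2 >= 0
assert np.isclose(x @ L @ x,
                  sum(w * (x[v] - x[u]) ** 2 for u, v, w in edges))
```

The per-edge sum is exactly the power-dissipation reading of xᵀLx: each edge of conductance w burns w times the squared voltage difference across it.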
Laplacian
(Figure: the Laplacian matrix ΓᵀWΓ of the example graph, rows and columns indexed by vertices a through j.)
Graph Embedding: A Primer
Graph Embedding
A vertex in G (the guest) maps to a vertex in H (the host); an edge in G maps to a path in H.
(Figure: a guest graph and a host graph, both on vertices a through j.)
Dilation
For each edge e in G, define dil(e) to be the number of edges in its corresponding path in H.
Congestion
For each edge e in H, define cong(e) to be the number of embedding paths that use e.
Support Theory
Disclaimer The presentation to follow is only “essentially correct”.
Support
Definition: The support required by a matrix B for a matrix A, both n by n in size, is
σ(A/B) := min { τ ∈ ℝ | ∀x, xᵀ(τB − A)x ≥ 0 }.
Think of B supporting A at the bottom.
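For Laplacians of connected graphs (which share the all-ones nullspace), the support equals the largest generalized eigenvalue of the pair, which can be sketched numerically with the pseudoinverse (the graphs below are illustrative):

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

def support(A, B):
    """sigma(A/B) = min tau with tau * x'Bx >= x'Ax for all x,
    computed as the largest eigenvalue of pinv(B) @ A."""
    return max(np.linalg.eigvals(np.linalg.pinv(B) @ A).real)

# A path graph vs. the same path with doubled weights.
A = laplacian(3, [(0, 1, 1.0), (1, 2, 1.0)])
B = laplacian(3, [(0, 1, 2.0), (1, 2, 2.0)])
assert np.isclose(support(A, A), 1.0)  # one copy of A supports A
assert np.isclose(support(A, B), 0.5)  # half a copy of 2A supports A
```

In circuit terms, doubling every conductance halves the number of copies needed, matching the reading on the following slides.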
Support With Laplacians
Life is Good when the matrices are Laplacians. Remember the resistive circuit analogy?
Power Dissipation Lemma
Suppose an n by n matrix A is the Laplacian of a resistive network G with n nodes. If y is the n-vector specifying the voltage at each node of G, then yᵀAy is the total power dissipated by G.
Circuit-Speak
σ(A/B) := min { τ ∈ ℝ | ∀x, xᵀ(τB − A)x ≥ 0 }
Read this aloud in circuit-speak: "The support for A by B is the minimum number τ so that, for all possible voltage settings, τ copies of B burn at least as much energy as one copy of A."
Congestion-Dilation Lemma
Given an embedding from G to H,
σ(G/H) ≤ (max over e of cong(e)) · (max over e of dil(e)).
Transitivity
σ(A/C) ≤ σ(A/B) · σ(B/C)
Pop Quiz: For Laplacians, prove this in circuit-speak.
Generalized Condition Number
Definition: The generalized condition number of a pair of PSD matrices A and B is κ(A, B) = σ(A/B) · σ(B/A).
(A is positive semidefinite iff ∀x, xᵀAx ≥ 0.)
Preconditioned Conjugate Gradient
Solving the system Ax = b using PCG with preconditioner B requires at most O(√κ(A, B) · log(1/ε)) iterations to find a solution x̃ such that ‖x̃ − x‖A ≤ ε‖x‖A. The convergence rate is dependent on the actual iterative method used.
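A textbook PCG loop (a generic sketch, not the paper's code), shown with a Jacobi preconditioner on a small SPD system; B_solve applies B⁻¹, so any preconditioner with a fast solve slots in:

```python
import numpy as np

def pcg(A, b, B_solve, iters=50, tol=1e-10):
    """Preconditioned conjugate gradient.
    B_solve(r) applies the preconditioner, returning B^{-1} r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = B_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(iters):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = B_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)  # Jacobi preconditioner: B = diag(A)
assert np.allclose(x, np.linalg.solve(A, b))
```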
Support Trees
Information Flow
x(i+1) = x(i) + (b − Ax(i))
In many iterative methods, the only operation using A directly is to compute Ax(i) in each iteration. Imagine each node is an agent maintaining its value. The update formula specifies how each agent should update its value for round (i+1) given all the values in round i.
The Problem With Multiplication
Only neighbors can "communicate" in a multiplication, which happens once per iteration.
Diameter As A Natural Lower Bound
In general, for a node to settle on its final value, it needs to "know" at least the initial values of the other nodes.
(The diameter is the maximum shortest-path distance between any pair of nodes.)
Preconditioning As Shortcutting
x(i+1) = x(i) + B⁻¹(b − Ax(i))
By picking B carefully, we can introduce shortcuts for faster communication. But is it easy to find shortcuts in a sparse graph to reduce its diameter?
Square Mesh
Let's pick the complete graph induced on all the mesh points.
Mesh: O(n) edges. Complete graph: O(n²) edges.
Bang!
So exactly how do we propose to solve a dense n by n system faster than a sparse one? B can have at most O(n) edges, i.e., it must be sparse…
Support Tree
Build a Steiner tree to introduce shortcuts!
If we pick a balanced tree, no node will be farther than O(log n) hops from any other. We still need to specify weights on the tree edges.
Mixing Speed
The speed of communication is proportional to the corresponding coefficients on the paths between nodes.
Setting Weights
The nodes should be able to talk at least as fast as they could without shortcuts. How about setting all the weights to 1?
Recall PCG's Convergence Rate
Solving the system Ax = b using PCG with preconditioner B requires at most O(√κ(A, B) · log(1/ε)) iterations to find a solution x̃ such that ‖x̃ − x‖A ≤ ε‖x‖A.
Size Matters
How big is the preconditioner matrix B? (2n−1) by (2n−1).
The major contribution (among others) of our paper is dealing with this disparity in size.
The Search Is Hard
Finding the "right" preconditioner is really a tradeoff:
- Solving Bz = Ax(i) must be fast
- B should approximate A well for a low iteration count
It could very well be harder than we think.
How to Deal With Steiner Nodes?
The Trouble of Steiner Nodes
The extra Steiner nodes make B larger than A, which breaks both
- computation, e.g., x(i+1) = x(i) + B⁻¹(b − Ax(i)), and
- definitions, e.g., σ(B/A) := min { τ ∈ ℝ | ∀x, xᵀ(τA − B)x ≥ 0 }.
Generalized Support
Let B = [[T, U], [Uᵀ, W]], where W is n by n. Then σ(B/A) is defined to be
min { τ ∈ ℝ | ∀x, τxᵀAx ≥ (y, x)ᵀ B (y, x) }, where y = −T⁻¹Ux.
Circuit-Speak
σ(B/A) := min { τ ∈ ℝ | ∀x, τxᵀAx ≥ (y, x)ᵀ B (y, x) }, with y = −T⁻¹Ux
Read this aloud in circuit-speak: "The support for B by A is the minimum number τ so that, for all possible voltage settings at the terminals, τ copies of A burn at least as much energy as one copy of B."
Thomson's Principle
Fix the voltages at the terminals. The voltages at the junctions will be set such that the total power dissipation in the circuit is minimized.
(Figure: example with terminal voltages 2V, 5V, 1V, 3V.)
Racke’s Decomposition Tree
Laminar Decomposition
A laminar decomposition naturally defines a tree.
(Figure: a laminar decomposition of the example graph on vertices a through j, and the tree it defines.)
Racke, FOCS 2002
Given a graph G of n nodes, there exists a laminar decomposition tree T with all the "right" properties as a preconditioner for G.
Except his advisor didn't tell him about this…
For More Details
Our paper is available online.
Our contributions:
- Analysis of Racke's decomposition tree as a preconditioner
- Tools for reasoning about support between Laplacians of different dimensions
The Search Is NOT Over
Future directions:
- Racke's tree can take exponential time to find
- Many recent improvements, but not quite there yet
- Insisting on a balanced tree can hurt in many easy cases
Q&A
Junk Yard
How’s this related to separators?
Still needs to incorporate slides from the talk on Racke’s tree. A naïve separator tree won’t work.
Previous Work