The Landscape of Sparse Ax=b Solvers

[Diagram: solvers arranged along two axes. Horizontally, direct methods (A = LU) at one end and iterative methods (y' = Ay) at the other: direct methods are more robust, iterative methods need less storage (if sparse). Vertically, nonsymmetric problems (more general) versus symmetric positive definite problems (more robust).]

Column Cholesky Factorization

    for j = 1 : n
        L(j:n, j) = A(j:n, j);
        for k = 1 : j-1
            % cmod(j,k)
            L(j:n, j) = L(j:n, j) - L(j, k) * L(j:n, k);
        end;
        % cdiv(j)
        L(j, j) = sqrt(L(j, j));
        L(j+1:n, j) = L(j+1:n, j) / L(j, j);
    end;

Column j of A becomes column j of L. [Diagram: A = L L^T, with column j highlighted.]
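A quick way to sanity-check this loop is to run it on a small symmetric positive definite matrix and compare against Matlab's built-in chol. This is a minimal sketch; the test matrix and the shift n*eye(n) are illustrative choices, not from the slides:

    n = 6;
    B = randn(n);
    A = B*B' + n*eye(n);           % symmetric positive definite by construction
    L = zeros(n);
    for j = 1:n
        L(j:n, j) = A(j:n, j);
        for k = 1:j-1
            L(j:n, j) = L(j:n, j) - L(j, k) * L(j:n, k);
        end
        L(j, j) = sqrt(L(j, j));
        L(j+1:n, j) = L(j+1:n, j) / L(j, j);
    end
    norm(L - chol(A)', 'fro')      % should be near machine precision

(chol returns the upper triangular factor R with A = R'*R, so L = R'.)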

Sparse Column Cholesky Factorization

    for j = 1 : n
        L(j:n, j) = A(j:n, j);
        for k < j with L(j, k) nonzero
            % sparse cmod(j,k)
            L(j:n, j) = L(j:n, j) - L(j, k) * L(j:n, k);
        end;
        % sparse cdiv(j)
        L(j, j) = sqrt(L(j, j));
        L(j+1:n, j) = L(j+1:n, j) / L(j, j);
    end;

Column j of A becomes column j of L.
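The line "for k < j with L(j, k) nonzero" is the whole point: only columns that actually touch row j do any work. As a sketch it can be written with find, though a production code would enumerate those k via the elimination tree rather than scanning a stored row:

    A = gallery('poisson', 4);      % small sparse SPD test matrix (16 x 16)
    n = size(A, 1);
    L = sparse(n, n);
    for j = 1:n
        L(j:n, j) = A(j:n, j);
        for k = find(L(j, 1:j-1))   % only k < j with L(j, k) nonzero
            L(j:n, j) = L(j:n, j) - L(j, k) * L(j:n, k);
        end
        L(j, j) = sqrt(L(j, j));
        L(j+1:n, j) = L(j+1:n, j) / L(j, j);
    end
    norm(L - chol(A)', 'fro')       % agrees with the built-in factorization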

Graphs and Sparse Matrices: Cholesky factorization

[Diagram: the graph G(A) and the filled graph G+(A), which is chordal.]

Symmetric Gaussian elimination:

    for j = 1 to n
        add edges between j's higher-numbered neighbors

Fill: new nonzeros in the factor.
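This elimination process is easy to simulate directly on an adjacency matrix. A minimal (and deliberately naive) sketch; the example matrix is an arbitrary choice:

    A = full(gallery('poisson', 3));     % small symmetric example (9 x 9)
    n = size(A, 1);
    G = (A ~= 0) & ~eye(n);              % adjacency matrix of G(A)
    Gplus = G;
    for j = 1:n
        nbrs = find(Gplus(j, :) & ((1:n) > j));  % j's higher-numbered neighbors
        Gplus(nbrs, nbrs) = true;                % clique them: this is the fill
    end
    Gplus = Gplus & ~eye(n);
    (nnz(Gplus) - nnz(G)) / 2            % number of fill edges

    % Barring exact numerical cancellation, G+(A) is the structure of L + L':
    L = chol(A)';
    isequal(Gplus, (L + L' ~= 0) & ~eye(n))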

Cholesky Graph Game

Given an undirected graph G = G(A), repeat:
    choose a vertex v and mark it;
    add edges between the unmarked neighbors of v;
until every vertex is marked.

Goal: end up with as few edges as possible.

Output: a labeling of the vertices with the numbers 1 to n, corresponding to a symmetric permutation of the matrix A.

Path lemma [Davis Thm 4.1]

Let G = G(A) be the graph of a symmetric positive definite matrix with vertices 1, 2, ..., n, and let G+ = G+(A) be the filled graph. Then (v, w) is an edge of G+ if and only if G contains a path from v to w of the form (v, x_1, x_2, ..., x_k, w) with x_i < min(v, w) for each i.

(This includes the possibility k = 0, in which case (v, w) is an edge of G and therefore of G+.)

Elimination Tree

[Diagram: the Cholesky factor, the filled graph G+(A), and the elimination tree T(A).]

T(A): parent(j) = min { i > j : (i, j) is an edge of G+(A) };
equivalently, parent(col j) = first nonzero row below the diagonal in column j of L.

T describes the dependencies among the columns of the factor:
    G+(A) can be computed easily from T;
    T can be computed from G(A) in almost linear time.
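Matlab computes the elimination tree directly with the built-in etree. The snippet below also rebuilds parent(j) from a numeric factor to match the definition above (a sketch; it assumes no exact cancellation in L):

    A = gallery('poisson', 4);        % sparse SPD example
    parent = etree(A);                % parent(j) = 0 when j is a root

    % First nonzero row below the diagonal in each column of L:
    L = chol(A)';
    n = size(A, 1);
    p2 = zeros(1, n);
    for j = 1:n
        i = find(L(j+1:n, j), 1);     % offset of first subdiagonal nonzero
        if ~isempty(i), p2(j) = j + i; end
    end
    isequal(parent, p2)               % the two definitions agree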

Facts about elimination trees

If G(A) is connected, then T(A) is connected (it's a tree, not a forest).
If A(i, j) is nonzero and i > j, then i is an ancestor of j in T(A).
If L(i, j) is nonzero, then i is an ancestor of j in T(A). [Davis Thm 4.4]
T(A) is a depth-first spanning tree of G+(A).
T(A) is the transitive reduction of the directed graph G(L^T).

Describing the nonzero structure of L in terms of G(A) and T(A)

If (i, k) is an edge of G with i > k, then the edges of G+ include (i, k); (i, p(k)); (i, p(p(k))); (i, p(p(p(k)))); ... (where p denotes parent in T).

Let i > j. Then (i, j) is an edge of G+ iff j is an ancestor in T of some k such that (i, k) is an edge of G.

The nonzeros in row i of L form a "row subtree" of T.

The nonzeros in column j of L are some of j's ancestors in T: just the ones adjacent in G to vertices in the subtree of T rooted at j.
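The row characterization gives the classic traversal for the structure of row i of L: for each k < i with A(i, k) nonzero, walk up the tree from k, stopping at any vertex already visited. A minimal sketch (the function name, marker array, and test call are mine, not from the slides):

    function cols = rowstruct(A, parent, i)
    % Off-diagonal nonzero columns of row i of L, from A and parent = etree(A).
        n = size(A, 1);
        mark = false(1, n);
        mark(i) = true;                  % walking stops when we reach i
        cols = [];
        for k = find(A(i, 1:i-1))        % edges (i, k) of G with k < i
            while ~mark(k)               % climb the tree path from k toward i
                mark(k) = true;
                cols(end+1) = k;         %#ok<AGROW>
                k = parent(k);
            end
        end
        cols = sort(cols);
    end

    % e.g.:  A = gallery('poisson', 4);  rowstruct(A, etree(A), 9)

Each vertex is visited at most once, so the cost is proportional to the size of the row subtree.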

Symbolic factorization: computing G+(A)

T and G give the nonzero structure of L, either by rows or by columns.

Row subtrees [Davis Fig 4.4]:
    Tr[i] is the subtree of T formed by the union of the tree paths from j to i, over all edges (i, j) of G with j < i.
    Tr[i] is rooted at vertex i.
    The vertices of Tr[i] are the nonzeros of row i of L.
    For j < i, (i, j) is an edge of G+ iff j is a vertex of Tr[i].

Column unions [Davis Fig 4.10]:
    Column structures merge up the tree:
        struct(L(:, j)) = struct(A(j:n, j)) + union( struct(L(:, k)) : j = parent(k) in T )
    For i > j, (i, j) is an edge of G+ iff either (i, j) is an edge of G or (i, k) is an edge of G+ for some child k of j in T.

Running time is O(nnz(L)), which is best possible if we want the whole structure; if we only want the nonzero counts of the rows and columns of L, they can be computed faster, in nearly O(nnz(A)) time.
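Matlab exposes this computation as the built-in symbfact, which returns the factor's nonzero counts without computing it numerically. A quick check (the grid size is arbitrary):

    A = gallery('poisson', 10);      % 100 x 100 model problem
    count = symbfact(A);             % count(j) = nnz of row j of R = chol(A),
                                     % i.e. of column j of L = R'
    sum(count)                       % predicted nnz(L), diagonal included
    nnz(chol(A))                     % matches, barring exact cancellation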

Complexity measures for sparse Cholesky

Space: measured by fill, which is nnz(G+(A)):
    the number of off-diagonal nonzeros in the Cholesky factor (we need to store about n + nnz(G+(A)) real numbers);
    equivalently, the sum over vertices of G+(A) of (# of higher neighbors).

Time: measured by the number of flops (multiplications, say):
    the sum over vertices of G+(A) of (# of higher neighbors)^2.

Front size: related to the amount of "fast memory" required:
    the max over vertices of G+(A) of (# of higher neighbors).
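All three measures depend only on the column counts of L (the # of higher neighbors of j is nnz(L(:, j)) minus the diagonal), so they can be estimated before any numeric factorization. A sketch using symbfact as above:

    A = gallery('poisson', 10);
    hi = symbfact(A) - 1;       % # of higher neighbors of each vertex of G+(A)
    fill  = sum(hi)             % space: off-diagonal nonzeros in the factor
    flops = sum(hi.^2)          % time: multiplication count
    front = max(hi)             % front size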

Permutations for sparsity

"I observed that most of the coefficients in our matrices were zero; i.e., the nonzeros were 'sparse' in the matrix, and that typically the triangular matrices associated with the forward and back solution provided by Gaussian elimination would remain sparse if pivot elements were chosen with care."

    - Harry Markowitz, describing the 1950s work on portfolio theory that won the 1990 Nobel Prize for Economics

Cholesky Graph Game

Given an undirected graph G = G(A), repeat:
    choose a vertex v and mark it;
    add edges between the unmarked neighbors of v;
until every vertex is marked.

Goal: end up with as few edges as possible.

Output: a labeling of the vertices with the numbers 1 to n, corresponding to a symmetric permutation of the matrix A.

The (2-dimensional) model problem

[Diagram: a regular square grid, n^{1/2} vertices on a side.]

The graph is a regular square grid with n = k^2 vertices.
It corresponds to the matrix of a regular 2-D finite-difference mesh.
It gives good intuition for the behavior of sparse matrix algorithms on many 2-dimensional physical problems.
There is also a 3-dimensional model problem.
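Matlab builds the 2-D model problem with the built-ins numgrid and delsq (the 5-point Laplacian on the interior of a square grid):

    k = 10;
    A = delsq(numgrid('S', k + 2));   % n = k^2 = 100 interior grid points
    spy(A)                            % banded structure under the natural ordering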

Permutations of the 2-D model problem

Theorem 1: With the natural permutation, the n-vertex model problem has Θ(n^{3/2}) fill.

Theorem 2: With a nested dissection permutation, the n-vertex model problem has Θ(n log n) fill.

Theorem 3: With any permutation at all, the n-vertex model problem has Ω(n log n) fill.

See the course notes for proofs.

Nested dissection ordering

A separator in a graph G is a set S of vertices whose removal leaves at least two connected components.

A nested dissection ordering for an n-vertex graph G numbers its vertices from 1 to n as follows:
    Find a separator S whose removal leaves connected components T_1, T_2, ..., T_k.
    Number the vertices of S from n-|S|+1 to n.
    Recursively number the vertices of each component: T_1 from 1 to |T_1|, T_2 from |T_1|+1 to |T_1|+|T_2|, etc.
    If a component is small enough, number it arbitrarily.

It all boils down to finding good separators! (A recursive sketch follows.)
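The recursion is short enough to write out. In this sketch, find_separator is a hypothetical helper (any of the heuristics on the "Separators in practice" slide could play the role) that returns separator vertices of the induced subgraph A(V, V), as indices into V:

    function p = nd_order(A, V)
    % Nested dissection ordering of the subgraph of A induced by row vector V.
        if numel(V) <= 8                        % base case: number arbitrarily
            p = V;
            return
        end
        S = V(find_separator(A(V, V)));         % hypothetical separator routine
        rest = setdiff(V, S, 'stable');
        bins = conncomp(graph(A(rest, rest), 'omitselfloops'));
        p = [];
        for c = 1:max(bins)                     % number each component recursively
            p = [p, nd_order(A, rest(bins == c))];  %#ok<AGROW>
        end
        p = [p, S];                             % the separator is numbered last
    end

Called as p = nd_order(A, 1:size(A, 1)), the result is used as chol(A(p, p)).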

Separators in theory

If G is a planar graph with n vertices, there exists a set of at most sqrt(6n) vertices whose removal leaves no connected component with more than 2n/3 vertices. ("Planar graphs have sqrt(n)-separators.")

"Well-shaped" finite element meshes in 3 dimensions have n^{2/3}-separators.

Various other classes of graphs also have good separator theorems: trees, graphs of bounded genus, chordal graphs, graphs with a bounded excluded minor, ...

Mostly these theorems come with efficient algorithms, but they aren't used much in practice.

Separators in practice

Graph partitioning heuristics have been an active research area for many years, often motivated by partitioning for parallel computation. See CS 240A.

Some techniques (a spectral sketch follows this list):
    Spectral partitioning (uses eigenvectors of the Laplacian matrix of the graph)
    Geometric partitioning (for meshes with specified vertex coordinates)
    Iterative swapping (Kernighan-Lin, Fiduccia-Mattheyses)
    Breadth-first search (fast but dated)

Many popular modern codes (e.g. Metis, Chaco) use multilevel iterative swapping.

Matlab graph partitioning toolbox: see the course web page.
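As an illustration of the first technique, a minimal spectral bisection sketch: split the vertices at the median of the Fiedler vector (the eigenvector for the second-smallest Laplacian eigenvalue). There is no multilevel scheme or refinement here, so this is far cruder than Metis, and the 'smallestreal' option needs a reasonably recent Matlab:

    A = delsq(numgrid('S', 12));             % 2-D model problem, n = 100
    n = size(A, 1);
    W = spones(A) - speye(n);                % 0/1 adjacency, diagonal removed
    Lap = diag(sum(W, 2)) - W;               % graph Laplacian
    [V, ~] = eigs(Lap, 2, 'smallestreal');   % two smallest eigenpairs
    f = V(:, 2);                             % Fiedler vector
    half1 = find(f <= median(f));
    half2 = find(f >  median(f));
    % A vertex separator can be read off the endpoints of the edges this cut severs.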

Permutations of general 2-D and 3-D problems

Theorem 4: With a nested dissection permutation, any planar graph with n vertices has at most O(n log n) fill.

Theorem 5: With a nested dissection permutation, any 3-D finite element mesh (with a technical condition on element shapes) has at most O(n^{4/3}) fill.

See the course notes for references to the proofs.

Heuristic fill-reducing matrix permutations

Nested dissection: find a separator, number it last, proceed recursively.
    Theory: approximately optimal separators give approximately optimal fill and flop count.
    Practice: often wins for very large problems.

Minimum degree: eliminate the row/column with the fewest nonzeros, add the fill, repeat.
    Hard to implement efficiently; the current champion is "Approximate Minimum Degree" [Amestoy, Davis, Duff].
    Theory: can be suboptimal even on the 2-D model problem.
    Practice: often wins for medium-sized problems.

Banded orderings (Reverse Cuthill-McKee, Sloan, ...): try to keep all nonzeros close to the diagonal.
    Theory and practice: often wins for "long, thin" problems.

The best modern general-purpose orderings are ND/MD hybrids.

Fill-reducing permutations in Matlab

Symmetric approximate minimum degree:
    p = amd(A);
    Symmetric permutation: chol(A(p,p)) is often sparser than chol(A).

Symmetric nested dissection:
    Not built into Matlab; several versions are in the meshpart toolbox (see the course web page for references).

Nonsymmetric approximate minimum degree:
    p = colamd(A);
    Column permutation: lu(A(:,p)) is often sparser than lu(A); also useful for QR factorization.

Reverse Cuthill-McKee:
    p = symrcm(A);
    A(p,p) often has smaller bandwidth than A; similar to Sparspak's RCM.
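A quick comparison on the model problem (note that recent Matlab releases, postdating this deck, do ship a nested dissection ordering as dissect; the grid size here is arbitrary):

    A = delsq(numgrid('S', 52));            % 2-D model problem, n = 2500
    fprintf('natural: %d\n', nnz(chol(A)));
    p = symrcm(A);  fprintf('RCM: %d\n', nnz(chol(A(p,p))));
    p = amd(A);     fprintf('AMD: %d\n', nnz(chol(A(p,p))));
    p = dissect(A); fprintf('ND:  %d\n', nnz(chol(A(p,p))));  % recent releases only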

Complexity of direct methods

Time and space to solve any problem on any well-shaped finite element mesh:

                     2-D (mesh width n^{1/2})    3-D (mesh width n^{1/3})
    Space (fill):    O(n log n)                  O(n^{4/3})
    Time (flops):    O(n^{3/2})                  O(n^2)

Sparse Cholesky factorization to solve Ax = b

1. Preorder: replace A by PAP^T and b by Pb.
    Independent of the numerics.
2. Symbolic factorization: build a static data structure.
    Elimination tree; nonzero counts; supernodes; nonzero structure of L.
3. Numeric factorization: A = LL^T.
    Static data structure; supernodes use BLAS3 to reduce memory traffic.
4. Triangular solves: solve Ly = b, then L^T x = y.
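In Matlab the whole pipeline collapses to a few lines, with steps 2 and 3 fused inside chol. A minimal end-to-end sketch (the matrix and right-hand side are arbitrary):

    A = delsq(numgrid('S', 22));   % SPD test matrix, n = 400
    b = ones(size(A, 1), 1);       % arbitrary right-hand side
    p = amd(A);                    % step 1: preorder for sparsity
    R = chol(A(p, p));             % steps 2-3: A(p,p) = R'*R, so L = R'
    y = R' \ b(p);                 % step 4: forward solve  L y = P b
    x(p, 1) = R \ y;               % step 4: back solve     L^T (P x) = y
    norm(A*x - b)                  % residual should be tiny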