The Landscape of Ax=b Solvers

The Landscape of Ax=b Solvers

The solvers form a two-dimensional landscape:
- Direct (A = LU) vs. iterative (y' = Ay): direct methods are more robust; iterative methods need less storage (if the matrix is sparse).
- Symmetric positive definite vs. nonsymmetric: handling nonsymmetric matrices is more general.
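A minimal SciPy sketch (not part of the slides) contrasting the two branches of the landscape on a small symmetric positive definite model problem. The helper poisson2d and all other names here are our own assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson2d(k):
    """5-point finite-difference Laplacian on a k-by-k grid (sparse, SPD)."""
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(k, k))
    I = sp.identity(k)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()

A = poisson2d(50)                      # n = 2500 unknowns
b = np.ones(A.shape[0])

# Direct branch: sparse LU factorization (robust, but fill-in costs storage).
x_direct = spla.spsolve(A.tocsc(), b)

# Iterative branch: conjugate gradient (only needs matrix-vector products).
x_cg, info = spla.cg(A, b)

print("CG info:", info, " difference:", np.linalg.norm(x_direct - x_cg))
```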

Conjugate gradient iteration

    x_0 = 0,  r_0 = b,  p_0 = r_0
    for k = 1, 2, 3, ...
        α_k = (r_{k-1}^T r_{k-1}) / (p_{k-1}^T A p_{k-1})    [step length]
        x_k = x_{k-1} + α_k p_{k-1}                           [approximate solution]
        r_k = r_{k-1} − α_k A p_{k-1}                         [residual]
        β_k = (r_k^T r_k) / (r_{k-1}^T r_{k-1})               [improvement this step]
        p_k = r_k + β_k p_{k-1}                               [search direction]

- One matrix-vector multiplication per iteration.
- Two vector dot products per iteration.
- Four n-vectors of working storage.
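A direct transcription of the iteration above into NumPy; a sketch for illustration under our own naming, not a production solver (no preconditioning, simple stopping test).

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, maxiter=None):
    n = len(b)
    maxiter = maxiter or n
    x = np.zeros(n)                  # x_0 = 0
    r = b.copy()                     # r_0 = b
    p = r.copy()                     # p_0 = r_0
    rr = r @ r
    for k in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rr / (p @ Ap)        # step length
        x = x + alpha * p            # approximate solution
        r = r - alpha * Ap           # residual
        rr_new = r @ r
        beta = rr_new / rr           # improvement this step
        p = r + beta * p             # next search direction
        rr = rr_new
        if np.sqrt(rr) <= tol * np.linalg.norm(b):
            break
    return x

# Example on a random SPD system.
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100 * np.eye(100)      # well-conditioned SPD matrix
b = rng.standard_normal(100)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))
```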

Conjugate gradient: Krylov subspaces

- Eigenvalues: Au = λu, with eigenvalues {λ_1, λ_2, ..., λ_n}.
- Cayley-Hamilton theorem: (A − λ_1 I)·(A − λ_2 I) ··· (A − λ_n I) = 0.
  Therefore Σ_{0 ≤ i ≤ n} c_i A^i = 0 for some coefficients c_i, so A^{-1} = Σ_{1 ≤ i ≤ n} (−c_i / c_0) A^{i−1}.
- Krylov subspace: therefore, if Ax = b, then x = A^{-1} b, and so
  x ∈ span(b, Ab, A^2 b, ..., A^{n−1} b) = K_n(A, b).
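A small numerical illustration (our own, not on the slide) that the solution x = A^{-1} b does lie in the Krylov space K_n(A, b); the basis is orthonormalized with QR to keep the check well conditioned.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD, hence nonsingular
b = rng.standard_normal(n)

# Krylov matrix K = [b, Ab, ..., A^{n-1} b], orthonormalized by QR.
K = np.column_stack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
Q, _ = np.linalg.qr(K)

x = np.linalg.solve(A, b)
# Distance from x to span(K): ||x - Q Q^T x|| should be ~ 0.
print(np.linalg.norm(x - Q @ (Q.T @ x)))
```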

Conjugate gradient: Orthogonal sequences

- Krylov subspace: K_i(A, b) = span(b, Ab, A^2 b, ..., A^{i−1} b).
- Conjugate gradient algorithm: for i = 1, 2, 3, ..., find x_i ∈ K_i(A, b) such that r_i = (A x_i − b) ⊥ K_i(A, b).
- Notice r_i ∈ K_{i+1}(A, b), so r_i ⊥ r_j for all j < i.
- Similarly, the "directions" are A-orthogonal: (x_i − x_{i−1})^T · A · (x_j − x_{j−1}) = 0.
- The magic: short recurrences. Because A is symmetric, the next residual and direction can be computed from the previous ones, without saving them all.

Conjugate gradient: Convergence

- In exact arithmetic, CG converges in n steps (completely unrealistic!).
- Accuracy after k steps of CG is related to the following question: among polynomials of degree k that equal 1 at 0, how small can such a polynomial be at all the eigenvalues of A? Thus, eigenvalues that are close together are good.
- Condition number: κ(A) = ||A||_2 ||A^{-1}||_2 = λ_max(A) / λ_min(A).
- The residual is reduced by a constant factor by O(κ^{1/2}(A)) iterations of CG.
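The O(κ^{1/2}) claim is usually made precise by the classical CG error bound, a standard result not spelled out on the slide:

```latex
\| x - x_k \|_A \;\le\; 2 \left( \frac{\sqrt{\kappa(A)} - 1}{\sqrt{\kappa(A)} + 1} \right)^{k} \| x - x_0 \|_A
```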

(Matlab demo) CG on grid5(15) and bcsstk08; n steps of CG on bcsstk08.

Conjugate gradient: Parallel implementation

- Lay out the matrix and vectors by rows.
- The hard part is the matrix-vector product y = A*x.
- Algorithm: each processor j broadcasts x(j), then computes y(j) = A(j,:)*x.
- This may send more of x than needed; partition / reorder the matrix to reduce communication.
[Figure: A, x, and y laid out by rows across processors P0-P3.]
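A hedged sketch of the row-wise parallel matvec described above, assuming mpi4py, equal-sized row blocks, and a dense local block generated on each rank purely for illustration. Allgather plays the role of "each processor broadcasts its x(j)"; all names are our own.

```python
# Run with: mpiexec -n 4 python parallel_matvec.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 8 * size                    # global problem size, split into equal row blocks
rows = n // size                # rows owned by this processor

rng = np.random.default_rng(rank)
A_local = rng.standard_normal((rows, n))   # A(j,:) -- this rank's block of rows
x_local = rng.standard_normal(rows)        # x(j)   -- this rank's piece of x

# 1. Every rank shares its piece of x with everyone (may send more than needed).
x_full = np.empty(n)
comm.Allgather(x_local, x_full)

# 2. Purely local work: y(j) = A(j,:) * x.
y_local = A_local @ x_full
print(f"rank {rank}: computed {rows} entries of y")
```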

(Matlab demo) 2-way partition of eppstein mesh; 8-way dice of eppstein mesh.

Preconditioners

Suppose you had a matrix B such that:
1. the condition number κ(B^{-1}A) is small, and
2. By = z is easy to solve.
Then you could solve (B^{-1}A)x = B^{-1}b instead of Ax = b.
- B = A is great for (1), not for (2).
- B = I is great for (2), not for (1).
- Domain-specific approximations sometimes work.
- B = diagonal of A sometimes works (see the sketch below).
- Or, bring back the direct-methods technology...
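A sketch of the "B = diagonal of A" choice with SciPy's CG. SciPy's M argument is the operator that applies B^{-1}, i.e. it should solve By = z; the poisson2d helper is the same hypothetical one used earlier.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson2d(k):
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(k, k))
    I = sp.identity(k)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()

A = poisson2d(60)
b = np.ones(A.shape[0])
d = A.diagonal()                                             # B = diag(A)
M = spla.LinearOperator(A.shape, matvec=lambda z: z / d)     # applies B^{-1}

iters = {"plain": 0, "jacobi": 0}
def count(key):
    def cb(xk):
        iters[key] += 1
    return cb

spla.cg(A, b, callback=count("plain"))
spla.cg(A, b, M=M, callback=count("jacobi"))
print(iters)   # for this Poisson model problem the counts are similar, because
               # diag(A) is just a constant multiple of the identity here
```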

(Matlab demo) bcsstk08 with diagonal precond

Incomplete Cholesky factorization (IC, ILU)

- Compute the factors of A by Gaussian elimination, but ignore fill.
- Preconditioner B = R^T R ≈ A, not formed explicitly.
- Compute B^{-1}z by triangular solves (in time nnz(A)).
- Total storage is O(nnz(A)), with a static data structure.
- Either symmetric (IC) or nonsymmetric (ILU).
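A hedged sketch of an incomplete factorization used as a preconditioner. SciPy ships a drop-tolerance ILU (spilu) but no incomplete Cholesky, so this pairs ILU with GMRES; the poisson2d helper and all parameter values are our own assumptions, and a recent SciPy is assumed for callback_type.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson2d(k):
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(k, k))
    I = sp.identity(k)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsc()          # spilu wants CSC

A = poisson2d(30)
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-3, fill_factor=10)          # approximate factors; fill is dropped
M = spla.LinearOperator(A.shape, matvec=ilu.solve)          # applies B^{-1} by two triangular solves

it = {"plain": 0, "ilu": 0}
spla.gmres(A, b, callback=lambda r: it.__setitem__("plain", it["plain"] + 1),
           callback_type="pr_norm")
spla.gmres(A, b, M=M, callback=lambda r: it.__setitem__("ilu", it["ilu"] + 1),
           callback_type="pr_norm")
print(it)   # the ILU-preconditioned run needs far fewer iterations
```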

(Matlab demo) bcsstk08 with ic precond

Incomplete Cholesky and ILU: Variants

- Allow one or more "levels of fill"
  - unpredictable storage requirements
- Allow fill whose magnitude exceeds a "drop tolerance"
  - may give better approximate factors than levels of fill
  - unpredictable storage requirements
  - choice of tolerance is ad hoc
- Partial pivoting (for nonsymmetric A)
- "Modified ILU" (MIC): add dropped fill to the diagonal of U or R
  - A and R^T R have the same row sums
  - good in some PDE contexts

Incomplete Cholesky and ILU: Issues

- Choice of parameters
  - Good: smooth transition from iterative to direct methods.
  - Bad: very ad hoc, problem-dependent.
  - Tradeoff: time per iteration (more fill => more time) vs. number of iterations (more fill => fewer iterations).
- Effectiveness
  - The condition number usually improves (only) by a constant factor (except MIC for some problems from PDEs).
  - Still, often good when tuned for a particular class of problems.
- Parallelism
  - Triangular solves are not very parallel.
  - Reorder for parallel triangular solves by graph coloring.

(Matlab demo) 2-coloring of grid5(15)

Sparse approximate inverses

- Compute B^{-1} ≈ A^{-1} explicitly.
- Minimize ||B^{-1}A − I||_F (in parallel, by columns).
- Variants: factored form of B^{-1}, more fill, ...
- Good: very parallel.
- Bad: effectiveness varies widely.
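A sketch of a sparse approximate inverse with a fixed sparsity pattern (the pattern of A itself), minimizing ||A M − I||_F one column at a time by small dense least-squares problems. This is the standard right-approximate-inverse variant; the slide's ||B^{-1}A − I||_F form is the analogous left version. All names are our own, and a nonzero diagonal is assumed.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def spai_fixed_pattern(A):
    """Approximate inverse M with the sparsity pattern of A (assumes nonzero diagonal)."""
    A = A.tocsc()
    n = A.shape[0]
    cols = []
    for j in range(n):
        J = A[:, j].nonzero()[0]                  # allowed nonzeros in column j of M
        I = np.unique(A[:, J].nonzero()[0])       # rows touched by those columns
        AIJ = A[I, :][:, J].toarray()             # small dense least-squares problem
        e = np.zeros(len(I))
        e[np.searchsorted(I, j)] = 1.0            # restriction of e_j to rows I
        m, *_ = np.linalg.lstsq(AIJ, e, rcond=None)
        cols.append(sp.csc_matrix((m, (J, np.zeros_like(J))), shape=(n, 1)))
    return sp.hstack(cols).tocsc()

# Tiny demo on a small SPD tridiagonal matrix.
T = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(30, 30), format="csc")
M = spai_fixed_pattern(T)
print("||A M - I||_F =", spla.norm(T @ M - sp.identity(30)))
```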

Support graph preconditioners: example [Vaidya]

- A is symmetric positive definite with negative off-diagonal nonzeros.
- B is a maximum-weight spanning tree for A (with the diagonal modified to preserve row sums).
- Factor B in O(n) space and O(n) time.
- Applying the preconditioner costs O(n) time per iteration.
[Figure: G(A) and G(B).]

Support graph preconditioners: example

- Support each edge of A by a path in B.
- dilation(A edge) = length of the supporting path in B.
- congestion(B edge) = number of supported A edges.
- p = max congestion, q = max dilation.
- The condition number κ(B^{-1}A) is bounded by p·q (at most O(n^2)).
[Figure: G(A) and G(B).]

Support graph preconditioners: example

- Congestion and dilation can be improved by adding a few strategically chosen edges to B.
- The cost of factor + solve is then O(n^1.75), or O(n^1.2) if A is planar.
- In recent experiments [Chen & Toledo], often better than drop-tolerance MIC for 2D problems, but not for 3D.
[Figure: G(A) and G(B).]
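A hedged sketch of the basic Vaidya construction (bare tree, no extra edges, so gains are modest): keep a maximum-weight spanning tree of A's graph, adjust the diagonal to preserve row sums, factor the tree matrix B (a tree factors with no fill), and apply B^{-1} inside CG. All helper names, the shifted-weight trick for the maximum spanning tree, and the poisson2d test matrix are our own assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.sparse.csgraph import minimum_spanning_tree

def max_spanning_tree_indicator(W):
    """W: upper-triangular positive edge weights. Returns 0/1 pattern of a
    maximum-weight spanning tree (MST on shifted weights C - w)."""
    C = W.data.max() + 1.0
    shifted = W.copy()
    shifted.data = C - shifted.data
    tree = minimum_spanning_tree(shifted)
    tree.data[:] = 1.0
    return tree

def vaidya_preconditioner(A):
    """A: SPD with non-positive off-diagonals (e.g. a discrete Laplacian), CSR."""
    W = (-sp.triu(A, k=1)).tocsr()             # positive weights |a_ij|, upper triangle
    W_tree = W.multiply(max_spanning_tree_indicator(W))   # A's weights on tree edges only
    B_off = -(W_tree + W_tree.T)               # keep A's negative off-diagonals on the tree
    rowsum_A = np.asarray(A.sum(axis=1)).ravel()
    rowsum_off = np.asarray(B_off.sum(axis=1)).ravel()
    B = B_off + sp.diags(rowsum_A - rowsum_off)   # diagonal chosen to preserve row sums
    lu = spla.splu(B.tocsc())                  # factoring a tree causes no fill
    return spla.LinearOperator(A.shape, matvec=lu.solve)

def poisson2d(k):
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(k, k))
    I = sp.identity(k)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()

A = poisson2d(40)
b = np.ones(A.shape[0])
M = vaidya_preconditioner(A)
it = {"plain": 0, "tree": 0}
spla.cg(A, b, callback=lambda xk: it.__setitem__("plain", it["plain"] + 1))
spla.cg(A, b, M=M, callback=lambda xk: it.__setitem__("tree", it["tree"] + 1))
print(it)
```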

Domain decomposition (introduction)

- Partition the problem (e.g. the mesh) into subdomains.
- Use solvers for the subdomains, B^{-1} and C^{-1}, to precondition an iterative solver on the interface.
- The interface system is the Schur complement: S = G − E^T B^{-1} E − F^T C^{-1} F.
- Parallelizes naturally by subdomains.

          [ B    0    E ]
      A = [ 0    C    F ]
          [ E^T  F^T  G ]
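A small dense illustration (our own) of the Schur complement on the block system above: eliminating the two subdomain blocks B and C leaves exactly the interface operator S.

```python
import numpy as np

rng = np.random.default_rng(0)
nb, nc, ng = 5, 4, 3                       # subdomain 1, subdomain 2, interface sizes

def spd(n):                                 # random SPD block
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

B, C, G = spd(nb), spd(nc), spd(ng)
E = rng.standard_normal((nb, ng))           # coupling: subdomain 1 <-> interface
F = rng.standard_normal((nc, ng))           # coupling: subdomain 2 <-> interface

A = np.block([[B,                  np.zeros((nb, nc)), E],
              [np.zeros((nc, nb)), C,                  F],
              [E.T,                F.T,                G]])

# Interface system: S = G - E^T B^{-1} E - F^T C^{-1} F
S = G - E.T @ np.linalg.solve(B, E) - F.T @ np.linalg.solve(C, F)

# Check against eliminating the first two blocks of A directly.
k = nb + nc
S_direct = A[k:, k:] - A[k:, :k] @ np.linalg.solve(A[:k, :k], A[:k, k:])
print(np.allclose(S, S_direct))
```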

(Matlab demo) grid and matrix structure for overlapping 2-way partition of eppstein

Multigrid (introduction)

- For a PDE on a fine mesh, precondition using a solution on a coarser mesh.
- Use the idea recursively on a hierarchy of meshes.
- Solves the model problem (Poisson's equation) in linear time!
- Often useful when a hierarchy of meshes can be built.
- Hard to parallelize the coarse meshes well.
- This is just the intuition; there is lots of theory and technology behind it.
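A minimal two-grid sketch (our own, under stated assumptions) of the idea above for the 1D Poisson model problem: smooth on the fine grid, solve the residual equation on a coarser grid, interpolate the correction back, and smooth again. Real multigrid applies this recursively down a hierarchy of meshes.

```python
import numpy as np

def poisson1d(n):
    """1D Laplacian on n interior points of (0,1), h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def jacobi(A, x, b, iters, omega=2/3):
    d = np.diag(A)
    for _ in range(iters):
        x = x + omega * (b - A @ x) / d        # weighted Jacobi smoothing
    return x

def restrict(r):                                # full weighting: fine -> coarse
    return 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(e, n_fine):                         # linear interpolation: coarse -> fine
    ef = np.zeros(n_fine)
    ef[1::2] = e                                # coarse points sit at odd fine indices
    ef[0:-2:2] += 0.5 * e
    ef[2::2] += 0.5 * e
    return ef

def two_grid(A, b, x, Ac):
    x = jacobi(A, x, b, 3)                      # pre-smoothing
    r = b - A @ x
    ec = np.linalg.solve(Ac, restrict(r))       # coarse-grid correction (exact solve here)
    x = x + prolong(ec, len(b))
    return jacobi(A, x, b, 3)                   # post-smoothing

n = 2**7 - 1                                    # fine grid; coarse grid has (n-1)/2 points
A, Ac = poisson1d(n), poisson1d((n - 1) // 2)
b = np.ones(n)
x = np.zeros(n)
for cycle in range(10):
    x = two_grid(A, b, x, Ac)
    print(cycle, np.linalg.norm(b - A @ x))     # residual drops by a large factor per cycle
```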

Other Krylov subspace methods

- Nonsymmetric linear systems:
  - GMRES: for i = 1, 2, 3, ..., find x_i ∈ K_i(A, b) such that r_i = (A x_i − b) ⊥ K_i(A, b).
    But there is no short recurrence, so the old vectors must be saved: lots more space.
  - BiCGStab, QMR, etc.: two spaces K_i(A, b) and K_i(A^T, b) with mutually orthogonal bases.
    Short recurrences => O(n) space, but less robust.
  - Convergence and preconditioning are more delicate than for CG.
  - Active area of current research.
- Eigenvalues: Lanczos (symmetric), Arnoldi (nonsymmetric).
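A small sketch (our own) of the nonsymmetric case: CG no longer applies, so use GMRES or BiCGStab from SciPy on a simple nonsymmetric, diagonally dominant tridiagonal matrix.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 400
# Nonsymmetric tridiagonal matrix (diffusion plus a one-sided convection-like term).
A = sp.diags([-1.3, 2.0, -0.7], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x_gmres, info_g = spla.gmres(A, b)        # full orthogonalization; stores the Krylov basis
x_bicg, info_b = spla.bicgstab(A, b)      # short recurrences; O(n) extra storage
print(info_g, info_b,
      np.linalg.norm(A @ x_gmres - b),
      np.linalg.norm(A @ x_bicg - b))
```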

The Landscape of Sparse Ax=b Solvers

- Direct (A = LU) vs. iterative (y' = Ay): direct methods are more robust; iterative methods need less storage.
- Symmetric positive definite vs. nonsymmetric: handling nonsymmetric matrices is more general.

Complexity of direct methods

Time and space to solve any problem on any well-shaped finite element mesh:

                      2D (mesh side n^{1/2})   3D (mesh side n^{1/3})
    Space (fill):     O(n log n)               O(n^{4/3})
    Time (flops):     O(n^{3/2})               O(n^2)

Complexity of linear solvers

Time to solve the model problem (Poisson's equation) on a regular mesh (side n^{1/2} in 2D, n^{1/3} in 3D):

                              2D            3D
    Sparse Cholesky:          O(n^1.5)      O(n^2)
    CG, exact arithmetic:     O(n^2)        O(n^2)
    CG, no precond:           O(n^1.5)      O(n^1.33)
    CG, modified IC:          O(n^1.25)     O(n^1.17)
    CG, support trees:        O(n^1.20)     O(n^1.75)
    Multigrid:                O(n)          O(n)