CS240A: Conjugate Gradients and the Model Problem

Model Problem: Solving Poisson’s equation for temperature

- For each i from 1 to n, except on the boundaries (the mesh is k-by-k, so k = n^(1/2)):
      – t(i-k) – t(i-1) + 4*t(i) – t(i+1) – t(i+k) = 0
- n equations in n unknowns: A*t = b
- Each row of A has at most 5 nonzeros
- In three dimensions, k = n^(1/3) and each row has at most 7 nonzeros
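
A minimal sketch (not from the slides) of assembling this 2D model-problem matrix with SciPy; the Kronecker-product construction and the variable names are my own choices, assumed here as the standard way to build the 5-point Laplacian.

    import numpy as np
    import scipy.sparse as sp

    k = 4                                   # mesh is k-by-k, so n = k^2 unknowns
    n = k * k

    # 1D second-difference matrix: tridiagonal [-1, 2, -1]
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(k, k), format="csr")
    I = sp.identity(k, format="csr")

    # 2D 5-point Laplacian: A = kron(I, T) + kron(T, I); diagonal entries are 4
    A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()

    # Each row couples a point to at most 4 mesh neighbors: at most 5 nonzeros per row
    print(max(np.diff(A.indptr)))           # prints 5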

The Landscape of Ax=b Solvers

[2-by-2 chart: rows are Direct (A = LU) and Iterative (y’ = Ay); columns are Nonsymmetric and Symmetric positive definite. Direct methods are more robust; iterative methods need less storage (if sparse); the symmetric positive definite side is more robust, the nonsymmetric side more general.]

Complexity of linear solvers

Time to solve the model problem (Poisson’s equation) on a regular mesh
(2D: mesh is n^(1/2)-by-n^(1/2); 3D: mesh is n^(1/3)-by-n^(1/3)-by-n^(1/3))

                              2D                         3D
    Sparse Cholesky:          O(n^1.5)                   O(n^2)
    CG, exact arithmetic:     O(n^2)                     O(n^2)
    CG, no precond:           O(n^1.5)                   O(n^1.33)
    CG, modified IC:          O(n^1.25)                  O(n^1.17)
    CG, support trees:        O(n^1.20) -> O(n^(1+ε))    O(n^1.75) -> O(n^(1+ε))
    Multigrid:                O(n)                       O(n)

CS 240A: Solving Ax = b in parallel

- Dense A: Gaussian elimination with partial pivoting (LU)
      See Jim Demmel’s slides
      Same flavor as matrix * matrix, but more complicated
- Sparse A: Iterative methods – Conjugate gradient, etc.
      Sparse matrix times dense vector
- Sparse A: Gaussian elimination – Cholesky, LU, etc.
      Graph algorithms
- Sparse A: Preconditioned iterative methods and multigrid
      Mixture of lots of things

Conjugate gradient iteration for Ax = b

    x_0 = 0           approx solution
    r_0 = b           residual = b - Ax
    d_0 = r_0         search direction
    for k = 1, 2, 3, ...
        x_k = x_{k-1} + …              new approx solution
        r_k = …                        new residual
        d_k = …                        new search direction

Conjugate gradient iteration for Ax = b

    x_0 = 0           approx solution
    r_0 = b           residual = b - Ax
    d_0 = r_0         search direction
    for k = 1, 2, 3, ...
        α_k = …                        step length
        x_k = x_{k-1} + α_k d_{k-1}    new approx solution
        r_k = …                        new residual
        d_k = …                        new search direction

Conjugate gradient iteration for Ax = b

    x_0 = 0           approx solution
    r_0 = b           residual = b - Ax
    d_0 = r_0         search direction
    for k = 1, 2, 3, ...
        α_k = (r_{k-1}^T r_{k-1}) / (d_{k-1}^T A d_{k-1})    step length
        x_k = x_{k-1} + α_k d_{k-1}    new approx solution
        r_k = …                        new residual
        d_k = …                        new search direction

Conjugate gradient iteration for Ax = b

    x_0 = 0           approx solution
    r_0 = b           residual = b - Ax
    d_0 = r_0         search direction
    for k = 1, 2, 3, ...
        α_k = (r_{k-1}^T r_{k-1}) / (d_{k-1}^T A d_{k-1})    step length
        x_k = x_{k-1} + α_k d_{k-1}    new approx solution
        r_k = …                        new residual
        β_k = (r_k^T r_k) / (r_{k-1}^T r_{k-1})
        d_k = r_k + β_k d_{k-1}        new search direction

Conjugate gradient iteration for Ax = b

    x_0 = 0           approx solution
    r_0 = b           residual = b - Ax
    d_0 = r_0         search direction
    for k = 1, 2, 3, ...
        α_k = (r_{k-1}^T r_{k-1}) / (d_{k-1}^T A d_{k-1})    step length
        x_k = x_{k-1} + α_k d_{k-1}    new approx solution
        r_k = r_{k-1} – α_k A d_{k-1}  new residual
        β_k = (r_k^T r_k) / (r_{k-1}^T r_{k-1})
        d_k = r_k + β_k d_{k-1}        new search direction

Conjugate gradient iteration

    x_0 = 0,  r_0 = b,  d_0 = r_0
    for k = 1, 2, 3, ...
        α_k = (r_{k-1}^T r_{k-1}) / (d_{k-1}^T A d_{k-1})    step length
        x_k = x_{k-1} + α_k d_{k-1}                          approx solution
        r_k = r_{k-1} – α_k A d_{k-1}                        residual
        β_k = (r_k^T r_k) / (r_{k-1}^T r_{k-1})              improvement
        d_k = r_k + β_k d_{k-1}                              search direction

- One matrix-vector multiplication per iteration
- Two vector dot products per iteration
- Four n-vectors of working storage
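
A direct NumPy transcription of this iteration may help; this is a sketch rather than the course’s reference code, and the names (cg, tol, maxiter) are my own.

    import numpy as np

    def cg(A, b, tol=1e-8, maxiter=1000):
        """Conjugate gradient for Ax = b, A symmetric positive definite.

        A can be anything that supports A @ v (dense, sparse, or an operator)."""
        x = np.zeros_like(b, dtype=float)   # x_0 = 0
        r = b.astype(float)                 # r_0 = b
        d = r.copy()                        # d_0 = r_0
        rr = r @ r
        bnorm = np.linalg.norm(b)
        for k in range(maxiter):
            Ad = A @ d                      # the one matvec per iteration
            alpha = rr / (d @ Ad)           # step length (first dot product)
            x = x + alpha * d               # new approximate solution
            r = r - alpha * Ad              # new residual
            rr_new = r @ r                  # second dot product
            if np.sqrt(rr_new) <= tol * bnorm:
                break
            beta = rr_new / rr              # improvement this step
            d = r + beta * d                # new search direction
            rr = rr_new
        # x, r, d, Ad are the four n-vectors of working storage
        return x

    # Tiny test on the 1D Poisson matrix (tridiagonal [-1, 2, -1])
    n = 10
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x = cg(A, b)
    print(np.allclose(A @ x, b))            # True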

Conjugate gradient: Krylov subspaces

- Eigenvalues: Av = λv, with eigenvalues { λ_1, λ_2, ..., λ_n }
- Cayley-Hamilton theorem:
      (A – λ_1 I)·(A – λ_2 I) ··· (A – λ_n I) = 0
  Therefore Σ_{0 ≤ i ≤ n} c_i A^i = 0 for some coefficients c_i,
  so A^{-1} = Σ_{1 ≤ i ≤ n} (–c_i / c_0) A^{i-1}
- Krylov subspace: Therefore if Ax = b, then x = A^{-1} b and
      x ∈ span(b, Ab, A^2 b, ..., A^{n-1} b) = K_n(A, b)
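
A small NumPy check of this argument (my own illustration, not from the slides): using the characteristic-polynomial coefficients from np.poly, the solution A^{-1} b is expressed as a combination of b, Ab, ..., A^{n-1} b.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)        # symmetric positive definite, so invertible
    b = rng.standard_normal(n)

    c = np.poly(A)                     # characteristic polynomial coefficients, c[0] = 1
    # Cayley-Hamilton: c[0] A^n + c[1] A^(n-1) + ... + c[n] I = 0
    # Multiply by A^(-1):  A^(-1) = -(1/c[n]) * (c[0] A^(n-1) + ... + c[n-1] I)
    x = np.zeros(n)
    for i in range(n):
        x += c[i] * (np.linalg.matrix_power(A, n - 1 - i) @ b)
    x /= -c[n]

    # x agrees with the direct solve, and lies in span(b, Ab, ..., A^(n-1) b)
    print(np.allclose(x, np.linalg.solve(A, b)))   # True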

Conjugate gradient: Orthogonal sequences

- Krylov subspace:  K_i(A, b) = span(b, Ab, A^2 b, ..., A^{i-1} b)
- Conjugate gradient algorithm:
      for i = 1, 2, 3, ...
          find x_i ∈ K_i(A, b) such that r_i = (b – A x_i) ⊥ K_i(A, b)
- Notice r_i ∈ K_{i+1}(A, b), so r_i ⊥ r_j for all j < i
- Similarly, the “directions” are A-orthogonal:
      (x_i – x_{i-1})^T · A · (x_j – x_{j-1}) = 0
- The magic: Short recurrences...
  A is symmetric => can get next residual and direction from the previous one, without saving them all

Conjugate gradient: Convergence

- In exact arithmetic, CG converges in n steps (completely unrealistic!!)
- Accuracy after k steps of CG is related to:
      consider polynomials of degree k that are equal to 1 at 0;
      how small can such a polynomial be at all the eigenvalues of A?
- Thus, eigenvalues close together are good
- Condition number:  κ(A) = ||A||_2 ||A^{-1}||_2 = λ_max(A) / λ_min(A)
- Residual is reduced by a constant factor by O(κ^{1/2}(A)) iterations of CG
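
To make this concrete, here is a small check (my own sketch, using the standard closed-form eigenvalues of the 5-point Laplacian, which the slides do not state) of how κ and the predicted O(κ^{1/2}) iteration count grow with the mesh.

    import numpy as np

    # Eigenvalues of the k-by-k 2D model problem (5-point Laplacian, Dirichlet boundaries):
    #   λ_{ij} = (2 - 2 cos(iπ/(k+1))) + (2 - 2 cos(jπ/(k+1))),   i, j = 1..k
    for k in (16, 32, 64, 128):
        lam_1d = 2 - 2 * np.cos(np.arange(1, k + 1) * np.pi / (k + 1))
        lam_min, lam_max = 2 * lam_1d[0], 2 * lam_1d[-1]
        kappa = lam_max / lam_min
        n = k * k
        print(f"n = {n:6d}   kappa ≈ {kappa:9.1f}   sqrt(kappa) ≈ {np.sqrt(kappa):6.1f}")
    # kappa grows like O(n), so unpreconditioned CG needs O(n^(1/2)) iterations,
    # each costing O(n) work: the O(n^1.5) entry in the complexity table above.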

Other Krylov subspace methods

- Nonsymmetric linear systems:
    - GMRES:
          for i = 1, 2, 3, ...  find x_i ∈ K_i(A, b) such that r_i = (A x_i – b) ⊥ K_i(A, b)
      But, no short recurrence => save old vectors => lots more space
      (Usually “restarted” every k iterations to use less space.)
    - BiCGStab, QMR, etc.:
      Two spaces K_i(A, b) and K_i(A^T, b) w/ mutually orthogonal bases
      Short recurrences => O(n) space, but less robust
    - Convergence and preconditioning more delicate than CG
    - Active area of current research
- Eigenvalues: Lanczos (symmetric), Arnoldi (nonsymmetric)
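
For completeness, a minimal sketch (my own, not course code) of calling a library implementation of restarted GMRES on a nonsymmetric system; SciPy’s scipy.sparse.linalg.gmres is assumed to be available.

    import numpy as np
    from scipy.sparse.linalg import gmres

    rng = np.random.default_rng(1)
    n = 200
    # Nonsymmetric but well-conditioned test matrix: identity plus a small random perturbation
    A = np.eye(n) + 0.2 * rng.standard_normal((n, n)) / np.sqrt(n)
    b = rng.standard_normal(n)

    x, info = gmres(A, b, restart=20)   # GMRES(20): restart every 20 iterations to bound storage
    print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # info == 0 means converged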

Conjugate gradient iteration

    x_0 = 0,  r_0 = b,  d_0 = r_0
    for k = 1, 2, 3, ...
        α_k = (r_{k-1}^T r_{k-1}) / (d_{k-1}^T A d_{k-1})    step length
        x_k = x_{k-1} + α_k d_{k-1}                          approx solution
        r_k = r_{k-1} – α_k A d_{k-1}                        residual
        β_k = (r_k^T r_k) / (r_{k-1}^T r_{k-1})              improvement
        d_k = r_k + β_k d_{k-1}                              search direction

- One matrix-vector multiplication per iteration
- Two vector dot products per iteration
- Four n-vectors of working storage

Conjugate gradient primitives

- DAXPY:  v = α*v + β*w   (vectors v, w; scalars α, β)
      Almost embarrassingly parallel
- DDOT:   α = v^T*w       (vectors v, w; scalar α)
      Global sum reduction; span = log n
- Matvec: v = A*w         (matrix A; vectors v, w)
      The hard part
      But all you need is a subroutine to compute v from w
      Sometimes (e.g. the model problem) you don’t even need to store A!
      (See the matrix-free sketch below.)
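
A hedged sketch of that last point for the 2D model problem: the 5-point stencil applied directly to a vector, never forming A. The function and class names, and the reuse of the cg routine sketched earlier, are my own choices rather than the course’s code.

    import numpy as np

    def model_matvec(w, k):
        """Apply the k^2-by-k^2 model-problem matrix (5-point stencil) to w without storing A."""
        t = w.reshape(k, k)
        v = 4.0 * t
        v[1:, :]  -= t[:-1, :]    # -1 coupling to the neighbor above
        v[:-1, :] -= t[1:, :]     # -1 coupling to the neighbor below
        v[:, 1:]  -= t[:, :-1]    # -1 coupling to the neighbor on the left
        v[:, :-1] -= t[:, 1:]     # -1 coupling to the neighbor on the right
        return v.reshape(k * k)

    class StencilOperator:
        """Adapter so the matrix-free matvec can be used anywhere an 'A @ v' is expected."""
        def __init__(self, k):
            self.k = k
        def __matmul__(self, w):
            return model_matvec(w, self.k)

    k = 64
    A = StencilOperator(k)
    b = np.ones(k * k)
    print((A @ b)[:3])   # interior entries are 0 for the all-ones vector; boundary entries are not
    # x = cg(A, b)       # the cg() sketch from the earlier slide works unchanged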

Model Problem: Solving Poisson’s equation for temperature

- For each i from 1 to n, except on the boundaries (the mesh is k-by-k, so k = n^(1/2)):
      – t(i-k) – t(i-1) + 4*t(i) – t(i+1) – t(i+k) = 0
- n equations in n unknowns: A*t = b
- Each row of A has at most 5 nonzeros
- In three dimensions, k = n^(1/3) and each row has at most 7 nonzeros

Model Problem: Solving Poisson’s equation in three dimensions

- For each i from 1 to n, except on the boundaries (the mesh is k-by-k-by-k, so k = n^(1/3)):
      – x(i-k^2) – x(i-k) – x(i-1) + 6*x(i) – x(i+1) – x(i+k) – x(i+k^2) = 0
- n equations in n unknowns: A*x = b
- Each row of A has at most 7 nonzeros
- In two dimensions, k = n^(1/2) and each row has at most 5 nonzeros

Stencil computations

- Data lives at the vertices of a regular mesh
- Each step, new values are computed from neighbors
- Examples (a toy sketch of the first one follows below):
    - Game of Life (9-point stencil)
    - Matvec in 2D model problem (5-point stencil)
    - Matvec in 3D model problem (7-point stencil)
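
A toy illustration of the 9-point-stencil example (my own sketch, not course code); np.roll gives periodic boundaries, which is one common convention for the Game of Life.

    import numpy as np

    def life_step(grid):
        """One Game of Life step: each cell looks at its 8 neighbors (9-point stencil)."""
        # Count live neighbors by summing the 8 shifted copies of the grid (periodic boundaries).
        neighbors = sum(
            np.roll(np.roll(grid, di, axis=0), dj, axis=1)
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di, dj) != (0, 0)
        )
        # A cell is alive next step if it has 3 neighbors, or 2 neighbors and is alive now.
        return (neighbors == 3) | ((neighbors == 2) & (grid == 1))

    glider = np.zeros((8, 8), dtype=int)
    glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
    print(life_step(glider).astype(int))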

Parallelism in Stencil Computations

- Parallelism is straightforward
    - Mesh is regular data structure
    - Even decomposition across processors gives load balance
- Locality is achieved by using large patches of the mesh
    - Boundary values from neighboring patches are needed

Where’s the data?

- n grid nodes, p processors; each processor has a patch of n/p nodes
- Two possible answers for the patch shape:
    - Patch = consecutive rows:  v = 2 * p * sqrt(n)
    - Patch = square block:      v = 4 * sqrt(p) * sqrt(n)
  (v = total number of boundary values communicated per step)
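
A quick arithmetic check of these two layouts (my own illustration; reading v as the total boundary values exchanged per step is an assumption based on the formulas above):

    import math

    # Compare the two patch shapes for a 1000-by-1000 mesh (n = 10^6) on p processors.
    n = 1_000_000
    for p in (4, 16, 64, 256):
        v_rows   = 2 * p * math.sqrt(n)              # consecutive rows: 2 boundary rows per patch
        v_blocks = 4 * math.sqrt(p) * math.sqrt(n)   # square blocks: 4 edges of length sqrt(n/p)
        print(f"p = {p:3d}   rows: v = {v_rows:10.0f}   blocks: v = {v_blocks:10.0f}")
    # Square blocks communicate a factor of sqrt(p)/2 less data as p grows.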

Ghost Nodes in Stencil Computations

- Size of ghost region (and redundant computation) depends on network/memory speed vs. computation
- Can be used on unstructured meshes
- [Figure: “To compute green: copy yellow, compute blue” — ghost-node exchange illustrated on a colored patch of the mesh.]