Communication-Avoiding Algorithms Jim Demmel EECS & Math Departments UC Berkeley.

Why avoid communication? (1/3)
Algorithms have two costs (measured in time or energy):
1. Arithmetic (FLOPs)
2. Communication: moving data between
– levels of a memory hierarchy (sequential case)
– processors over a network (parallel case)
[Figure: sequential case shows CPU – Cache – DRAM; parallel case shows several CPU+DRAM nodes connected by a network]

Why avoid communication? (2/3)
Running time of an algorithm is the sum of 3 terms:
– #flops * time_per_flop
– #words_moved / bandwidth
– #messages * latency        (the last two terms are communication)
Time_per_flop << 1/bandwidth << latency, and the gaps are growing exponentially with time [FOSC]
Avoid communication to save time
Goal: reorganize algorithms to avoid communication
– between all memory hierarchy levels: L1, L2, DRAM, network, etc.
– Very large speedups possible
– Energy savings too!
Annual improvements:
                     Bandwidth   Latency
    Network            26%         15%
    DRAM               23%          5%
    Time_per_flop:     59% per year
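
As a concrete rendering of this cost model, here is a minimal sketch; the machine parameters below are hypothetical placeholders, not measurements of any real system.

    def model_time(n_flops, n_words, n_messages,
                   time_per_flop=1e-10, bandwidth=1e10, latency=1e-6):
        # running time = flops term + bandwidth term + latency term (seconds)
        return (n_flops * time_per_flop
                + n_words / bandwidth
                + n_messages * latency)

    # e.g. 1e12 flops, 1e9 words moved, 1e4 messages
    print(model_time(1e12, 1e9, 1e4))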

Why Minimize Communication? (3/3)
[Figure omitted: energy cost of data movement vs. arithmetic – Source: John Shalf, LBL]
Minimize communication to save energy

Goals
Redesign algorithms to avoid communication
– between all memory hierarchy levels: L1, L2, DRAM, network, etc.
Attain lower bounds if possible
– Current algorithms often far from lower bounds
– Large speedups and energy savings possible

President Obama cites Communication-Avoiding Algorithms in the FY 2012 Department of Energy Budget Request to Congress:
"New Algorithm Improves Performance and Accuracy on Extreme-Scale Computing Systems. On modern computer architectures, communication between processors takes longer than the performance of a floating point arithmetic operation by a given processor. ASCR researchers have developed a new method, derived from commonly used linear algebra methods, to minimize communications between processors and the memory hierarchy, by reformulating the communication patterns specified within the algorithm. This method has been implemented in the TRILINOS framework, a highly-regarded suite of software, which provides functionality for researchers around the world to solve large scale, complex multi-physics problems."
FY 2010 Congressional Budget, Volume 4, FY2010 Accomplishments, Advanced Scientific Computing Research (ASCR), pages
The algorithms referred to:
– CA-GMRES (Hoemmen, Mohiyuddin, Yelick, JD)
– "Tall-Skinny" QR (Grigori, Hoemmen, Langou, JD)

Outline
Survey the state of the art of CA (Communication-Avoiding) algorithms
– CA O(n^3) 2.5D Matmul
– CA Strassen Matmul
Beyond linear algebra
– Extending lower bounds to any algorithm with arrays
– Communication-optimal N-body algorithm
CA-Krylov methods

Collaborators and Supporters
Michael Christ, Jack Dongarra, Ioana Dumitriu, Armando Fox, David Gleich, Laura Grigori, Ming Gu, Mike Heroux, Mark Hoemmen, Olga Holtz, Kurt Keutzer, Julien Langou, Tom Scanlon, Michelle Strout, Sam Williams, Hua Xiang, Kathy Yelick
Michael Anderson, Grey Ballard, Austin Benson, Abhinav Bhatele, Aydin Buluc, Erin Carson, Maryam Dehnavi, Michael Driscoll, Evangelos Georganas, Nicholas Knight, Penporn Koanantakool, Ben Lipshitz, Marghoob Mohiyuddin, Oded Schwartz, Edgar Solomonik
Other members of the ParLab, BEBOP, CACHE, EASI, FASTMath, MAGMA, PLASMA, and TOPS projects
Thanks to NSF, DOE, UC Discovery, Intel, Microsoft, Mathworks, National Instruments, NEC, Nokia, NVIDIA, Samsung, Oracle
bebop.cs.berkeley.edu

Outline
Survey the state of the art of CA (Communication-Avoiding) algorithms
– CA O(n^3) 2.5D Matmul
– CA Strassen Matmul
Beyond linear algebra
– Extending lower bounds to any algorithm with arrays
– Communication-optimal N-body algorithm
CA-Krylov methods

Summary of CA Linear Algebra
"Direct" Linear Algebra
– Lower bounds on communication for linear algebra problems like Ax=b, least squares, Ax = λx, SVD, etc.
– Not attained by algorithms in standard libraries
– New algorithms that attain these lower bounds
– Being added to libraries: Sca/LAPACK, PLASMA, MAGMA
– Large speed-ups possible
– Autotuning to find the optimal implementation
Ditto for "Iterative" Linear Algebra

Lower bound for all "n^3-like" linear algebra
Let M = "fast" memory size (per processor). Then
– #words_moved (per processor) = Ω(#flops (per processor) / M^(1/2))
– #messages_sent (per processor) = Ω(#flops (per processor) / M^(3/2)),
  which follows from #messages_sent ≥ #words_moved / largest_message_size
Parallel case: assume either load or memory balanced
Holds for
– Matmul, BLAS, LU, QR, eig, SVD, tensor contractions, …
– Some whole programs (sequences of these operations, no matter how individual ops are interleaved, e.g. A^k)
– Dense and sparse matrices (where #flops << n^3)
– Sequential and parallel algorithms
– Some graph-theoretic algorithms (e.g. Floyd-Warshall)
SIAM SIAG/Linear Algebra Prize, 2012 (Ballard, D., Holtz, Schwartz)
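
A tiny calculator for these bounds as a sketch (the constants hidden in the Ω are dropped; n, P, and the per-processor memory M below are hypothetical values, with dense matmul as the example):

    def comm_lower_bounds(flops_per_proc, M):
        # Ω(#flops / M^(1/2)) words and Ω(#flops / M^(3/2)) messages, per processor
        return flops_per_proc / M**0.5, flops_per_proc / M**1.5

    n, P = 8192, 64
    flops_per_proc = 2 * n**3 / P       # classical dense matmul
    M = 3 * n * n / P                   # just enough memory for A, B, C
    words, msgs = comm_lower_bounds(flops_per_proc, M)
    print(words, msgs)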

Can we attain these lower bounds?
Do conventional dense algorithms as implemented in LAPACK and ScaLAPACK attain these bounds?
– Often not
If not, are there other algorithms that do?
– Yes, for much of dense linear algebra
– New algorithms, with new numerical properties, new ways to encode answers, new data structures
– Not just loop transformations (need those too!)
Only a few sparse algorithms so far
Lots of work in progress

Summary of dense parallel algorithms attaining communication lower bounds
Assume n x n matrices on P processors, with minimum memory per processor M = O(n^2 / P)
Recall lower bounds:
– #words_moved = Ω((n^3 / P) / M^(1/2)) = Ω(n^2 / P^(1/2))
– #messages = Ω((n^3 / P) / M^(3/2)) = Ω(P^(1/2))
Does ScaLAPACK attain these bounds?
– For #words_moved: mostly, except the nonsymmetric eigenproblem
– For #messages: asymptotically worse, except Cholesky
New algorithms attain all bounds, up to polylog(P) factors
– Cholesky, LU, QR, symmetric and nonsymmetric eigenproblems, SVD
Can we do better?

Can we do better?
Aren't we already optimal?
Why assume M = O(n^2/P), i.e. minimal memory?
– Lower bound still true (and smaller) if M is larger
– Can we attain it?

Outline
Survey the state of the art of CA (Communication-Avoiding) algorithms
– CA O(n^3) 2.5D Matmul
– CA Strassen Matmul
Beyond linear algebra
– Extending lower bounds to any algorithm with arrays
– Communication-optimal N-body algorithm
CA-Krylov methods

SUMMA: n x n matmul on a P^(1/2) x P^(1/2) grid
(nearly) optimal using minimum memory M = O(n^2/P)

    For k = 0 to n/b-1      ... b = block size = #cols in A(i,k) = #rows in B(k,j)
      for all i = 1 to P^(1/2)    ... in parallel
        owner of A(i,k) broadcasts it to its whole processor row (using a binary tree)
      for all j = 1 to P^(1/2)    ... in parallel
        owner of B(k,j) broadcasts it to its whole processor column (using a binary tree)
      Receive A(i,k) into Acol
      Receive B(k,j) into Brow
      C_myproc = C_myproc + Acol * Brow

[Figure: C(i,j) += A(i,k) * B(k,j); Acol is the broadcast block column of A, Brow the broadcast block row of B]

Using more than the minimum memory
What if the matrix is small enough to fit c > 1 copies, so M = c*n^2/P?
– #words_moved = Ω(#flops / M^(1/2)) = Ω(n^2 / (c^(1/2) P^(1/2)))
– #messages = Ω(#flops / M^(3/2)) = Ω(P^(1/2) / c^(3/2))
Can we attain the new lower bound?
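
For concreteness, here is a minimal serial NumPy sketch of the SUMMA loop structure: the k-loop over block outer products that the processor rows and columns would broadcast. The block size b and the matrix size are arbitrary choices for illustration; a real SUMMA distributes the blocks over a P^(1/2) x P^(1/2) process grid with MPI broadcasts.

    import numpy as np

    def summa_like(A, B, b):
        # Serial simulation of SUMMA's outer-product structure: C accumulates
        # one block outer product A[:, k:k+b] @ B[k:k+b, :] per step; in the
        # parallel algorithm each step is one row broadcast of A-blocks and
        # one column broadcast of B-blocks.
        n = A.shape[0]
        C = np.zeros((n, n))
        for k in range(0, n, b):
            Acol = A[:, k:k+b]    # block column broadcast along processor rows
            Brow = B[k:k+b, :]    # block row broadcast along processor columns
            C += Acol @ Brow
        return C

    n, b = 256, 32
    A, B = np.random.randn(n, n), np.random.randn(n, n)
    assert np.allclose(summa_like(A, B, b), A @ B)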

2.5D Matrix Multiplication
Assume we can fit c*n^2/P data per processor, c > 1
Processors form a (P/c)^(1/2) x (P/c)^(1/2) x c grid
[Figure: a (P/c)^(1/2) x (P/c)^(1/2) face replicated c times; example: P = 32, c = 2]
Initially P(i,j,0) owns A(i,j) and B(i,j), each of size n(c/P)^(1/2) x n(c/P)^(1/2)
(1) P(i,j,0) broadcasts A(i,j) and B(i,j) to P(i,j,k)
(2) Processors at level k perform 1/c-th of SUMMA, i.e. 1/c-th of Σ_m A(i,m)*B(m,j)
(3) Sum-reduce the partial sums Σ_m A(i,m)*B(m,j) along the k-axis so that P(i,j,0) owns C(i,j)
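
A quick back-of-the-envelope check of the c-dependence, as a sketch with a made-up n; the formulas are just the per-processor lower bounds quoted above with constants dropped.

    from math import sqrt

    def per_proc_comm(n, P, c):
        M = c * n * n / P                  # per-processor memory
        words = n**3 / P / sqrt(M)         # = n^2 / (sqrt(c) * sqrt(P))
        messages = n**3 / P / M**1.5       # = sqrt(P) / c**1.5
        return words, messages

    n, P = 4096, 32
    for c in (1, 2, 4):
        print(c, per_proc_comm(n, P, c))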

2.5D Matmul on BG/P, 16K nodes / 64K cores
[Performance plots omitted; c = 16 copies]
Distinguished Paper Award, EuroPar'11 (Solomonik, D.)
SC'11 paper by Solomonik, Bhatele, D.

Perfect Strong Scaling – in Time and Energy
Every time you add a processor, you should use its memory M too
Start with the minimal number of procs: P*M = 3n^2
Increase P by a factor of c, so total memory increases by a factor of c
Notation for timing model:
– γ_T, β_T, α_T = secs per flop, per word_moved, per message of size m
– T(cP) = n^3/(cP) * [γ_T + β_T/M^(1/2) + α_T/(m*M^(1/2))] = T(P)/c
Notation for energy model:
– γ_E, β_E, α_E = joules for the same operations
– δ_E = joules per word of memory used per sec
– ε_E = joules per sec for leakage, etc.
– E(cP) = cP * { n^3/(cP) * [γ_E + β_E/M^(1/2) + α_E/(m*M^(1/2))] + δ_E*M*T(cP) + ε_E*T(cP) } = E(P)
Perfect scaling extends to n-body, Strassen, …
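
A small numeric sanity check of these two identities, as a sketch; all the γ/β/α/δ/ε constants and the problem size below are made-up placeholders, not measurements.

    def T(P, n, M, m, gT, bT, aT):
        return n**3 / P * (gT + bT / M**0.5 + aT / (m * M**0.5))

    def E(P, n, M, m, gE, bE, aE, dE, eE, gT, bT, aT):
        t = T(P, n, M, m, gT, bT, aT)
        return P * (n**3 / P * (gE + bE / M**0.5 + aE / (m * M**0.5))
                    + dE * M * t + eE * t)

    n, m = 4096, 1024
    gT, bT, aT = 1e-10, 1e-9, 1e-6                       # secs per flop / word / message
    gE, bE, aE, dE, eE = 1e-10, 1e-9, 1e-7, 1e-12, 1.0   # joules
    P0 = 48
    M = 3 * n**2 / P0                                    # so that P0*M = 3n^2
    for c in (1, 2, 4, 8):
        # T should drop by 1/c each time; E should stay constant
        print(c, T(c * P0, n, M, m, gT, bT, aT),
              E(c * P0, n, M, m, gE, bE, aE, dE, eE, gT, bT, aT))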

Ongoing Work
Lots more work on
– Algorithms: LDL^T, QR with pivoting, other pivoting schemes, eigenproblems, …; all-pairs shortest paths, …; both 2D (c=1) and 2.5D (c>1)
– Platforms: multicore, cluster, GPU, cloud, heterogeneous, low-energy, …
– Software: integration into Sca/LAPACK, PLASMA, MAGMA, …
Integration into applications (on IBM BG/Q)
– Qbox (with LLNL, IBM): molecular dynamics
– CTF (with ANL): symmetric tensor contractions

Outline
Survey the state of the art of CA (Communication-Avoiding) algorithms
– CA O(n^3) 2.5D Matmul
– CA Strassen Matmul
Beyond linear algebra
– Extending lower bounds to any algorithm with arrays
– Communication-optimal N-body algorithm
CA-Krylov methods

Communication Lower Bounds for Strassen-like matmul algorithms
– Classical O(n^3) matmul: #words_moved = Ω(M*(n/M^(1/2))^3 / P)
– Strassen's O(n^lg7) matmul: #words_moved = Ω(M*(n/M^(1/2))^lg7 / P)
– Strassen-like O(n^ω) matmul: #words_moved = Ω(M*(n/M^(1/2))^ω / P)
Proof: graph expansion (different from classical matmul)
– Strassen-like: DAG must be "regular" and connected
Extends up to M = n^2 / P^(2/ω)
Best Paper Prize (SPAA'11), Ballard, D., Holtz, Schwartz; to appear in JACM
Is the lower bound attainable?
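
A quick comparison of the three bounds for one hypothetical (n, P, M) choice; lg7 ≈ 2.807, and the ω = 2.5 below is just an illustrative value for a Strassen-like exponent, not a claim about any particular algorithm.

    from math import log2

    def words_lower_bound(n, P, M, omega):
        # Ω(M * (n / sqrt(M))**omega / P), constants dropped
        return M * (n / M**0.5)**omega / P

    n, P, M = 2**15, 1024, 2**22
    for name, omega in [("classical", 3.0), ("Strassen", log2(7)), ("Strassen-like", 2.5)]:
        print(name, words_lower_bound(n, P, M, omega))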

Strong Scaling of Strassen and other Matmuls
[Performance plot omitted: Franklin (Cray XT4), n = ]
Speedups: 24%-184% (over previous Strassen-based algorithms)
For details, see the SC12 talk on "Communication-Avoiding Parallel Strassen" by Grey Ballard et al., 11/15 at 4pm

Outline
Survey the state of the art of CA (Communication-Avoiding) algorithms
– CA O(n^3) 2.5D Matmul
– CA Strassen Matmul
Beyond linear algebra
– Extending lower bounds to any algorithm with arrays
– Communication-optimal N-body algorithm
CA-Krylov methods

Recall optimal sequential Matmul
Naive code:
    for i=1:n, for j=1:n, for k=1:n,
        C(i,j) += A(i,k)*B(k,j)
"Blocked" code:
    for i1 = 1:b:n, for j1 = 1:b:n, for k1 = 1:b:n
        for i2 = 0:b-1, for j2 = 0:b-1, for k2 = 0:b-1
            i = i1+i2, j = j1+j2, k = k1+k2
            C(i,j) += A(i,k)*B(k,j)        ... inner loops do a b x b matmul
Thm: Picking b = M^(1/2) attains the lower bound: #words_moved = Ω(n^3/M^(1/2))
Where does the exponent 1/2 come from?
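
A runnable NumPy rendering of the blocked loop nest; the choice b ≈ sqrt(M) mirrors the theorem, and M here is a hypothetical fast-memory size in words.

    import numpy as np

    def blocked_matmul(A, B, M=4096):
        n = A.shape[0]
        b = max(1, int(M ** 0.5))        # block size ~ sqrt(fast memory)
        C = np.zeros((n, n))
        for i1 in range(0, n, b):
            for j1 in range(0, n, b):
                for k1 in range(0, n, b):
                    # one b x b (or smaller, at the edges) block multiply
                    C[i1:i1+b, j1:j1+b] += A[i1:i1+b, k1:k1+b] @ B[k1:k1+b, j1:j1+b]
        return C

    n = 200
    A, B = np.random.randn(n, n), np.random.randn(n, n)
    assert np.allclose(blocked_matmul(A, B, M=1024), A @ B)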

New Thm applied to Matmul
    for i=1:n, for j=1:n, for k=1:n, C(i,j) += A(i,k)*B(k,j)
Record the array indices in a matrix Δ (one row per array access, one column per loop index):
                i  j  k
          A     1  0  1
    Δ =   B     0  1  1
          C     1  1  0
Solve the LP for x = [xi, xj, xk]^T: max 1^T x s.t. Δx ≤ 1
– Result: x = [1/2, 1/2, 1/2]^T, 1^T x = 3/2 = s
Thm: #words_moved = Ω(n^3/M^(s-1)) = Ω(n^3/M^(1/2))
Attained by block sizes M^xi, M^xj, M^xk = M^(1/2), M^(1/2), M^(1/2)
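
The LP itself is tiny; here is a sketch using scipy (linprog minimizes, so the objective is negated; Δ is the matmul matrix from the slide).

    import numpy as np
    from scipy.optimize import linprog

    delta = np.array([[1, 0, 1],    # A(i,k)
                      [0, 1, 1],    # B(k,j)
                      [1, 1, 0]])   # C(i,j)
    res = linprog(c=-np.ones(3), A_ub=delta, b_ub=np.ones(3),
                  bounds=[(0, None)] * 3)
    print(res.x, -res.fun)          # -> [0.5 0.5 0.5], s = 1.5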

New Thm applied to Direct N-Body
    for i=1:n, for j=1:n, F(i) += force( P(i), P(j) )
Record the array indices in matrix Δ:
                 i  j
          F      1  0
    Δ =   P(i)   1  0
          P(j)   0  1
Solve the LP for x = [xi, xj]^T: max 1^T x s.t. Δx ≤ 1
– Result: x = [1, 1], 1^T x = 2 = s
Thm: #words_moved = Ω(n^2/M^(s-1)) = Ω(n^2/M^1)
Attained by block sizes M^xi, M^xj = M^1, M^1

N-Body Speedups on IBM BG/P (Intrepid)
8K cores, 32K particles: 11.8x speedup
K. Yelick, E. Georganas, M. Driscoll, P. Koanantakool, E. Solomonik

New Thm applied to Random Code
    for i1=1:n, for i2=1:n, …, for i6=1:n
        A1(i1,i3,i6) += func1(A2(i1,i2,i4), A3(i2,i3,i5), A4(i3,i4,i6))
        A5(i2,i6)    += func2(A6(i1,i4,i5), A3(i3,i4,i6))
Record the array indices in matrix Δ (the second access to A3 shares a row with A4, since both use (i3,i4,i6)):
                  i1 i2 i3 i4 i5 i6
          A1       1  0  1  0  0  1
          A2       1  1  0  1  0  0
    Δ =   A3       0  1  1  0  1  0
          A3,A4    0  0  1  1  0  1
          A5       0  1  0  0  0  1
          A6       1  0  0  1  1  0
Solve the LP for x = [x1,…,x6]^T: max 1^T x s.t. Δx ≤ 1
– Result: x = [2/7, 3/7, 1/7, 2/7, 3/7, 4/7], 1^T x = 15/7 = s
Thm: #words_moved = Ω(n^6/M^(s-1)) = Ω(n^6/M^(8/7))
Attained by block sizes M^(2/7), M^(3/7), M^(1/7), M^(2/7), M^(3/7), M^(4/7)
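
The same LP machinery as before, applied to this Δ; a self-contained check whose optimal objective should come out as 15/7 ≈ 2.143.

    import numpy as np
    from scipy.optimize import linprog

    delta = np.array([[1, 0, 1, 0, 0, 1],   # A1(i1,i3,i6)
                      [1, 1, 0, 1, 0, 0],   # A2(i1,i2,i4)
                      [0, 1, 1, 0, 1, 0],   # A3(i2,i3,i5)
                      [0, 0, 1, 1, 0, 1],   # A3,A4(i3,i4,i6)
                      [0, 1, 0, 0, 0, 1],   # A5(i2,i6)
                      [1, 0, 0, 1, 1, 0]])  # A6(i1,i4,i5)
    res = linprog(c=-np.ones(6), A_ub=delta, b_ub=np.ones(6),
                  bounds=[(0, None)] * 6)
    print(res.x, -res.fun)   # optimal value 15/7; x = [2/7, 3/7, 1/7, 2/7, 3/7, 4/7]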

Summary and Ongoing Work
Communication lower bounds extend to any program accessing arrays with subscripts that are "linear functions" of loop indices
– Ex: A(i1+2*i2, i3-i2), B(pointer(i4+i5)), …
– Sparsity, indirect addressing, parallelism all included
When can we write down the LP for the lower bound explicitly?
– General case hard: Hilbert's 10th problem over the rationals
– Works for many special cases, e.g. subscripts that are subsets of the loop indices
– Can always approximate the lower bound
When can we attain the lower bound?
– Works for many special cases (if loop dependencies permit)
– Have yet to find a case where we cannot attain the lower bound – can we prove this?
Can extend "perfect scaling" results for time and energy
Incorporate into compilers

Outline
Survey the state of the art of CA (Communication-Avoiding) algorithms
– CA O(n^3) 2.5D Matmul
– CA Strassen Matmul
Beyond linear algebra
– Extending lower bounds to any algorithm with arrays
– Communication-optimal N-body algorithm
CA-Krylov methods

Avoiding Communication in Iterative Linear Algebra
k steps of an iterative solver for sparse Ax=b or Ax=λx
– Does k SpMVs with A and a starting vector
– Many such "Krylov Subspace Methods": Conjugate Gradients (CG), GMRES, Lanczos, Arnoldi, …
Goal: minimize communication
– Assume the matrix is "well-partitioned"
– Serial implementation: conventional O(k) moves of data from slow to fast memory; new O(1) moves of data – optimal
– Parallel implementation on p processors: conventional O(k log p) messages (k SpMV calls, dot products); new O(log p) messages – optimal
Lots of speedup possible (modeled and measured)
– Price: some redundant computation
– Challenges: poor partitioning, preconditioning, stability

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A^2x, …, A^kx]
Replace k iterations of y = A·x with [Ax, A^2x, …, A^kx]
– Example: A tridiagonal, n=32, k=3
– Works for any "well-partitioned" A
[Figure: the entries of x, A·x, A^2·x, A^3·x drawn as four rows of 32 points, with dependencies between levels]
Sequential algorithm: compute all k levels for one block of entries at a time (Steps 1-4 sweep across the diagram), so the matrix and vectors move between slow and fast memory only once.
Parallel algorithm: each processor works on an (overlapping) trapezoid of the dependency diagram (Proc 1-4), so each processor communicates only once with its neighbors instead of once per SpMV.

Same idea works for general sparse matrices
– Simple block-row partitioning → (hyper)graph partitioning
– Top-to-bottom processing → Traveling Salesman Problem
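
As a reference point, here is a minimal (non-communication-avoiding) implementation of what the kernel computes, just to pin down the output [Ax, A^2x, …, A^kx]; the tridiagonal A below mirrors the slide's example, and a real CA implementation would instead use the blocked/trapezoid schedule described above.

    import numpy as np
    import scipy.sparse as sp

    def matrix_powers(A, x, k):
        # Return the n x k matrix whose columns are A@x, A@A@x, ..., A^k @ x.
        V = np.empty((x.size, k))
        v = x
        for j in range(k):
            v = A @ v
            V[:, j] = v
        return V

    n, k = 32, 3
    A = sp.diags([np.ones(n - 1), 2 * np.ones(n), np.ones(n - 1)], [-1, 0, 1])
    x = np.random.randn(n)
    V = matrix_powers(A, x, k)
    print(V.shape)   # (32, 3)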

Minimizing Communication of GMRES to solve Ax=b
GMRES: find x in span{b, Ab, …, A^k b} minimizing ||Ax-b||_2

Standard GMRES:
    for i = 1 to k
        w = A · v(i-1)               ... SpMV
        MGS(w, v(0), …, v(i-1))      ... Modified Gram-Schmidt
        update v(i), H
    endfor
    solve LSQ problem with H

Communication-avoiding GMRES:
    W = [ v, Av, A^2v, …, A^kv ]     ... matrix powers kernel
    [Q,R] = TSQR(W)                  ... "Tall Skinny QR"
    build H from R
    solve LSQ problem with H

Oops – W from the power method, precision lost!
Sequential case: #words_moved decreases by a factor of k
Parallel case: #messages decreases by a factor of k
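
TSQR is the other communication-avoiding building block here. Below is a minimal one-level NumPy sketch of the idea (QR each row block, then QR the stacked R factors); a full implementation would use a deeper reduction tree, and this sketch assumes each block has at least as many rows as columns.

    import numpy as np

    def tsqr(W, nblocks=4):
        # One reduction level of Tall-Skinny QR:
        # 1) independent QR of each row block (done in parallel in the real algorithm)
        # 2) QR of the stacked small R factors
        # 3) multiply the local Q factors by the corresponding blocks of Q2
        blocks = np.array_split(W, nblocks, axis=0)
        Qs, Rs = zip(*(np.linalg.qr(b) for b in blocks))
        Q2, R = np.linalg.qr(np.vstack(Rs))
        ncols = W.shape[1]
        Q = np.vstack([Qs[i] @ Q2[i * ncols:(i + 1) * ncols, :] for i in range(nblocks)])
        return Q, R

    W = np.random.randn(1000, 8)
    Q, R = tsqr(W)
    assert np.allclose(Q @ R, W) and np.allclose(Q.T @ Q, np.eye(8))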

"Monomial" basis [Ax, …, A^kx] fails to converge
A different polynomial basis [p_1(A)x, …, p_k(A)x] does converge

Speedups of GMRES on an 8-core Intel Clovertown [MHDY09]
[Performance plots omitted]
Requires co-tuning kernels

With Residual Replacement (RR) a la Van der Vorst and Ye
                         Naive     Monomial                               Newton        Chebyshev
    Replacement Its.     74 (1)    [7, 15, 24, 31, …, 92, 97, 103] (17)   [67, 98] (2)  68 (1)

Summary of Iterative Linear Algebra
New lower bounds, optimal algorithms, big speedups in theory and practice
Lots of other progress, open problems
– Many different algorithms reorganized; more underway, more to be done
– Need to recognize stable variants more easily
– Preconditioning: Hierarchically Semiseparable Matrices
– Autotuning and synthesis: pOSKI for SpMV – available at bebop.cs.berkeley.edu
– Different kinds of "sparse matrices"

For more details
bebop.cs.berkeley.edu
CS267 – Berkeley's Parallel Computing Course
– Live broadcast in Spring
– All slides and video available from 2012
– Prerecorded version planned in Spring
– Free supercomputer accounts to do homework!
– Free autograding of homework!

Summary
Time to redesign all linear algebra, n-body, … algorithms and software (and compilers)
Don't Communic…