Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel www.cs.berkeley.edu/~demmel/cs267_Spr09

Berkeley Benchmarking and OPtimization (BeBOP) Prof. Katherine Yelick Current members Kaushik Datta, Ozan Demirlioglu, Mark Hoemmen, Shoaib Kamil, Rajesh Nishtala, Vasily Volkov, Sam Williams, … Previous members Hormozd Gahvari, Eun-Jin Im, Ankit Jain, Rich Vuduc, many undergrads Many results here are from current and previous students bebop.cs.berkeley.edu

Outline Motivation for Automatic Performance Tuning Results for sparse matrix kernels OSKI = Optimized Sparse Kernel Interface Tuning Higher Level Algorithms Future Work, Class Projects Future Related Lecture Sam Williams on tuning SpMV for multicore, other emerging architectures

Motivation for Automatic Performance Tuning Writing high performance software is hard Make programming easier while getting high speed Ideal: program in your favorite high level language (Matlab, Python, PETSc…) and get a high fraction of peak performance Reality: Best algorithm (and its implementation) can depend strongly on the problem, computer architecture, compiler,… Best choice can depend on knowing a lot of applied mathematics and computer science How much of this can we teach? How much of this can we automate?

Examples of Automatic Performance Tuning (1) Dense BLAS Sequential PHiPAC (UCB), then ATLAS (UTK) (used in Matlab) math-atlas.sourceforge.net/ Fast Fourier Transform (FFT) & variations Sequential and Parallel FFTW (MIT) www.fftw.org Digital Signal Processing SPIRAL: www.spiral.net (CMU) Communication Collectives (UCB, UTK) Rose (LLNL), Bernoulli (Cornell), Telescoping Languages (Rice), … More projects, conferences, government reports, …

Examples of Automatic Performance Tuning (2) What do dense BLAS, FFTs, signal processing, MPI reductions have in common? Can do the tuning off-line: once per architecture, algorithm Can take as much time as necessary (hours, a week…) At run-time, algorithm choice may depend only on a few parameters Matrix dimension, size of FFT, etc.

Tuning Register Tile Sizes (Dense Matrix Multiply) 333 MHz Sun Ultra 2i 2-D slice of 3-D space; implementations color-coded by performance in Mflop/s 16 registers, but 2-by-3 tile size fastest A 2-D slice of the 3-D core matmul space, with all other generator options fixed. This shows the performance (Mflop/s) of various core matmul implementations on a small in-L2 cache workload. The “best” is the dark red square at (m0=2,k0=1,n0=8), which achieved 620 Mflop/s. This experiment was performed on the Sun Ultra-10 workstation (333 MHz Ultra II processor, 2 MB L2 cache) using the Sun cc compiler v5.0 with the flags -dalign -xtarget=native -xO5 -xarch=v8plusa. The space is discrete and highly irregular. (This is not even the worst example!) Needle in a haystack

Example: Select a Matmul Implementation

Example: Support Vector Classification Paper: Statistical Models for Empirical Search-Based Performance Tuning (International Journal of High Performance Computing Applications, 18 (1), pp. 65-94, February 2004) Richard Vuduc, James W. Demmel, and Jeff A. Bilmes.

Examples of Automatic Performance Tuning (3) What do dense BLAS, FFTs, signal processing, MPI reductions have in common? Can do the tuning off-line: once per architecture, algorithm Can take as much time as necessary (hours, a week…) At run-time, algorithm choice may depend only on a few parameters Matrix dimension, size of FFT, etc. Can’t always do off-line tuning Algorithm and implementation may strongly depend on data only known at run-time Ex: Sparse matrix nonzero pattern determines both the best data structure and implementation of sparse-matrix-vector multiplication (SpMV) Part of the search for the best algorithm must then be done (very quickly!) at run-time

Source: Accelerator Cavity Design Problem (Ko via Husbands)

Linear Programming Matrix First 20,000 columns of 4k x 1M matrix, rail4284 …

A Sparse Matrix You Encounter Every Day You probably use a sparse matrix everyday, albeit indirectly. Here, I show a million x million submatrix of the web connectivity graph, constructed from an archive at the Stanford WebBase.

SpMV with Compressed Sparse Row (CSR) Storage A one-slide crash course on a basic sparse matrix data structure (compressed sparse row, or CSR, format) and a basic kernel: matrix-vector multiply. In CSR: store each non-zero value, but also need an extra index for each non-zero → less storage than dense, but a higher “rate” of storage per non-zero. Sparse-matrix-vector multiply (SpMV): no re-use of A, just of the vectors x and y, taken to be dense. SpMV with CSR storage: indirect/irregular accesses to x! So what you’d like to do is: Unroll the k loop → but you need to know how many non-zeros per row Hoist y[i] → not a problem, absent aliasing Eliminate ind[k] → it’s an extra load, but you need to know the non-zero pattern Reuse elements of x → but you need to know the non-zero pattern Matrix-vector multiply kernel: y(i) ← y(i) + A(i,j)*x(j) for each row i for k=ptr[i] to ptr[i+1]-1 do y[i] = y[i] + val[k]*x[ind[k]]
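
For reference, the kernel above as a small, self-contained C routine (a sketch; val/ind/ptr follow the slide's naming, with ptr[i+1] treated as exclusive):

#include <stddef.h>

/* y = y + A*x, with A in CSR format:
 *   ptr[i]..ptr[i+1]-1 index the nonzeros of row i,
 *   val[k] is the nonzero value, ind[k] its column. */
void spmv_csr(size_t nrows, const size_t *ptr, const size_t *ind,
              const double *val, const double *x, double *y)
{
    for (size_t i = 0; i < nrows; i++) {
        double yi = y[i];               /* hoist y[i] (no aliasing assumed) */
        for (size_t k = ptr[i]; k < ptr[i + 1]; k++)
            yi += val[k] * x[ind[k]];   /* indirect access to x */
        y[i] = yi;
    }
}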

Example: The Difficulty of Tuning nnz = 1.5 M kernel: SpMV Source: NASA structural analysis problem FEM discretization from a structural analysis problem. Source: Matrix “raefsky”, provided by NASA via the University of Florida Sparse Matrix Collection http://www.cise.ufl.edu/research/sparse/matrices/Simon/raefsky3.html

Example: The Difficulty of Tuning nnz = 1.5 M kernel: SpMV Source: NASA structural analysis problem FEM discretization from a structural analysis problem. Source: Matrix “raefsky”, provided by NASA via the University of Florida Sparse Matrix Collection http://www.cise.ufl.edu/research/sparse/matrices/Simon/raefsky3.html 8x8 dense substructure

Taking advantage of block structure in SpMV Bottleneck is time to get matrix from memory Only 2 flops for each nonzero in matrix Don’t store each nonzero with index, instead store each nonzero r-by-c block with index Storage drops by up to 2x, if rc >> 1, all 32-bit quantities Time to fetch matrix from memory decreases Change both data structure and algorithm Need to pick r and c Need to change algorithm accordingly In example, is r=c=8 best choice? Minimizes storage, so looks like a good idea…
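
To make the data-structure change concrete, here is a hedged sketch of the blocked kernel for the fixed case r = c = 2 (BCSR). The names brow_ptr, bind, and bval, for block-row pointers, block column indices, and dense 2x2 blocks, are illustrative, not a library API:

#include <stddef.h>

/* y = y + A*x with A in 2x2 block CSR (BCSR):
 * block row I covers scalar rows 2I and 2I+1;
 * bval stores each 2x2 block contiguously (row-major). */
void spmv_bcsr_2x2(size_t nbrows, const size_t *brow_ptr, const size_t *bind,
                   const double *bval, const double *x, double *y)
{
    for (size_t I = 0; I < nbrows; I++) {
        double y0 = y[2*I], y1 = y[2*I + 1];      /* two rows in registers */
        for (size_t k = brow_ptr[I]; k < brow_ptr[I + 1]; k++) {
            const double *b = &bval[4*k];
            double x0 = x[2*bind[k]], x1 = x[2*bind[k] + 1];
            y0 += b[0]*x0 + b[1]*x1;              /* one index per block;  */
            y1 += b[2]*x0 + b[3]*x1;              /* x0,x1 reused in registers */
        }
        y[2*I]     = y0;
        y[2*I + 1] = y1;
    }
}

Note the storage saving the slide describes: one column index per 2x2 block instead of one per nonzero.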

Speedups on Itanium 2: The Need for Search Best: 4x2 Experiment: Try all block sizes that divide 8x8 — 16 implementations in all. Platform: 900 MHz Itanium-2, 3.6 Gflop/s peak speed. Intel v7.0 compiler. Good speedups (4x) but at an unexpected block size (4x2). Figure taken from Im, Yelick, Vuduc, IJHPCA paper, to appear. (Figure: Mflop/s of each blocked implementation vs. the reference implementation.)

Register Profile: Itanium 2 (Figure: performance of all register block sizes, ranging from 190 Mflop/s to 1190 Mflop/s.)

SpMV Performance (Matrix #2): Generation 2
Ultra 2i (9% of peak): 63 Mflop/s best vs. 35 Mflop/s reference
Ultra 3 (5%): 109 Mflop/s best vs. 53 Mflop/s reference
Pentium III (19%): 96 Mflop/s best vs. 42 Mflop/s reference
Pentium III-M (15%): 120 Mflop/s best vs. 58 Mflop/s reference

Register Profiles: Sun and Intel x86 (Figure: performance over all register block sizes)
Ultra 2i (11% of peak): 35–72 Mflop/s
Ultra 3 (5%): 50–90 Mflop/s
Pentium III (21%): 42–108 Mflop/s
Pentium III-M (15%): 58–122 Mflop/s

SpMV Performance (Matrix #2): Generation 1
Power3 (13% of peak): 195 Mflop/s best vs. 100 Mflop/s reference
Power4 (14%): 703 Mflop/s best vs. 469 Mflop/s reference
Itanium 1 (7%): 225 Mflop/s best vs. 103 Mflop/s reference
Itanium 2 (31%): 1.1 Gflop/s best vs. 276 Mflop/s reference

Register Profiles: IBM and Intel IA-64 (Figure: performance over all register block sizes)
Power3 (17% of peak): 122–252 Mflop/s
Power4 (16%): 459–820 Mflop/s
Itanium 1 (8%): 107–247 Mflop/s
Itanium 2 (33%): 190 Mflop/s–1.2 Gflop/s

Another example of tuning challenges More complicated non-zero structure in general N = 16614 NNZ = 1.1M An enlarged submatrix for “ex11”, from an FEM fluid flow application. Original matrix: n = 16614, nnz = 1.1M Source: UF Sparse Matrix Collection http://www.cise.ufl.edu/~davis/sparse/FIDAP/ex11.rua.html

Zoom in to top corner More complicated non-zero structure in general NNZ = 1.1M An enlarged submatrix for “ex11”, from an FEM fluid flow application. Original matrix: n = 16614, nnz = 1.1M Source: UF Sparse Matrix Collection http://www.cise.ufl.edu/~davis/sparse/FIDAP/ex11.rua.html

3x3 blocks look natural, but… More complicated non-zero structure in general Example: 3x3 blocking Logical grid of 3x3 cells But would lead to lots of “fill-in” Suppose we know we want to use 3x3 blocking. SPARSITY lays a logical 3x3 grid on the matrix…

Extra Work Can Improve Efficiency! More complicated non-zero structure in general Example: 3x3 blocking Logical grid of 3x3 cells Fill in explicit zeros Unroll 3x3 block multiplies “Fill ratio” = 1.5 On Pentium III: 1.5x speedup! Actual Mflop rate is 1.5² = 2.25x higher The main point is that there can be a considerable pay-off for judicious choice of “fill” (r x c), but allowing for fill makes the implementation space even more complicated. For this matrix on a Pentium III, we observed a 1.5x speedup, even after filling in an additional 50% explicit zeros. Two effects: (1) Filling in zeros, but eliminating integer indices (overhead) (2) Quality of r x c code produced by the compiler may be much better for particular r’s and c’s. In this particular example, the overall data structure size stays the same, but the 3x3 code is 2x faster than the 1x1 code for a dense matrix stored in sparse format.

Automatic Register Block Size Selection Selecting the r x c block size Off-line benchmark Precompute Mflops(r,c) using dense A for each r x c Once per machine/architecture Run-time “search” Sample A to estimate Fill(r,c) for each r x c Run-time heuristic model Choose r, c to minimize time ~ Fill(r,c) / Mflops(r,c) Using this approach, we do choose the right block sizes for the two previous examples.
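
The heuristic is easy to state in code. A minimal sketch, assuming the off-line benchmark and the run-time sampler have already filled two small tables (RMAX, CMAX and all names here are hypothetical):

enum { RMAX = 8, CMAX = 8 };

/* Choose the register block size (r,c) minimizing estimated time,
 * time(r,c) ~ Fill(r,c) / Mflops(r,c).  mflops[][] comes from the
 * off-line dense benchmark; fill[][] from run-time sampling of A.
 * Both are RMAX x CMAX tables indexed by block size minus one. */
void choose_block_size(const double mflops[RMAX][CMAX],
                       const double fill[RMAX][CMAX],
                       int *r_best, int *c_best)
{
    double best = 1e300;
    for (int r = 1; r <= RMAX; r++)
        for (int c = 1; c <= CMAX; c++) {
            double t = fill[r-1][c-1] / mflops[r-1][c-1];
            if (t < best) { best = t; *r_best = r; *c_best = c; }
        }
}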

Accurate and Efficient Adaptive Fill Estimation Idea: Sample matrix Fraction of matrix to sample: s ∈ [0,1] Cost ~ O(s * nnz) Control cost by controlling s Search at run-time: the constant matters! Control s automatically by computing statistical confidence intervals Idea: Monitor variance Cost of tuning Lower bound: converting the matrix costs 5 to 40 unblocked SpMVs Heuristic: 1 to 11 SpMVs
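
A hedged sketch of one way such a sampler could look (estimate_fill and its arguments are illustrative, not OSKI's API): sample a fraction s of the block rows, count the distinct r-by-c blocks they touch, and scale:

#include <stdlib.h>

/* Estimate Fill(r,c) = (stored entries after r-by-c blocking) / nnz
 * by examining a fraction s of the block rows of a CSR matrix.
 * 'seen' marks which block columns a sampled block row touches. */
double estimate_fill(size_t nrows, size_t ncols,
                     const size_t *ptr, const size_t *ind,
                     int r, int c, double s)
{
    size_t nbrows = (nrows + r - 1) / r, nbcols = (ncols + c - 1) / c;
    char *seen = calloc(nbcols, 1);
    double blocks = 0.0, nnz_sampled = 0.0;

    for (size_t I = 0; I < nbrows; I++) {
        if ((double)rand() / RAND_MAX >= s) continue;  /* sample fraction s */
        size_t lo = I * (size_t)r;
        size_t hi = lo + r < nrows ? lo + r : nrows;
        for (size_t i = lo; i < hi; i++)
            for (size_t k = ptr[i]; k < ptr[i + 1]; k++) {
                size_t J = ind[k] / c;
                if (!seen[J]) { seen[J] = 1; blocks += 1.0; }
                nnz_sampled += 1.0;
            }
        for (size_t i = lo; i < hi; i++)               /* reset the marks */
            for (size_t k = ptr[i]; k < ptr[i + 1]; k++)
                seen[ind[k] / c] = 0;
    }
    free(seen);
    return nnz_sampled > 0 ? blocks * r * c / nnz_sampled : 1.0;
}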

Accuracy of the Tuning Heuristics (1/4) See p. 375 of Vuduc’s thesis for matrices NOTE: “Fair” flops used (ops on explicit zeros not counted as “work”)

Accuracy of the Tuning Heuristics (2/4)

Accuracy of the Tuning Heuristics (3/4) DGEMV

Upper Bounds on Performance for blocked SpMV P = (flops) / (time) Flops = 2 * nnz(A) Lower bound on time: Two main assumptions 1. Count memory ops only (streaming) 2. Count only compulsory, capacity misses: ignore conflicts Account for line sizes Account for matrix size and nnz Charge minimum access “latency” αi at the Li cache and αmem at memory e.g., Saavedra-Barrera and PMaC MAPS benchmarks
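
Written out, one way to state the bound under the slide's assumptions (Hi = hits charged at cache level Li, ML = misses at the last level L; a sketch of the model, not its exact published form):

\[
P_{\text{upper}} = \frac{2\,\mathrm{nnz}(A)}{T_{\text{lower}}},
\qquad
T_{\text{lower}} = \sum_{i=1}^{L} \alpha_i H_i + \alpha_{\mathrm{mem}} M_L
\]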

Example: L2 Misses on Itanium 2 Upper bound: all source vector accesses miss Lower bound: all source vector accesses hit after first Misses measured using PAPI [Browne ’00]

Example: Bounds on Itanium 2

Example: Bounds on Itanium 2

Example: Bounds on Itanium 2

Summary of Other Performance Optimizations Optimizations for SpMV Register blocking (RB): up to 4x over CSR Variable block splitting: 2.1x over CSR, 1.8x over RB Diagonals: 2x over CSR Reordering to create dense structure + splitting: 2x over CSR Symmetry: 2.8x over CSR, 2.6x over RB Cache blocking: 2.8x over CSR Multiple vectors (SpMM): 7x over CSR And combinations… Sparse triangular solve Hybrid sparse/dense data structure: 1.8x over CSR Higher-level kernels A·Aᵀ·x, Aᵀ·A·x: 4x over CSR, 1.8x over RB A²·x: 2x over CSR, 1.5x over RB [A·x, A²·x, A³·x, … , Aᵏ·x] Items in bold: see Vuduc’s 455-page dissertation Other items: see collaborations with BeBOPpers The 2.1x for variable block splitting is just for the FEM matrices with 2 block sizes

Example: Sparse Triangular Factor Raefsky4 (structural problem) + SuperLU + colmmd N=19779, nnz=12.6 M Dense trailing triangle: dim=2268, 20% of total nz Can be as high as 90+%! 1.8x over CSR

Cache Optimizations for A·Aᵀ·x Cache-level: Interleave multiplication by A and Aᵀ Only fetch A from memory once: for each row, a dot product followed by an “axpy” Register-level: take aᵢᵀ to be an r×c block row, or a diagonal row
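
For a row-stored (CSR) matrix the interleaving is easiest to see on the companion kernel Aᵀ·A·x = Σᵢ aᵢ(aᵢᵀ·x), where aᵢᵀ is row i; the A·Aᵀ·x case is the same identity applied to Aᵀ. A sketch (the caller zeroes y, of length ncols, first):

#include <stddef.h>

/* y = A^T * (A * x), fetching each row of A only once:
 * a dot product, then an "axpy" while the row is still in cache. */
void ata_x_csr(size_t nrows, const size_t *ptr, const size_t *ind,
               const double *val, const double *x, double *y)
{
    for (size_t i = 0; i < nrows; i++) {
        double t = 0.0;
        for (size_t k = ptr[i]; k < ptr[i + 1]; k++)   /* dot product t = a_i^T x */
            t += val[k] * x[ind[k]];
        for (size_t k = ptr[i]; k < ptr[i + 1]; k++)   /* axpy y += t * a_i */
            y[ind[k]] += val[k] * t;
    }
}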

Example: Combining Optimizations (1/2) Register blocking, symmetry, multiple (k) vectors Three low-level tuning parameters: r, c, v (Diagram: Y += A·X, with r×c register blocks of A and v of the k vectors processed at a time.)

Example: Combining Optimizations (2/2) Register blocking, symmetry, and multiple vectors [Ben Lee @ UCB] Symmetric, blocked, 1 vector Up to 2.6x over nonsymmetric, blocked, 1 vector Symmetric, blocked, k vectors Up to 2.1x over nonsymmetric, blocked, k vectors Up to 7.3x over nonsymmetric, nonblocked, 1 vector Symmetric Storage: up to 64.7% savings

Potential Impact on Applications: Omega3P Application: accelerator cavity design [Ko] Relevant optimization techniques Symmetric storage Register blocking Reordering, to create more dense blocks Reverse Cuthill-McKee (RCM) ordering to reduce bandwidth: do Breadth-First Search, number nodes in reverse of the order visited (a BFS-based sketch follows below) Traveling Salesman Problem-based ordering to create blocks Nodes = columns of A Weights(u, v) = no. of nonzeros u, v have in common Tour = ordering of columns Choose maximum-weight tour See [Pinar & Heath ’97] 2.1x speedup on Power 4 (but SpMV not dominant in the application)
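
The RCM sketch promised above, assuming the symmetric graph of A is given in adjacency form (ptr/adj, like CSR) and has a single connected component; start-node selection and degree-sorted tie-breaking, which real implementations add, are omitted:

#include <stdlib.h>

/* Reverse Cuthill-McKee as described on the slide: breadth-first
 * search, then number nodes in reverse of the order visited.
 * perm[new] = old node number. */
void rcm(size_t n, const size_t *ptr, const size_t *adj,
         size_t start, size_t *perm)
{
    char   *visited = calloc(n, 1);
    size_t *order   = malloc(n * sizeof *order);
    size_t head = 0, tail = 0;

    order[tail++] = start;                /* plain BFS */
    visited[start] = 1;
    while (head < tail) {
        size_t u = order[head++];
        for (size_t k = ptr[u]; k < ptr[u + 1]; k++)
            if (!visited[adj[k]]) {
                visited[adj[k]] = 1;
                order[tail++] = adj[k];
            }
    }
    for (size_t i = 0; i < tail; i++)     /* reverse the BFS numbering */
        perm[i] = order[tail - 1 - i];

    free(order);
    free(visited);
}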

Source: Accelerator Cavity Design Problem (Ko via Husbands)

Post-RCM Reordering

100x100 Submatrix Along Diagonal

“Microscopic” Effect of RCM Reordering Before: Green + Red After: Green + Blue

“Microscopic” Effect of Combined RCM+TSP Reordering Before: Green + Red After: Green + Blue

(Omega3P)

Optimized Sparse Kernel Interface - OSKI Provides sparse kernels automatically tuned for the user’s matrix & machine BLAS-style functionality: SpMV, Ax & Aᵀy, TrSV Hides complexity of run-time tuning Includes new, faster locality-aware kernels: AᵀA·x, Aᵏ·x Faster than standard implementations Up to 4x faster matvec, 1.8x trisolve, 4x AᵀA·x For “advanced” users & solver library writers Available as stand-alone library (OSKI 1.0.1h, 6/07) Available as PETSc extension (OSKI-PETSc .1d, 3/06) bebop.cs.berkeley.edu/oski The library interface defines low-level primitives in the style of the Sparse BLAS: sparse matrix-vector multiply, sparse triangular solve. Matrix-vector multiply touches each matrix element only once, whereas our locality-aware kernels can reuse these elements. The BeBOP library includes these kernels: (1) simultaneous computation of A·x and Aᵀ·z (2) Aᵀ·A·x (3) Aᵏ·x, k a non-negative integer Unlike tuning in the dense case, sparse tuning must occur at run-time since the matrix is unknown until then. Here, the “standard implementation” stores the matrix in compressed sparse row (CSR) format with the kernel coded in C or Fortran compiled with full optimizations. To maximize the impact of our software, we are implementing a new “automatically tuned” matrix type in PETSc. Most PETSc users will be able to use our library with little or no modifications to their source code. The stand-alone version of our library is C- and Fortran-callable. “Advanced users” means users willing to program at the level of the BLAS.

How the OSKI Tunes (Overview) Library Install-Time (offline): 1. Build for Target Arch. 2. Benchmark Application Run-Time: 1. Evaluate Models 2. Select Data Struct. & Code Inputs: workload from program monitoring, history, the user’s matrix, generated code variants, benchmark data, heuristic models; output to user: a matrix handle for kernel calls Diagram key: Ovals = actions taken by the library Cylinders = data stored by or with the library Solid arrows = control flow Dashed arrows = “data” flow :: At library-installation time :: 1. “Build”: Pre-compile source code. Possible code variants are stored in dynamic libraries (“Code Variants”). 2. “Benchmark”: The installation process measures the speed of the possible code variants and stores the results (“Benchmark Data”). The entire build process uses standard & portable GNU configure. :: At run-time, from within the user’s application :: 1. “Matrix from user”: The user passes her pre-assembled matrix in a standard format like compressed sparse row (CSR) or column (CSC). 2. “Evaluate models”: The library contains a list of “heuristic models.” Each model is a procedure that analyzes the matrix, workload, and benchmarking data and chooses the data structure & code it thinks best for that matrix and workload. A model is typically specialized to predict tuning parameters for a particular kernel & class of data structures (e.g., predict the block size for register-blocked matvec). However, higher-level models (meta-models) that combine several heuristics or predict over several possible data structures and kernels are also possible. In the initial implementation, “Evaluate Models” does the following: based on the workload, decide on an allowable amount of time for tuning (a “tuning budget”); while there is time left for tuning, select and evaluate a model to get the best predicted performance & corresponding tuning parameters. Extensibility: Advanced users may write & dynamically add “Code variants” and “Heuristic models” to the system.

How the OSKI Tunes (Overview) At library build/install-time Pre-generate and compile code variants into dynamic libraries Collect benchmark data Measures and records speed of possible sparse data structure and code variants on target architecture Installation process uses standard, portable GNU AutoTools At run-time Library “tunes” using heuristic models Models analyze user’s matrix & benchmark data to choose optimized data structure and code Non-trivial tuning cost: up to ~40 mat-vecs Library limits the time it spends tuning based on estimated workload provided by user or inferred by library User may reduce cost by saving tuning results for application on future runs with same or similar matrix The library build & installation process automatically benchmarks the target machine to determine how fast different kinds of data structures and kernel implementations can potentially run. These variants are pre-determined and generated using specialized code generators. Run-time tuning has a non-trivial cost, and the amount of time the library allows itself for tuning depends on the expected workload (e.g., how frequently matrix-vector multiply will be called). This workload may be specified by the user, or may be inferred by the library which transparently monitors how often different kernels are called (self-profiling).

Optimizations in OSKI, so far Fully automatic heuristics for Sparse matrix-vector multiply Register-level blocking Register-level blocking + symmetry + multiple vectors Cache-level blocking Sparse triangular solve with register-level blocking and “switch-to-dense” optimization Sparse AᵀA·x with register-level blocking User may select other optimizations manually Diagonal storage optimizations, reordering, splitting; tiled matrix powers kernel (Aᵏ·x) All available in dynamic libraries Accessible via high-level embedded script language “Plug-in” extensibility Very advanced users may write their own heuristics, create new data structures/code variants and dynamically add them to the system The automatic heuristics have been discussed in published papers available on the BeBOP home page. The BeBOP group has studied a number of other optimizations. Code for these optimizations will be compiled and available to the user. (See “BeBOP-Lua” in our design document.)

How to Call OSKI: Basic Usage May gradually migrate existing apps Step 1: “Wrap” existing data structures Step 2: Make BLAS-like kernel calls int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */ double* x = …, *y = …; /* Let x and y be two dense vectors */ /* Compute y = β·y + α·A·x, 500 times */ for( i = 0; i < 500; i++ ) my_matmult( ptr, ind, val, α, x, β, y ); OSKI is designed to allow users to migrate their applications gradually to use OSKI. The code fragment shown above represents a sample C application which already has a CSR matrix implementation (ptr, ind, val) and dense vectors (x, y), initialized in some way. The application computes matrix-vector multiply 500 times in a loop. Indeed, OSKI does not provide a way to construct/assemble a matrix, so a user always has to create her matrix in one of OSKI’s supported base formats (CSR, CSC) first.

How to Call OSKI: Basic Usage May gradually migrate existing apps Step 1: “Wrap” existing data structures Step 2: Make BLAS-like kernel calls int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */ double* x = …, *y = …; /* Let x and y be two dense vectors */ /* Step 1: Create OSKI wrappers around this data */ oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows, num_cols, SHARE_INPUTMAT, …); oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE); oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE); /* Compute y = β·y + α·A·x, 500 times */ for( i = 0; i < 500; i++ ) my_matmult( ptr, ind, val, α, x, β, y ); To use OSKI, the user should first provide OSKI with the semantics of her existing data structures, here shown by three calls for the three data structures (the matrix + the two vectors). OSKI returns handles to these objects, and in this example, will “share” these objects with the user. The key point here is that if the user only introduced this change, the application would execute just as it did before: OSKI does not modify the user’s data structures unless the user asks it to via OSKI’s “set non-zero value” routines. The call to “oski_CreateMatCSR” defines the semantics of this matrix, e.g., its logical dimensions. The user specifies other semantic information (e.g., 0-based indices vs. 1-based indices, symmetry with half or full storage, Hermitian-ness, lower/upper triangular shape, implicit unit diagonal) where the “...” are shown. The calls to “oski_CreateVecView” create a wrapper around the user’s data structure. Vectors may have non-unit strides, for example. Moreover, there is an “oski_CreateMultiVecView” operation which will wrap a dense matrix data structure. In that case, the user can specify whether she has used row vs. column major storage and what the stride is. Recursive dense data structures are not supported at this time.

How to Call OSKI: Basic Usage May gradually migrate existing apps Step 1: “Wrap” existing data structures Step 2: Make BLAS-like kernel calls int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */ double* x = …, *y = …; /* Let x and y be two dense vectors */ /* Step 1: Create OSKI wrappers around this data */ oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows, num_cols, SHARE_INPUTMAT, …); oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE); oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE); /* Compute y = β·y + α·A·x, 500 times */ for( i = 0; i < 500; i++ ) oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view); /* Step 2 */ Of course, if you’re using OSKI at all, you probably want to call OSKI kernels. Here, the user has replaced her matrix-vector multiply routine with the corresponding OSKI call. Notice that OSKI kernel function signatures mimic the BLAS. Recall that the user must explicitly ask OSKI to tune, so in this example, no tuning occurs. OSKI has the semantics of the user’s matrix data structure (ptr, ind, val) and vector data structures (x, y), and so can perform the SpMV accordingly. Question: Do users want other base formats besides CSR, CSC? E.g., JAD for legacy apps written for vector architectures?

How to Call OSKI: Tune with Explicit Hints User calls “tune” routine May provide explicit tuning hints (OPTIONAL) oski_matrix_t A_tunable = oski_CreateMatCSR( … ); /* … */ /* Tell OSKI we will call SpMV 500 times (workload hint) */ oski_SetHintMatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view, 500); /* Tell OSKI we think the matrix has 8x8 blocks (structural hint) */ oski_SetHint(A_tunable, HINT_SINGLE_BLOCKSIZE, 8, 8); oski_TuneMat(A_tunable); /* Ask OSKI to tune */ for( i = 0; i < 500; i++ ) oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view); This slide shows one style of tuning in which the user provides explicit hints. These hints are optional. The call to “oski_TuneMat” marks the point in program execution at which the application might incur the ~40-SpMV tuning cost. You can substitute symbolic vectors (defined by library constants) instead of actual vectors (x_view, y_view).

How the User Calls OSKI: Implicit Tuning Ask library to infer workload Library profiles all kernel calls May periodically re-tune What if you don’t have any hints, or don’t want to provide any? It’s OK to call “tune” repeatedly, as done in this example. On each call to “tune”, OSKI checks whether there is heavy use of particular kernels, in which case it might decide to tune. In cases when it does not, the call is essentially a no-op. This mode also provides a way to re-tune periodically, though our current implementation does not do that. oski_matrix_t A_tunable = oski_CreateMatCSR( … ); /* … */ for( i = 0; i < 500; i++ ) { oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view); oski_TuneMat(A_tunable); /* Ask OSKI to tune */ }

Quick-and-dirty Parallelism: OSKI-PETSc Extend PETSc’s distributed memory SpMV (MATMPIAIJ) PETSc: Each process stores its diag (all-local) and off-diag submatrices OSKI-PETSc: Add OSKI wrappers Each submatrix tuned independently (Diagram: row-block partition of the matrix across processes p0–p3.)

OSKI-PETSc Proof-of-Concept Results Matrix 1: Accelerator cavity design (R. Lee @ SLAC) N ~ 1 M, ~40 M non-zeros 2x2 dense block substructure Symmetric Matrix 2: Linear programming (Italian Railways) Short-and-fat: 4k x 1M, ~11M non-zeros Highly unstructured Big speedup from cache-blocking: no native PETSc format Evaluation machine: Xeon cluster Peak: 4.8 Gflop/s per node Proof-of-concept experiment to show OSKI-PETSc implementation “works” (I.e., doesn’t change scaling behavior you’d expect from your current PETSc code)

Accelerator Cavity Matrix Roughly 90% of non-zeros in the blocks along the diagonal

OSKI-PETSc Performance: Accel. Cavity PETSc @ p=1: 234 Mflop/s OSKI-PETSc @ p=1: 315 Mflop/s OSKI-PETSc (symm) @ p=1: 480 Mflop/s OSKI-PETSc speedup @ p=8: 6.2

Linear Programming Matrix First 20,000 columns… …

OSKI-PETSc Performance: LP Matrix Note the single-process improvement from cache blocking: 105 Mflop/s vs. 260 Mflop/s (2.48x speedup) Max perf of any implementation occurs @ p=8: 315 Mflop/s Poor scaling due to significant off-process communication required.

Tuning Higher Level Algorithms than SpMV We almost always do many SpMVs, not just one “Krylov Subspace Methods” (KSMs) for Ax=b, Ax = λx Conjugate Gradients, GMRES, Lanczos, … Do a sequence of k SpMVs to get vectors [x₁, … , xₖ] Find the best solution x as a linear combination of [x₁, … , xₖ] Main cost is the k SpMVs Since communication usually dominates, can we do better? Goal: make communication cost independent of k Parallel case: O(log P) messages, not O(k log P) (optimal); same bandwidth as before Sequential case: O(1) messages and bandwidth, not O(k) (optimal) Achievable when the matrix is partitionable with low surface-to-volume ratio

Locally Dependent Entries for [x, Ax], A tridiagonal, 2 processors (Figure: the vectors x, Ax, A²x, …, A⁸x split between Proc 1 and Proc 2; shaded entries can be computed without communication.)

Locally Dependent Entries for [x, Ax, A²x], A tridiagonal, 2 processors (Figure as before; shaded entries can be computed without communication.)

Locally Dependent Entries for [x, Ax, …, A³x], A tridiagonal, 2 processors (Figure as before; shaded entries can be computed without communication.)

Locally Dependent Entries for [x, Ax, …, A⁴x], A tridiagonal, 2 processors (Figure as before; shaded entries can be computed without communication.)

Locally Dependent Entries for [x, Ax, …, A⁸x], A tridiagonal, 2 processors (Figure as before; shaded entries can be computed without communication.) k=8-fold reuse of A

Remotely Dependent Entries for [x, Ax, …, A⁸x], A tridiagonal, 2 processors (Figure as before.) One message to get the data needed to compute the remotely dependent entries, not k=8 Minimizes number of messages = latency cost Price: redundant work, proportional to the “surface/volume ratio”
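
A hedged MPI sketch of this idea for a constant-coefficient tridiagonal A = tridiag(a,b,c): exchange k ghost values per side once, then take k steps on a window that shrinks by one point per side per step (the redundant "surface" work). powers_tridiag and its arguments are illustrative; it assumes k <= m and is written for interior processes of a block-row partition (the global end rows need separate treatment):

#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* Each process owns m entries of x and computes its m entries of
 * A^j x, j = 1..k, into V + (j-1)*m, after a SINGLE exchange of
 * k ghost values per side -- instead of one exchange per SpMV. */
void powers_tridiag(double a, double b, double c, const double *x,
                    size_t m, int k, double *V, MPI_Comm comm)
{
    int rank, np;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &np);
    int left  = rank > 0      ? rank - 1 : MPI_PROC_NULL;
    int right = rank < np - 1 ? rank + 1 : MPI_PROC_NULL;

    size_t w = m + 2 * (size_t)k;            /* extended working window */
    double *cur = calloc(w, sizeof *cur);    /* zero ghosts by default  */
    double *nxt = malloc(w * sizeof *nxt);
    memcpy(cur + k, x, m * sizeof *x);

    /* the one and only neighbor exchange: k ghosts on each side */
    MPI_Sendrecv(cur + k, k, MPI_DOUBLE, left, 0,
                 cur + k + m, k, MPI_DOUBLE, right, 0, comm, MPI_STATUS_IGNORE);
    MPI_Sendrecv(cur + m, k, MPI_DOUBLE, right, 1,
                 cur, k, MPI_DOUBLE, left, 1, comm, MPI_STATUS_IGNORE);

    for (int j = 1; j <= k; j++) {
        /* valid region shrinks by one point per side per step;
         * entries recomputed near the edges are the redundant work */
        for (size_t i = (size_t)j; i + (size_t)j < w; i++)
            nxt[i] = a * cur[i - 1] + b * cur[i] + c * cur[i + 1];
        memcpy(V + (size_t)(j - 1) * m, nxt + k, m * sizeof *nxt);
        double *t = cur; cur = nxt; nxt = t;
    }
    free(cur);
    free(nxt);
}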

Fewer Remotely Dependent Entries for [x, Ax, …, A⁸x], A tridiagonal, 2 processors (Figure as before.) Reduce redundant work by half

Remotely Dependent Entries for [x, Ax, A²x, A³x], 2D Laplacian

Remotely Dependent Entries for [x, Ax, A²x, A³x], A irregular, multiple processors Red needed for k=1 Green also needed for k=2 Blue also needed for k=3

Sequential [x, Ax, …, A⁴x], with memory hierarchy One read of the matrix from slow memory, not k=4 Minimizes words moved = bandwidth cost No redundant work

Performance Results Measured Multicore (Clovertown) speedups up to 6.4x Measured/Modeled sequential OOC speedup up to 3x Modeled parallel Petascale speedup up to 6.9x Modeled parallel Grid speedup up to 22x Sequential speedup due to bandwidth, works for many problem sizes Parallel speedup due to latency, works for smaller problems on many processors

Speedups on Intel Clovertown (8 core)

Optimizing Communication Complexity of Sparse Solvers Need to modify high-level algorithms to use the new kernel Example: GMRES for Ax=b where A = 2D Laplacian x lives on an n-by-n mesh Partitioned on a √p-by-√p processor grid A has a “5 point stencil” (Laplacian) (Ax)(i,j) = linear_combination(x(i,j), x(i,j±1), x(i±1,j)) Ex: 18-by-18 mesh on 3-by-3 processor grid

Minimizing Communication What is the cost = (#flops, #words, #messages) of s steps of standard GMRES? GMRES, ver. 1: for i = 1 to s: w = A * v(i-1); MGS(w, v(0), …, v(i-1)); update v(i), H; endfor; solve LSQ problem with H Cost(A * v) = s * ( 9n²/p , 4n/√p , 4 ) Cost(MGS = Modified Gram-Schmidt) = s²/2 * ( 4n²/p , log p , log p ) Total cost ~ Cost(A * v) + Cost(MGS) Can we reduce the latency?

Minimizing Communication Cost(GMRES, ver. 1) = Cost(A*v) + Cost(MGS) = ( 9sn²/p , 4sn/√p , 4s ) + ( 2s²n²/p , s² log p / 2 , s² log p / 2 ) How much latency cost from A*v can you avoid? Almost all GMRES, ver. 2: W = [ v, Av, A²v, … , Aˢv ] [Q,R] = MGS(W) Build H from R, solve LSQ problem Cost(W) = ( ~same, ~same, 8 ) Latency cost independent of s (optimal) Cost(MGS) unchanged Can we reduce the latency more?

Minimizing Communication Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS) = ( 9sn²/p , 4sn/√p , 8 ) + ( 2s²n²/p , s² log p / 2 , s² log p / 2 ) How much latency cost from MGS can you avoid? Almost all GMRES, ver. 3: W = [ v, Av, A²v, … , Aˢv ] [Q,R] = TSQR(W) … “Tall Skinny QR” (see Lecture 11) Build H from R, solve LSQ problem (Diagram: TSQR reduction tree: W = [W₁; W₂; W₃; W₄] → R₁, R₂, R₃, R₄ → R₁₂, R₃₄ → R₁₂₃₄.) Cost(TSQR) = ( ~same, ~same, log p ) Latency cost independent of s (optimal) Oops: W is from the power method, so precision is lost!

Minimizing Communication Cost(GMRES, ver. 3) = Cost(W) + Cost(TSQR) = ( 9sn²/p , 4sn/√p , 8 ) + ( 2s²n²/p , s² log p / 2 , log p ) Latency cost independent of s, just log p (optimal) Oops: W is from the power method, so precision is lost. What to do? Use a different polynomial basis: not the monomial basis W = [v, Av, A²v, …], but instead… Newton basis W_N = [v, (A – θ₁I)v, (A – θ₂I)(A – θ₁I)v, …] or Chebyshev basis W_C = [v, T₁(v), T₂(v), …]

Extensions Other Krylov methods: Arnoldi, CG, Lanczos, … Preconditioning: Solve MAx=Mb where the preconditioning matrix M is chosen to make the system “easier” M approximates A⁻¹ somehow, but cheaply, to accelerate convergence Cheap as long as contributions from “distant” parts of the system can be compressed Sparsity Low rank

Design Space for [x, Ax, …, Aᵏx] (1/3) Mathematical operation Keep all vectors: Krylov Subspace Methods Improving conditioning of basis: W = [x, p₁(A)x, p₂(A)x, …, pₖ(A)x] pᵢ(A) = degree-i polynomial chosen to reduce cond(W) Preconditioning (Ay=b → MAy=Mb): [x, Ax, MAx, AMAx, MAMAx, …, (MA)ᵏx] Keep last vector Aᵏx only: Jacobi, Gauss-Seidel

Design Space for [x, Ax, …, Aᵏx] (2/3) Representation of sparse A Zero pattern may be explicit or implicit Nonzero entries may be explicit or implicit Implicit → save memory, communication
Explicit pattern, explicit nonzeros: general sparse matrix
Implicit pattern, explicit nonzeros: image segmentation
Explicit pattern, implicit nonzeros: Laplacian(graph), for graph partitioning
Implicit pattern, implicit nonzeros: “stencil matrix”, e.g., tridiag(-1,2,-1)
Representation of dense preconditioners M Low-rank off-diagonal blocks (semiseparable)

Design Space for [x, Ax, …, Aᵏx] (3/3) Parallel implementation From simple indexing, with redundant flops proportional to the surface/volume ratio To complicated indexing, with fewer redundant flops Sequential implementation Depends on whether vectors fit in fast memory Reordering rows, columns of A Important in both parallel and sequential cases Can be reduced to a pair of Traveling Salesman Problems Plus all the optimizations for one SpMV!

Summary Communication-Avoiding Linear Algebra (CALA) Lots of related work Some going back to 1960’s Reports discuss this comprehensively, not here Our contributions Several new algorithms, improvements on old ones Unifying parallel and sequential approaches to avoiding communication Time for these algorithms has come, because of growing communication costs Why avoid communication just for linear algebra motifs?

Possible Class Projects Come to BEBOP meetings (T 9 – 10:30, 606 Soda) Incorporate multicore optimizations into OSKI Experiment with SpMV on GPU Which optimizations are most effective? Try to speed up particular matrices of interest Data mining Experiment with new [x,Ax,…,Akx] kernel GPU, multicore, distributed memory On matrices of interest Experiment with new solvers using this kernel

Extra Slides

Tuning Higher Level Algorithms So far we have tuned a single sparse matrix kernel, y = Aᵀ·A·x, motivated by a higher-level algorithm (SVD) What can we do by extending tuning to a higher level? Consider Krylov subspace methods for Ax=b, Ax = λx Conjugate Gradients (CG), GMRES, Lanczos, … Inner loop does y=A*x, dot products, saxpys, scalar ops Inner loop costs at least O(1) messages k iterations cost at least O(k) messages Our goal: show how to do k iterations with O(1) messages Possible payoff: make Krylov subspace methods much faster on machines with slow networks Memory bandwidth improvements too (not discussed) Obstacles: numerical stability, preconditioning, …

Krylov Subspace Methods for Solving Ax=b Compute a basis for a subspace V by doing y = A*x k times Find the “best” solution in that Krylov subspace V Given starting vector x₁, V is spanned by x₂ = A*x₁, x₃ = A*x₂, …, xₖ = A*xₖ₋₁ GMRES finds an orthogonal basis of V by Gram-Schmidt, so it actually does a different set of SpMVs than in the last bullet

Example: Standard GMRES r = b - A*x₁, β = length(r), v₁ = r / β … length(r) = sqrt(Σᵢ rᵢ²) for m = 1 to k do w = A*vₘ … at least O(1) messages for i = 1 to m do … Gram-Schmidt hᵢₘ = dotproduct(vᵢ, w) … at least O(1) messages, or log(p) w = w – hᵢₘ * vᵢ end for hₘ₊₁,ₘ = length(w) … at least O(1) messages, or log(p) vₘ₊₁ = w / hₘ₊₁,ₘ find y minimizing length( Hₖ*y – βe₁ ) … small, local problem new x = x₁ + Vₖ*y … Vₖ = [v₁, v₂, … , vₖ] Dot products and lengths actually need O(log p) messages, so this is O(k² log p) messages altogether; CG would be O(k log p)

Example: Computing [Ax, A²x, A³x, …, Aᵏx] for A tridiagonal Different basis for the same Krylov subspace What can Proc 1 compute without communication? (Figure: x(1:30), (Ax)(1:30), …, (A⁸x)(1:30) split between Proc 1 and Proc 2.)

Example: Computing [Ax, A²x, A³x, …, Aᵏx] for A tridiagonal Computing the missing entries with 1 communication, redundant work (Figure as above.)

Example: Computing [Ax, A²x, A³x, …, Aᵏx] for A tridiagonal Saving half the redundant work (Figure as above.)

Example: Computing [Ax, A²x, A³x, …, Aᵏx] for a Laplacian A = 5-point Laplacian in 2D; communicated points for k=3 shown

Latency-Avoiding GMRES (1) r = b - A*x₁, β = length(r), v₁ = r / β … O(log p) messages Wₖ₊₁ = [v₁, A*v₁, A²*v₁, … , Aᵏ*v₁] … O(1) messages [Q, R] = qr(Wₖ₊₁) … QR decomposition, O(log p) messages Hₖ = R(:, 2:k+1) * (R(1:k,1:k))⁻¹ … small, local problem find y minimizing length( Hₖ*y – βe₁ ) … small, local problem new x = x₁ + Qₖ*y … local problem O(log p) messages altogether, independent of k

Latency-Avoiding GMRES (2) [Q, R] = qr(Wₖ₊₁) … QR decomposition, O(log p) messages Easy, but not so stable, way to do it: X(myproc) = Wₖ₊₁(myproc)ᵀ * Wₖ₊₁(myproc) … local computation Y = sum_reduction(X(myproc)) … O(log p) messages … Y = Wₖ₊₁ᵀ * Wₖ₊₁ R = (cholesky(Y))ᵀ … small, local computation Q(myproc) = Wₖ₊₁(myproc) * R⁻¹ … local computation
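
That recipe (sometimes called CholeskyQR) is short enough to sketch in full. Assumptions: each process holds an mloc-by-s block of W column-major, s is small, and no stability safeguards are attempted:

#include <mpi.h>
#include <math.h>
#include <string.h>

/* On return, W holds the local block of Q and R the s-by-s upper
 * triangular factor (column-major).  No pivoting, no breakdown checks. */
void cholesky_qr(double *W, int mloc, int s, double *R, MPI_Comm comm)
{
    double X[s * s];                      /* local Gram matrix W^T W   */
    for (int j = 0; j < s; j++)
        for (int i = 0; i < s; i++) {
            double t = 0.0;
            for (int l = 0; l < mloc; l++)
                t += W[i * mloc + l] * W[j * mloc + l];
            X[i + j * s] = t;
        }
    /* Y = sum over processes: the only communication, O(log p) */
    MPI_Allreduce(MPI_IN_PLACE, X, s * s, MPI_DOUBLE, MPI_SUM, comm);

    memset(R, 0, (size_t)s * s * sizeof *R);  /* X = R^T R, R upper */
    for (int j = 0; j < s; j++) {
        double d = X[j + j * s];
        for (int i = 0; i < j; i++) d -= R[i + j * s] * R[i + j * s];
        R[j + j * s] = sqrt(d);
        for (int c = j + 1; c < s; c++) {
            double t = X[j + c * s];
            for (int i = 0; i < j; i++) t -= R[i + j * s] * R[i + c * s];
            R[j + c * s] = t / R[j + j * s];
        }
    }
    for (int l = 0; l < mloc; l++)        /* Q = W R^{-1}, in place    */
        for (int j = 0; j < s; j++) {
            double t = W[j * mloc + l];
            for (int i = 0; i < j; i++) t -= W[i * mloc + l] * R[i + j * s];
            W[j * mloc + l] = t / R[j + j * s];
        }
}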

Numerical example (1) Diagonal matrix with n=1000, Aᵢᵢ from 1 down to 10⁻⁵ Instability as k grows, after many iterations By using higher precision, the instabilities start later and later.

Numerical Example (2) Partial remedy: restarting periodically (every 120 iterations) Other remedies: high precision, a different basis than [v, A*v, … , Aᵏ*v]

Operation Counts for [Ax, A²x, A³x, …, Aᵏx] on p procs
Assume the problem is large, so k << n/p in the 1D case and k << n/p^(1/3) in the 3D case.
1D mesh (tridiagonal), per-proc cost, Standard vs. Optimized:
#messages: 2k vs. 2
#words sent: …
#flops: 5kn/p vs. 5kn/p + 5k²
memory: (k+4)n/p vs. (k+4)n/p + 8k
3D mesh (27-point stencil), per-proc cost, Standard vs. Optimized:
#messages: 26k vs. 26
#words sent: 6kn²p^(-2/3) + 12knp^(-1/3) + O(k) vs. 6kn²p^(-2/3) + 12k²np^(-1/3) + O(k³)
#flops: 53kn³/p vs. 53kn³/p + O(k²n²p^(-2/3))
memory: (k+28)n³/p + 6n²p^(-2/3) + O(np^(-1/3)) vs. (k+28)n³/p + 168kn²p^(-2/3) + O(k²np^(-1/3))

Summary and Future Work Dense LAPACK ScaLAPACK Communication primitives Sparse Kernels, Stencils Higher level algorithms All of the above on new architectures Vector, SMPs, multicore, Cell, … High level support for tuning Specification language Integration into compilers

Extra Slides

A Sparse Matrix You Encounter Every Day Who am I? I am a Big Repository Of useful And useless Facts alike. Who am I? (Hint: Not your e-mail inbox.) You probably use a sparse matrix everyday, albeit indirectly. Here, I show a million x million submatrix of the web connectivity graph, constructed from an archive at the Stanford WebBase.

What about the Google Matrix? Google approach Approx. once a month: rank all pages using connectivity structure Find the dominant eigenvector of a matrix At query-time: return a list of pages ordered by rank Matrix: A = αG + (1-α)(1/n)uuᵀ Markov model: surfer follows a link with probability α, jumps to a random page with probability 1-α G is the n x n connectivity matrix [n ≈ billions] gᵢⱼ is non-zero if page i links to page j Normalized so each column sums to 1 Very sparse: about 7–8 non-zeros per row (power-law distribution) u is a vector of all 1 values Steady-state probability xᵢ of landing on page i is the solution to x = Ax Approximate x by the power method: x = Aᵏx₀ In practice, k ≈ 25 The matrix to which Google applies the power method is the sum of the connectivity graph matrix G plus a rank-one update. According to Cleve’s corner, α ≈ 0.85. Macroscopic connectivity structure: Kumar, et al. “The web as a graph.” PODS 2000. http://citeseer.nj.nec.com/290635.html Size of G as of May ’02: Cleve’s corner. “The world’s largest sparse matrix computation.” http://www.mathworks.com/company/newsletter/clevescorner/oct02_cleve.shtml “k = 25”: Taher Haveliwala @ Stanford (student of Jeff Ullman), private communication: “For 120M pages, [the computation takes] about 5 hours for 25 iterations, no parallelization. The limiting factor is the disk speed for sequentially reading in the 8 GB matrix (it is assumed to be larger than main memory).” [13 Mar ’02]
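
Note that a power-method step never needs the dense rank-one term explicitly, since uuᵀx = (Σⱼ xⱼ)·u. A hedged C sketch of one step, with G in CSR (function and argument names are illustrative; dangling-node handling omitted):

#include <stddef.h>

/* One step of the power method x <- A x for A = alpha*G + (1-alpha)(1/n)uu^T,
 * where G is sparse (CSR) and u is the all-ones vector. */
void pagerank_step(size_t n, const size_t *ptr, const size_t *ind,
                   const double *gval, double alpha,
                   const double *x, double *y)
{
    double s = 0.0;
    for (size_t j = 0; j < n; j++)         /* u^T x = sum of entries */
        s += x[j];
    double teleport = (1.0 - alpha) * s / (double)n;

    for (size_t i = 0; i < n; i++) {
        double t = 0.0;
        for (size_t k = ptr[i]; k < ptr[i + 1]; k++)
            t += gval[k] * x[ind[k]];      /* (G x)_i, one SpMV row */
        y[i] = alpha * t + teleport;       /* rank-one term folded in */
    }
}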

Current Work Public software release Impact on library designs: Sparse BLAS, Trilinos, PETSc, … Integration in large-scale applications DOE: Accelerator design; plasma physics Geophysical simulation based on Block Lanczos (AᵀA·X; LBL) Systematic heuristics for data structure selection? Evaluation of emerging architectures Revisiting vector micros Other sparse kernels Matrix triple products, Aᵏ·x Parallelism Sparse benchmarks (with UTK) [Gahvari & Hoemmen] Automatic tuning of MPI collective ops [Nishtala, et al.]

Summary of High-Level Themes “Kernel-centric” optimization Vs. basic block, trace, path optimization, for instance Aggressive use of domain-specific knowledge Performance bounds modeling Evaluating software quality Architectural characterizations and consequences Empirical search Hybrid on-line/run-time models Statistical performance models Exploit information from sampling, measuring

Related Work My bibliography: 337 entries so far Sample area 1: Code generation Generative & generic programming Sparse compilers Domain-specific generators Sample area 2: Empirical search-based tuning Kernel-centric linear algebra, signal processing, sorting, MPI, … Compiler-centric profiling + FDO, iterative compilation, superoptimizers, self-tuning compilers, continuous program optimization

Future Directions (A Bag of Flaky Ideas) Composable code generators and search spaces New application domains PageRank: multilevel block algorithms for topic-sensitive search? New kernels: cryptokernels rich mathematical structure germane to performance; lots of hardware New tuning environments Parallel, Grid, “whole systems” Statistical models of application performance Statistical learning of concise parametric models from traces for architectural evaluation Compiler/automatic derivation of parametric models

Possible Future Work Different Architectures New FP instruction sets (SSE2) SMP / multicore platforms Vector architectures Different Kernels Higher Level Algorithms Parallel versions of kernels, with optimized communication Block algorithms (e.g., Lanczos) XBLAS (extra precision) Produce Benchmarks Augment HPCC Benchmark Make it possible to combine optimizations easily for any kernel Related tuning activities (LAPACK & ScaLAPACK)

Review of Tuning by Illustration (Extra Slides)

Splitting for Variable Blocks and Diagonals Decompose A = A₁ + A₂ + … + Aₜ Detect “canonical” structures (sampling) Split Tune each Aᵢ Improve performance and save storage New data structures Unaligned block CSR Relax alignment in rows & columns Row-segmented diagonals

Example: Variable Block Row (Matrix #12) 2.1x over CSR, 1.8x over RB

Example: Row-Segmented Diagonals 2x over CSR

Mixed Diagonal and Block Structure

Summary Automated block size selection Empirical modeling and search Register blocking for SpMV, triangular solve, AᵀA·x Not fully automated Given a matrix, select splittings and transformations Lots of combinatorial problems TSP reordering to create dense blocks (Pinar ’97; Moon, et al. ’04)

Extra Slides

Problem Context Sparse kernels abound Models of buildings, cars, bridges, economies, … Google PageRank algorithm Historical trends Sparse matrix-vector multiply (SpMV): 10% of peak 2x faster with “hand-tuning” Tuning becoming more difficult over time Promise of automatic tuning: PHiPAC/ATLAS, FFTW, … Challenges to high performance Not dense linear algebra! Complex data structures: indirect, irregular memory access Performance depends strongly on run-time inputs Historical trends: we’ll look at the data that supports these claims shortly

Key Questions, Ideas, Conclusions How to tune basic sparse kernels automatically? Empirical modeling and search Up to 4x speedups for SpMV 1.8x for triangular solve 4x for AᵀA·x; 2x for A²·x 7x for multiple vectors What are the fundamental limits on performance? Kernel-, machine-, and matrix-specific upper bounds Achieve 75% or more for SpMV, limiting low-level tuning Consequences for architecture? General techniques for empirical search-based tuning? Statistical models of performance Reference implementations based on CSR format (to be discussed)

Road Map Sparse matrix-vector multiply (SpMV) in a nutshell Historical trends and the need for search Automatic tuning techniques Upper bounds on performance Statistical models of performance

Compressed Sparse Row (CSR) Storage A one-slide crash course on a basic sparse matrix data structure (compressed sparse row, or CSR, format) and a basic kernel: matrix-vector multiply. In CSR: store each non-zero value, but also need an extra index for each non-zero → less storage than dense, but a higher “rate” of storage per non-zero. Matrix-vector multiply (MVM): no re-use of A, just of the vectors x and y, taken to be dense. MVM with CSR storage: indirect/irregular accesses to x! So what you’d like to do is: Unroll the k loop → but you need to know how many non-zeros per row Hoist y[i] → not a problem, absent aliasing Eliminate ind[k] → it’s an extra load, but you need to know the non-zero pattern Reuse elements of x → but you need to know the non-zero pattern Matrix-vector multiply kernel: y(i) ← y(i) + A(i,j)*x(j) for each row i for k=ptr[i] to ptr[i+1]-1 do y[i] = y[i] + val[k]*x[ind[k]]

Road Map Sparse matrix-vector multiply (SpMV) in a nutshell Historical trends and the need for search Automatic tuning techniques Upper bounds on performance Statistical models of performance

Historical Trends in SpMV Performance The Data Uniprocessor SpMV performance since 1987 “Untuned” and “Tuned” implementations Cache-based superscalar micros; some vectors LINPACK

SpMV Historical Trends: Mflop/s

Example: The Difficulty of Tuning nnz = 1.5 M kernel: SpMV Source: NASA structural analysis problem FEM discretization from a structural analysis problem. Source: Matrix “olafu”, provided by NASA via the University of Florida Sparse Matrix Collection http://www.cise.ufl.edu/~davis/sparse/Simon/olafu.rua.html

Still More Surprises More complicated non-zero structure in general An enlarged submatrix for “ex11”, from an FEM fluid flow application. Original matrix: n = 16614, nnz = 1.1M Source: UF Sparse Matrix Collection http://www.cise.ufl.edu/~davis/sparse/FIDAP/ex11.rua.html

Still More Surprises More complicated non-zero structure in general Example: 3x3 blocking Logical grid of 3x3 cells Suppose we know we want to use 3x3 blocking. SPARSITY lays a logical 3x3 grid on the matrix…

Historical Trends: Mixed News Observations Good news: Moore’s law like behavior Bad news: “Untuned” is 10% peak or less, worsening Good news: “Tuned” roughly 2x better today, and improving Bad news: Tuning is complex (Not really news: SpMV is not LINPACK) Questions Application: Automatic tuning? Architect: What machines are good for SpMV?

Road Map Sparse matrix-vector multiply (SpMV) in a nutshell Historical trends and the need for search Automatic tuning techniques SpMV [SC’02; IJHPCA ’04b] Sparse triangular solve (SpTS) [ICS/POHLL ’02] AᵀA·x [ICCS/WoPLA ’03] Upper bounds on performance Statistical models of performance

SPARSITY: Framework for Tuning SpMV SPARSITY: Automatic tuning for SpMV [Im & Yelick ’99] General approach Identify and generate implementation space Search space using empirical models & experiments Prototype library and heuristic for choosing register block size Also: cache-level blocking, multiple vectors What’s new? New block size selection heuristic Within 10% of optimal — replaces previous version Expanded implementation space Variable block splitting, diagonals, combinations New kernels: sparse triangular solve, AᵀA·x, Aʳ·x

Automatic Register Block Size Selection Selecting the r x c block size Off-line benchmark: characterize the machine Precompute Mflops(r,c) using dense matrix for each r x c Once per machine/architecture Run-time “search”: characterize the matrix Sample A to estimate Fill(r,c) for each r x c Run-time heuristic model Choose r, c to maximize Mflops(r,c) / Fill(r,c) Run-time costs Up to ~40 SpMVs (empirical worst case) Using this approach, we do choose the right block sizes for the two previous examples.

Accuracy of the Tuning Heuristics (1/4) DGEMV NOTE: “Fair” flops used (ops on explicit zeros not counted as “work”)

Accuracy of the Tuning Heuristics (2/4) DGEMV

Accuracy of the Tuning Heuristics (3/4) DGEMV

Accuracy of the Tuning Heuristics (4/4) DGEMV

Road Map Sparse matrix-vector multiply (SpMV) in a nutshell Historical trends and the need for search Automatic tuning techniques Upper bounds on performance SC’02 Statistical models of performance

Motivation for Upper Bounds Model Questions Speedups are good, but what is the speed limit? Independent of instruction scheduling, selection What machines are “good” for SpMV?

Upper Bounds on Performance: Blocked SpMV P = (flops) / (time) Flops = 2 * nnz(A) Lower bound on time: Two main assumptions 1. Count memory ops only (streaming) 2. Count only compulsory, capacity misses: ignore conflicts Account for line sizes Account for matrix size and nnz Charge min access “latency” ai at Li cache & amem e.g., Saavedra-Barrera and PMaC MAPS benchmarks
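The lower bound on time is a sum over cache levels: charge every hit at level i its minimum latency ai and every miss from the last level the memory latency amem. A hedged C sketch of this model (the function name and the Mflop/s conversion are mine, not from the slides):

    /* Upper bound on SpMV Mflop/s from the model above: count memory
       operations only, with m[i] = misses at cache level i+1 (so hits at
       level i+1 are m[i-1] - m[i]), alpha[i] = minimum access latency in
       seconds at that level, and alpha_mem = memory latency. */
    double spmv_mflops_upper_bound(long nnz, double loads,
                                   const double m[], const double alpha[],
                                   int levels, double alpha_mem) {
      double t = (loads - m[0]) * alpha[0];      /* L1 hits */
      for (int i = 1; i < levels; i++)
        t += (m[i-1] - m[i]) * alpha[i];         /* hits at deeper levels */
      t += m[levels-1] * alpha_mem;              /* misses served by memory */
      return 2.0 * nnz / t * 1e-6;               /* flops = 2*nnz(A) */
    }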

Example: Bounds on Itanium 2

Example: Bounds on Itanium 2

Example: Bounds on Itanium 2

Fraction of Upper Bound Across Platforms

Achieved Performance and Machine Balance Machine balance [Callahan '88; McCalpin '95] Balance = Peak Flop Rate / Bandwidth (flops / double) Ideal balance for mat-vec: ≤ 2 flops / double For SpMV, even less SpMV ~ streaming: rate ~ 1 / (avg load time to stream 1 array) ~ bandwidth "Sustained" balance = peak flops / model bandwidth

Where Does the Time Go? Most time assigned to memory Caches “disappear” when line sizes are equal Strictly increasing line sizes

Execution Time Breakdown: Matrix 40 TODO: Update this figure from thesis

Speedups with Increasing Line Size

Summary: Performance Upper Bounds What is the best we can do for SpMV? Limits to low-level tuning of blocked implementations Refinements? What machines are good for SpMV? Partial answer: balance characterization Architectural consequences? Example: Strictly increasing line sizes

Road Map Sparse matrix-vector multiply (SpMV) in a nutshell Historical trends and the need for search Automatic tuning techniques Upper bounds on performance Tuning other sparse kernels Statistical models of performance FDO ’00; IJHPCA ’04a

Statistical Models for Automatic Tuning Idea 1: Statistical criterion for stopping a search A general search model Generate implementation Measure performance Repeat Stop when probability of being within e of optimal falls below threshold Can estimate distribution on-line Idea 2: Statistical performance models Problem: Choose 1 among m implementations at run-time Sample performance off-line, build statistical model
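One simplistic proxy for the stopping criterion, sketched in C (this is my simplification, not the statistical model from the slides: it stops when a window of recent samples has produced no improvement of more than a factor of 1+eps over the best seen before the window):

    /* perf[0..n-1]: measured Mflop/s of implementations tried so far.
       Returns 1 when the last W samples improved on the earlier best
       by no more than a factor of (1 + eps). */
    int should_stop(const double *perf, int n, int W, double eps) {
      if (n <= W) return 0;
      double best_before = 0.0, best_all = 0.0;
      for (int i = 0; i < n - W; i++)
        if (perf[i] > best_before) best_before = perf[i];
      for (int i = 0; i < n; i++)
        if (perf[i] > best_all) best_all = perf[i];
      return best_all <= best_before * (1.0 + eps);
    }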

Example: Select a Matmul Implementation

Example: Support Vector Classification

Road Map Sparse matrix-vector multiply (SpMV) in a nutshell Historical trends and the need for search Automatic tuning techniques Upper bounds on performance Tuning other sparse kernels Statistical models of performance Summary and Future Work

Summary of High-Level Themes “Kernel-centric” optimization Vs. basic block, trace, path optimization, for instance Aggressive use of domain-specific knowledge Performance bounds modeling Evaluating software quality Architectural characterizations and consequences Empirical search Hybrid on-line/run-time models Statistical performance models Exploit information from sampling, measuring

Related Work My bibliography: 337 entries so far Sample area 1: Code generation Generative & generic programming Sparse compilers Domain-specific generators Sample area 2: Empirical search-based tuning Kernel-centric linear algebra, signal processing, sorting, MPI, … Compiler-centric profiling + FDO, iterative compilation, superoptimizers, self-tuning compilers, continuous program optimization

Future Directions (A Bag of Flaky Ideas) Composable code generators and search spaces New application domains PageRank: multilevel block algorithms for topic-sensitive search? New kernels: cryptokernels rich mathematical structure germane to performance; lots of hardware New tuning environments Parallel, Grid, “whole systems” Statistical models of application performance Statistical learning of concise parametric models from traces for architectural evaluation Compiler/automatic derivation of parametric models

Acknowledgements Super-advisors: Jim and Kathy Undergraduate R.A.s: Attila, Ben, Jen, Jin, Michael, Rajesh, Shoaib, Sriram, Tuyet-Linh See pages xvi—xvii of dissertation.

TSP-based Reordering: Before (Pinar ’97; Moon, et al ‘04)

TSP-based Reordering: After (Pinar ’97; Moon, et al ‘04) Up to 2x speedups over CSR

Example: L2 Misses on Itanium 2 Misses measured using PAPI [Browne ’00]

Example: Distribution of Blocked Non-Zeros TO DO: Replace this with ex11 spy plot

Register Profile: Itanium 2 1190 Mflop/s 190 Mflop/s

Register Profiles: Sun and Intel x86 Ultra 2i - 11% 72 Mflop/s Ultra 3 - 5% 90 Mflop/s Register Profiles: Sun and Intel x86 35 Mflop/s 50 Mflop/s Pentium III - 21% Pentium III-M - 15% 108 Mflop/s 122 Mflop/s 42 Mflop/s 58 Mflop/s

Register Profiles: IBM and Intel IA-64 Power3 - 17% 252 Mflop/s Power4 - 16% 820 Mflop/s Register Profiles: IBM and Intel IA-64 122 Mflop/s 459 Mflop/s Itanium 1 - 8% Itanium 2 - 33% 247 Mflop/s 1.2 Gflop/s 107 Mflop/s 190 Mflop/s

Accurate and Efficient Adaptive Fill Estimation Idea: Sample matrix Fraction of matrix to sample: s ∈ [0,1] Cost ~ O(s * nnz) Control cost by controlling s Search at run-time: the constant matters! Control s automatically by computing statistical confidence intervals Idea: Monitor variance Cost of tuning Lower bound: convert matrix in 5 to 40 unblocked SpMVs Heuristic: 1 to 11 SpMVs
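A runnable C sketch of fill estimation by sampling block rows (the flag-array bookkeeping is my choice; the sampling idea is the slide's):

    #include <stdlib.h>

    /* Estimate Fill(r,c) = (r*c * number of occupied r-by-c blocks) / nnz
       by examining each block row with probability s. */
    double estimate_fill(int nrows, int ncols, const int *ptr, const int *ind,
                         int r, int c, double s) {
      int nbc = (ncols + c - 1) / c;             /* number of block columns */
      char *touched = calloc(nbc, 1);
      long nnz_seen = 0, blocks = 0;
      for (int I = 0; I < nrows; I += r) {
        if ((double)rand() / RAND_MAX > s) continue;   /* skip this block row */
        int hi = (I + r < nrows) ? I + r : nrows;
        for (int i = I; i < hi; i++)
          for (int k = ptr[i]; k < ptr[i+1]; k++) {
            nnz_seen++;
            int J = ind[k] / c;
            if (!touched[J]) { touched[J] = 1; blocks++; }
          }
        for (int i = I; i < hi; i++)             /* reset flags for next block row */
          for (int k = ptr[i]; k < ptr[i+1]; k++)
            touched[ind[k] / c] = 0;
      }
      free(touched);
      return nnz_seen ? (double)blocks * r * c / nnz_seen : 1.0;
    }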

Sparse/Dense Partitioning for SpTS Partition L into sparse (L1, L2) and dense LD: L = [L1 0; L2 LD]. Perform SpTS in three steps: (1) solve the sparse triangular system L1*x1 = b1; (2) update b2' = b2 − L2*x1; (3) solve the dense triangular system LD*x2 = b2'. Sparsity optimizations for (1)—(2); DTRSV for (3) Tuning parameters: block size, size of dense triangle

SpTS Performance: Power3

Summary of SpTS and AAT*x Results SpTS — similar to SpMV: 1.8x speedups; limited benefit from low-level tuning AAT*x, ATA*x Cache interleaving only: up to 1.6x speedups Reg + cache: up to 4x speedups (1.8x speedup over register blocking only) Similar heuristic; same accuracy (within ~10% of optimal) Further from upper bounds: 60—80% Opportunity for better low-level tuning a la PHiPAC/ATLAS Matrix triple products? Ak*x? Preliminary work

Register Blocking: Speedup

Register Blocking: Performance

Register Blocking: Fraction of Peak

Example: Confidence Interval Estimation

Costs of Tuning

Splitting + UBCSR: Pentium III

Splitting + UBCSR: Power4

Splitting+UBCSR Storage: Power4

Example: Variable Block Row (Matrix #13)

Dense Tuning is Hard, Too Even dense matrix multiply can be notoriously difficult to tune

2-D cross section of 3-D space. Dense matrix multiply: surprising performance as register tile size varies.

Preliminary Results (Matrix Set 2): Itanium 2 Max speedups are several times faster than on earlier architectures (cf. 200 Mflop/s on Itanium 1-800, 150 Mflop/s on Power 3, 300 Mflop/s on Pentium 4-1.5 GHz). Categories: Matrix 0-2: dense, dense lower triangle, dense upper triangle Matrix 3-18: FEM (uniform blocking) Matrix 19-38: FEM (variable blocking) Matrix 39-42: Protein modeling Matrix 43: Quantum chromodynamics calculation Matrix 44-46: Chemistry and chemical engineering Matrix 47-49: Circuits Matrix 50-57: Macroeconomic modeling Matrix 58-66: Statistical modeling Matrix 67-68: Finite-field problems Matrix 69-77: Linear programming Matrix 78-81: Information retrieval

Multiple Vector Performance On a dense matrix in sparse format, with multiple vectors we can get close to peak machine speed (up to 3 Gflop/s, peak on this machine is 3.6 Gflop/s).

"40: webdoc" is the LSI matrix. (Figure taken from IJHPCA paper, to appear.) "41—42" are linear programming matrices. 43 is also linear programming (scheduling problem from Italian Railways). NOTE: Matrix numbers (40—44) not consistent with previous slide.

What about the Google Matrix? Google approach Approx. once a month: rank all pages using connectivity structure Find dominant eigenvector of a matrix At query-time: return list of pages ordered by rank Matrix: A = αG + (1−α)(1/n)uuT Markov model: Surfer follows link with probability α, jumps to a random page with probability 1−α G is n x n connectivity matrix [n ≈ 3 billion] gij is non-zero if page i links to page j Normalized so each column sums to 1 Very sparse: about 7—8 non-zeros per row (power law dist.) u is a vector of all 1 values Steady-state probability xi of landing on page i is solution to x = Ax Approximate x by power method: x ≈ Ak*x0 In practice, k ≈ 25 The matrix to which Google applies the power method is the sum of the connectivity graph matrix G plus a rank-one update. According to Cleve's corner, α ≈ 0.85. Macroscopic connectivity structure: Kumar, et al. "The web as a graph." PODS 2000. http://citeseer.nj.nec.com/290635.html Size of G as of May '02: Cleve's corner. "The world's largest sparse matrix computation." http://www.mathworks.com/company/newsletter/clevescorner/oct02_cleve.shtml "k = 25": Taher Haveliwala @ Stanford (student of Jeff Ullman), private communication: "For 120M pages, [the computation takes] about 5 hours for 25 iterations, no parallelization. The limiting factor is the disk speed for sequentially reading in the 8 GB matrix (it is assumed to be larger than main memory)." [13 Mar '02]
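One power-method step y = A*x with A = αG + (1−α)(1/n)uuT reduces to an SpMV with G plus a rank-one correction, since uuT*x = u*(sum of x). A C sketch (assumes G is stored in CSR with columns already normalized, per the slide; the function name is mine):

    /* y = alpha*G*x + (1-alpha)/n * (sum of x) * u, with u = all ones. */
    void pagerank_step(int n, const int *ptr, const int *ind, const double *val,
                       double alpha, const double *x, double *y) {
      double sx = 0.0;
      for (int i = 0; i < n; i++) sx += x[i];
      double teleport = (1.0 - alpha) * sx / n;  /* the rank-one term */
      for (int i = 0; i < n; i++) {
        double t = 0.0;
        for (int k = ptr[i]; k < ptr[i+1]; k++)
          t += val[k] * x[ind[k]];               /* the SpMV with G */
        y[i] = alpha * t + teleport;
      }
    }

    /* Repeat k ~ 25 times, swapping x and y, to approximate x = Ak*x0. */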

The blue bars compare the effect of just reordering the matrix—no significant improvements. The red bars compare the effect of combined reordering and register blocking. Each red bar is labeled by the block size chosen, and by the fill (in parentheses). We are currently looking at splitting the matrix to reduce this fill overhead while sustaining the performance improvement. In terms of storage cost (MB for double precision values and 32-bit integers): Size(natural, RCM, MMD, no blocking) ~= 35.5 MB Size(natural, 2x1) ~= 44.3 MB, or ~1.25x Size(RCM, 3x1) ~= 54.9 MB, or ~1.55x Size(MMD, 4x1) ~= 66.3 MB, or ~1.87x Let k = no. of non-zeros, f be the fill, and r x c be the block size. Also suppose sizeof(floating point data type) = sizeof(integer data type). Then the size of CSR storage is sizeof(A, 'csr') = 2k (ignoring row pointers) and sizeof(A, 'rxc') = kf * (1 + 1/(rc)). So we expect a storage savings even with fill when sizeof(A, 'rxc') < sizeof(A, 'csr'): kf * (1 + 1/(rc)) < 2k ⇒ f < 2 / (1 + 1/(rc)). As rc → infinity, this condition becomes f < 2. (This case corresponds to the floating point values and integers being both 32-bit or both 64-bit, for example.) If sizeof(real) = 2*sizeof(int), then this condition becomes f < 1.5 as rc → infinity. (This case corresponds to storing the values in double precision, but storing the indices in 32-bit integers.) If sizeof(real) = 0.5*sizeof(int), then this condition becomes f < 3 as rc → infinity. (This case corresponds to storing the values in single precision, but the indices in 64-bit integers.)
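The same break-even condition, generalized to arbitrary value and index word sizes, in a few lines of C (a sketch following the derivation above; it ignores row pointers as the slide does):

    /* Largest fill f for which r x c blocking still saves storage:
       CSR costs nnz*(sv + si); blocked costs nnz*f*(sv + si/(r*c)),
       where sv, si are the value and index word sizes in bytes. */
    double breakeven_fill(int r, int c, double sv, double si) {
      return (sv + si) / (sv + si / (double)(r * c));
    }
    /* e.g. breakeven_fill(r, c, 8, 4) -> f < 1.5 as r*c grows,
       matching the double-values / 32-bit-indices case above. */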

MAPS Benchmark Example: Power4

MAPS Benchmark Example: Itanium 2

Saavedra-Barrera Example: Ultra 2i

I think we can mention the bounds without talking about them in detail. The main point is that this picture is a guide to what performance we get without blocking (reference) and what we might expect (bounds). Note that absolute performance is highly dependent on matrix structure.

Summary of Results: Pentium III Plot of the best block size (exhaustive search). Points: It helps for some matrices and not others, but no-blocking (1x1) is part of the implementation space, so we never do worse. We're close to the bounds. We should emphasize that the Intel compiler is generating the code here, so kudos to the Intel compiler group.

Summary of Results: Pentium III (3/3) And our heuristic always chooses a good block size (includes 1x1) in practice. Similar results for other platforms in the “Extra Slides” section.

Execution Time Breakdown (PAPI): Matrix 40

Preliminary Results (Matrix Set 1): Itanium 2 Dense FEM FEM (var) Assorted LP

Tuning Sparse Triangular Solve (SpTS) Compute x=L-1*b where L sparse lower triangular, x & b dense L from sparse LU has rich dense substructure Dense trailing triangle can account for 20—90% of matrix non-zeros SpTS optimizations Split into sparse trapezoid and dense trailing triangle Use tuned dense BLAS (DTRSV) on dense triangle Use Sparsity register blocking on sparse part Tuning parameters Size of dense trailing triangle Register block size
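A C sketch of the split solve (the data layout is an assumption: CSR rows 0..ns−1 hold L1 with the diagonal stored last in each row, CSR rows ns..ns+nd−1 hold only the L2 columns, and the dense trailing triangle LD is row-major; a tuned DTRSV would replace step 3):

    /* Solve L*x = b where L = [L1 0; L2 LD], L1 sparse (ns x ns),
       L2 sparse (nd x ns), LD dense lower triangular (nd x nd). */
    void spts_split(int ns, int nd,
                    const int *ptr, const int *ind, const double *val,
                    const double *LD, const double *b, double *x) {
      /* (1) sparse forward solve: L1*x1 = b1 */
      for (int i = 0; i < ns; i++) {
        double t = b[i];
        int last = ptr[i+1] - 1;                 /* diagonal entry of row i */
        for (int k = ptr[i]; k < last; k++)
          t -= val[k] * x[ind[k]];
        x[i] = t / val[last];
      }
      /* (2) update: b2' = b2 - L2*x1 */
      for (int i = 0; i < nd; i++) {
        double t = b[ns + i];
        for (int k = ptr[ns+i]; k < ptr[ns+i+1]; k++)
          t -= val[k] * x[ind[k]];
        x[ns + i] = t;
      }
      /* (3) dense forward solve: LD*x2 = b2' (use DTRSV in practice) */
      for (int i = 0; i < nd; i++) {
        double t = x[ns + i];
        for (int j = 0; j < i; j++)
          t -= LD[i*nd + j] * x[ns + j];
        x[ns + i] = t / LD[i*nd + i];
      }
    }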

Sparse Kernels and Optimizations Sparse matrix-vector multiply (SpMV): y=A*x Sparse triangular solve (SpTS): x=T-1*b y=AAT*x, y=ATA*x Powers (y=Ak*x), sparse triple-product (R*A*RT), … Optimization techniques (implementation space) Register blocking Cache blocking Multiple dense vectors (x) A has special structure (e.g., symmetric, banded, …) Hybrid data structures (e.g., splitting, switch-to-dense, …) Matrix reordering How and when do we search? Off-line: Benchmark implementations Run-time: Estimate matrix properties, evaluate performance models based on benchmark data

Cache Blocked SpMV on LSI Matrix: Ultra 2i 10k x 255k 3.7M non-zeros Baseline: 16 Mflop/s Best block size & performance: 16k x 64k 28 Mflop/s Best block size on this platform: Optimal size is machine dependent.

Cache Blocking on LSI Matrix: Pentium 4 10k x 255k 3.7M non-zeros Baseline: 44 Mflop/s Best block size & performance: 16k x 16k 210 Mflop/s LSI matrix has nearly randomly distributed non-zeros.

Cache Blocked SpMV on LSI Matrix: Itanium 10k x 255k 3.7M non-zeros Baseline: 25 Mflop/s Best block size & performance: 16k x 32k 72 Mflop/s

Cache Blocked SpMV on LSI Matrix: Itanium 2 10k x 255k 3.7M non-zeros Baseline: 170 Mflop/s Best block size & performance: 16k x 65k 275 Mflop/s

Inter-Iteration Sparse Tiling (1/3) x1 t1 y1 [Strout, et al., ‘01] Let A be 6x6 tridiagonal Consider y=A2x t=Ax, y=At Nodes: vector elements Edges: matrix elements aij x2 t2 y2 This generalization of Strout algorithm for an arbitrary Akx stolen from my quals talk. x3 t3 y3 x4 t4 y4 x5 t5 y5

Inter-Iteration Sparse Tiling (2/3) x1 t1 y1 [Strout, et al., ‘01] Let A be 6x6 tridiagonal Consider y=A2x t=Ax, y=At Nodes: vector elements Edges: matrix elements aij Orange = everything needed to compute y1 Reuse a11, a12 x2 t2 y2 x3 t3 y3 x4 t4 y4 x5 t5 y5

Inter-Iteration Sparse Tiling (3/3) x1 t1 y1 [Strout, et al., ‘01] Let A be 6x6 tridiagonal Consider y=A2x t=Ax, y=At Nodes: vector elements Edges: matrix elements aij Orange = everything needed to compute y1 Reuse a11, a12 Grey = y2, y3 Reuse a23, a33, a43 x2 t2 y2 Depending on cache placement policies, could reuse even more grey edges since they were read on previous iteration. x3 t3 y3 x4 t4 y4 x5 t5 y5
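For the tridiagonal example, the whole idea fits in one fused loop: produce t = A*x one entry ahead of consuming it in y = A*t, so each element of A and a 3-entry window of t are reused while still in registers. A C sketch (the array layout is my assumption: dl[i], d[i], du[i] hold A(i,i−1), A(i,i), A(i,i+1)):

    /* y = A2x for tridiagonal A: t[i+1] is computed just before y[i] needs it. */
    void a2x_tridiag(int n, const double *dl, const double *d, const double *du,
                     const double *x, double *y) {
      double tprev = 0.0;                                       /* t[i-1] */
      double tcur = d[0]*x[0] + (n > 1 ? du[0]*x[1] : 0.0);     /* t[0]   */
      for (int i = 0; i < n; i++) {
        double tnext = 0.0;                                     /* t[i+1] */
        if (i + 1 < n)
          tnext = dl[i+1]*x[i] + d[i+1]*x[i+1]
                + (i + 2 < n ? du[i+1]*x[i+2] : 0.0);
        y[i] = (i > 0 ? dl[i]*tprev : 0.0) + d[i]*tcur
             + (i + 1 < n ? du[i]*tnext : 0.0);
        tprev = tcur; tcur = tnext;                             /* slide the window */
      }
    }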

Inter-Iteration Sparse Tiling: Issues x1 t1 y1 Tile sizes (colored regions) grow with no. of iterations and increasing out-degree G likely to have a few nodes with high out-degree (e.g., Yahoo) Mathematical tricks to limit tile size? Judicious dropping of edges [Ng’01] x2 t2 y2 Idea of dropping edges follows from the following paper. Andrew Ng, Alice Zheng, Michael I. Jordan. “Link Analysis, Eigenvectors, and Stability.” IJCAI, 2001. http://citeseer.nj.nec.com/ng01link.html x3 t3 y3 x4 t4 y4 x5 t5 y5

Summary and Questions Need to understand matrix structure and machine BeBOP: suite of techniques to deal with different sparse structures and architectures Google matrix problem Established techniques within an iteration Ideas for inter-iteration optimizations Mathematical structure of problem may help Questions Structure of G? What are the computational bottlenecks? Enabling future computations? E.g., topic-sensitive PageRank ⇒ multiple vector version [Haveliwala '02] See www.cs.berkeley.edu/~richie/bebop/intel/google for more info, including more complete Itanium 2 results.

Exploiting Matrix Structure Symmetry (numerical or structural) Reuse matrix entries Can combine with register blocking, multiple vectors, … Matrix splitting Split the matrix, e.g., into r x c and 1 x 1 No fill overhead Large matrices with random structure E.g., Latent Semantic Indexing (LSI) matrices Technique: cache blocking Store matrix as 2i x 2j sparse submatrices Effective when x vector is large Currently, search to find fastest size Symmetry could be numerical or structural. In matrix splitting, we try to overcome the issue of too much fill by splitting into "true" r x c blocks (i.e., without fill) and keeping the remainder in conventional (CSR) storage.
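A C sketch of cache-blocked SpMV under the simplifying assumption that the matrix dimensions are multiples of the 2^bi x 2^bj block size (the csr_t struct and names are mine):

    typedef struct { int *ptr, *ind; double *val; } csr_t;   /* one sparse submatrix */

    /* y += A*x with A stored as an nbr x nbc grid of sparse submatrices,
       each 2^bi x 2^bj, so the active pieces of x and y stay in cache. */
    void spmv_cache_blocked(int nbr, int nbc, int bi, int bj,
                            const csr_t *blocks, const double *x, double *y) {
      for (int I = 0; I < nbr; I++)
        for (int J = 0; J < nbc; J++) {
          const csr_t *B = &blocks[I*nbc + J];
          const double *xb = x + ((long)J << bj);   /* slice of x for this block */
          double *yb = y + ((long)I << bi);         /* slice of y for this block */
          for (int i = 0; i < (1 << bi); i++)
            for (int k = B->ptr[i]; k < B->ptr[i+1]; k++)
              yb[i] += B->val[k] * xb[B->ind[k]];
        }
    }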

Symmetric SpMV Performance: Pentium 4

SpMV with Split Matrices: Ultra 2i

Cache Blocking on Random Matrices: Itanium Speedup on four banded random matrices.

Sparse Kernels and Optimizations Sparse matrix-vector multiply (SpMV): y=A*x Sparse triangular solve (SpTS): x=T-1*b y=AAT*x, y=ATA*x Powers (y=Ak*x), sparse triple-product (R*A*RT), … Optimization techniques (implementation space) Register blocking Cache blocking Multiple dense vectors (x) A has special structure (e.g., symmetric, banded, …) Hybrid data structures (e.g., splitting, switch-to-dense, …) Matrix reordering How and when do we search? Off-line: Benchmark implementations Run-time: Estimate matrix properties, evaluate performance models based on benchmark data

Register Blocked SpMV: Pentium III

Register Blocked SpMV: Ultra 2i

Register Blocked SpMV: Power3

Register Blocked SpMV: Itanium

Possible Optimization Techniques Within an iteration, i.e., computing (G+uuT)*x once Cache block G*x On linear programming matrices and matrices with random structure (e.g., LSI), 1.5—4x speedups Best block size is matrix and machine dependent Reordering and/or splitting of G to separate dense structure (rows, columns, blocks) Between iterations, e.g., (G+uuT)2x (G+uuT)2x = G2x + (Gu)uTx + u(uTG)x + u(uTu)uTx Compute Gu, uTG, uTu once for all iterations G2x: Inter-iteration tiling to read G only once We have established techniques for efficiently computing within an iteration. What about computing across iterations?
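A C sketch of the between-iterations idea (the helper names are mine; Gu, uTG = GT*u, and uTu are computed once, outside the iteration loop, per the slide):

    static void spmv(int n, const int *ptr, const int *ind, const double *val,
                     const double *x, double *y) {
      for (int i = 0; i < n; i++) {
        double t = 0.0;
        for (int k = ptr[i]; k < ptr[i+1]; k++) t += val[k] * x[ind[k]];
        y[i] = t;
      }
    }
    static double dot(int n, const double *a, const double *b) {
      double t = 0.0;
      for (int i = 0; i < n; i++) t += a[i] * b[i];
      return t;
    }

    /* y = (G+uuT)2x = G2x + (Gu)(uTx) + u(uTGx) + u(uTu)(uTx) */
    void g_plus_uuT_squared_x(int n, const int *ptr, const int *ind,
                              const double *val, const double *u,
                              const double *Gu, const double *uTG, double uTu,
                              const double *x, double *y, double *t /* scratch */) {
      spmv(n, ptr, ind, val, x, t);           /* t = G*x   */
      spmv(n, ptr, ind, val, t, y);           /* y = G2x   */
      double ux = dot(n, u, x);               /* uT*x      */
      double gx = dot(n, uTG, x);             /* (uT*G)*x  */
      for (int i = 0; i < n; i++)
        y[i] += Gu[i]*ux + u[i]*(gx + uTu*ux);
    }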

Multiple Vector Performance: Itanium

Sparse Kernels and Optimizations Sparse matrix-vector multiply (SpMV): y=A*x Sparse triangular solve (SpTS): x=T-1*b y=AAT*x, y=ATA*x Powers (y=Ak*x), sparse triple-product (R*A*RT), … Optimization techniques (implementation space) Register blocking Cache blocking Multiple dense vectors (x) A has special structure (e.g., symmetric, banded, …) Hybrid data structures (e.g., splitting, switch-to-dense, …) Matrix reordering How and when do we search? Off-line: Benchmark implementations Run-time: Estimate matrix properties, evaluate performance models based on benchmark data

SpTS Performance: Itanium (See POHLL ’02 workshop paper, at ICS ’02.)

Sparse Kernels and Optimizations Sparse matrix-vector multiply (SpMV): y=A*x Sparse triangular solve (SpTS): x=T-1*b y=AAT*x, y=ATA*x Powers (y=Ak*x), sparse triple-product (R*A*RT), … Optimization techniques (implementation space) Register blocking Cache blocking Multiple dense vectors (x) A has special structure (e.g., symmetric, banded, …) Hybrid data structures (e.g., splitting, switch-to-dense, …) Matrix reordering How and when do we search? Off-line: Benchmark implementations Run-time: Estimate matrix properties, evaluate performance models based on benchmark data

Optimizing AAT*x Kernel: y=AAT*x, where A is sparse, x & y dense Arises in linear programming, computation of SVD Conventional implementation: compute z=AT*x, y=A*z Elements of A can be reused: When ak represent blocks of columns, can apply register blocking.
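A C sketch of the interleaved computation y = sum over k of ak*(akT*x), which touches each column ak of A once instead of making two full passes (it assumes A's columns are available as CSR rows, i.e., we store AT in CSR; the function name is mine):

    /* y += A*AT*x, with row k of the CSR holding column a_k of A. */
    void aat_x(int ncols, const int *ptr, const int *ind,
               const double *val, const double *x, double *y) {
      for (int k = 0; k < ncols; k++) {
        double t = 0.0;
        for (int j = ptr[k]; j < ptr[k+1]; j++)   /* t = a_k^T * x */
          t += val[j] * x[ind[j]];
        for (int j = ptr[k]; j < ptr[k+1]; j++)   /* y += t * a_k, reusing a_k */
          y[ind[j]] += t * val[j];
      }
    }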

Optimized AAT*x Performance: Pentium III

Current Directions Applying new optimizations Other split data structures (variable block, diagonal, …) Matrix reordering to create block structure Structural symmetry New kernels (triple product RART, powers Ak, …) Tuning parameter selection Building an automatically tuned sparse matrix library Extending the Sparse BLAS Leverage existing sparse compilers as code generation infrastructure More thoughts on this topic tomorrow

Related Work Automatic performance tuning systems Code generation PHiPAC [Bilmes, et al., ’97], ATLAS [Whaley & Dongarra ’98] FFTW [Frigo & Johnson ’98], SPIRAL [Pueschel, et al., ’00], UHFFT [Mirkovic and Johnsson ’00] MPI collective operations [Vadhiyar & Dongarra ’01] Code generation FLAME [Gunnels & van de Geijn, ’01] Sparse compilers: [Bik ’99], Bernoulli [Pingali, et al., ’97] Generic programming: Blitz++ [Veldhuizen ’98], MTL [Siek & Lumsdaine ’98], GMCL [Czarnecki, et al. ’98], … Sparse performance modeling [Temam & Jalby ’92], [White & Saddayappan ’97], [Navarro, et al., ’96], [Heras, et al., ’99], [Fraguela, et al., ’99], …

More Related Work Compiler analysis, models CROPS [Carter, Ferrante, et al.]; Serial sparse tiling [Strout '01] TUNE [Chatterjee, et al.] Iterative compilation [O'Boyle, et al., '98] Broadway compiler [Guyer & Lin, '99] [Brewer '95], ADAPT [Voss '00] Sparse BLAS interfaces BLAST Forum (Chapter 3) NIST Sparse BLAS [Remington & Pozo '94]; SparseLib++ SPARSKIT [Saad '94] Parallel Sparse BLAS [Fillipone, et al. '96]

Context: Creating High-Performance Libraries Application performance dominated by a few computational kernels Today: Kernels hand-tuned by vendor or user Performance tuning challenges Performance is a complicated function of kernel, architecture, compiler, and workload Tedious and time-consuming Successful automated approaches Dense linear algebra: ATLAS/PHiPAC Signal processing: FFTW/SPIRAL/UHFFT

Cache Blocked SpMV on LSI Matrix: Itanium

Sustainable Memory Bandwidth

Multiple Vector Performance: Pentium 4

Multiple Vector Performance: Itanium

Multiple Vector Performance: Pentium 4

Optimized AAT*x Performance: Ultra 2i

Optimized AAT*x Performance: Pentium 4

Tuning Pays Off—PHiPAC

Tuning pays off – ATLAS Extends applicability of PHiPAC; incorporated in Matlab (with rest of LAPACK)

Register Tile Sizes (Dense Matrix Multiply) 333 MHz Sun Ultra 2i 2-D slice of 3-D space; implementations color-coded by performance in Mflop/s 16 registers, but 2-by-3 tile size fastest A 2-D slice of the 3-D core matmul space, with all other generator options fixed. This shows the performance (Mflop/s) of various core matmul implementations on a small in-L2 cache workload. The “best” is the dark red square at (m0=2,k0=1,n0=8), which achieved 620 Mflop/s. This experiment was performed on the Sun Ultra-10 workstation (333 MHz Ultra II processor, 2 MB L2 cache) using the Sun cc compiler v5.0 with the flags -dalign -xtarget=native -xO5 -xarch=v8plusa. The space is discrete and highly irregular. (This is not even the worst example!)

High Precision GEMV (XBLAS)

High Precision Algorithms (XBLAS) Double-double (high precision word represented as a pair of doubles) Many variations on these algorithms; we currently use Bailey's Exploiting Extra-wide Registers Suppose s(1), …, s(n) have f-bit fractions, and SUM has an F-bit fraction with F > f Consider the following algorithm for S = Σ_{i=1..n} s(i):
Sort so that |s(1)| ≥ |s(2)| ≥ … ≥ |s(n)|
SUM = 0; for i = 1 to n: SUM = SUM + s(i); end for; sum = SUM
Theorem (D., Hida) Suppose F < 2f (less than double precision) If n ≤ 2^(F−f) + 1, then error ≤ 1.5 ulps If n = 2^(F−f) + 2, then error ≤ 2^(2f−F) ulps (can be ≥ 1) If n ≥ 2^(F−f) + 3, then error can be arbitrary (S ≠ 0 but sum = 0) Examples s(i) double (f = 53), SUM double extended (F = 64): accurate if n ≤ 2^11 + 1 = 2049 Dot product of single precision x(i) and y(i): s(i) = x(i)*y(i) (f = 2*24 = 48), SUM double extended (F = 64) ⇒ accurate if n ≤ 2^16 + 1 = 65537
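Not the XBLAS double-double code itself, but a minimal C sketch of the same software-extra-precision idea, using Kahan compensated summation:

    /* Sum s[0..n-1], carrying the rounding error of each addition in a
       separate compensation term c to recover most of the lost bits. */
    double kahan_sum(const double *s, int n) {
      double sum = 0.0, c = 0.0;
      for (int i = 0; i < n; i++) {
        double y = s[i] - c;
        double t = sum + y;
        c = (t - sum) - y;    /* low-order bits lost in computing sum + y */
        sum = t;
      }
      return sum;
    }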

More Extra Slides

Opteron (1.4GHz, 2.8GFlop peak) Nonsymmetric peak = 504 MFlops Symmetric peak = 612 MFlops Beat ATLAS DGEMV’s 365 Mflops

Awards Best Paper, Intern. Conf. Parallel Processing, 2004 “Performance models for evaluation and automatic performance tuning of symmetric sparse matrix-vector multiply” Best Student Paper, Intern. Conf. Supercomputing, Workshop on Performance Optimization via High-Level Languages and Libraries, 2003 Best Student Presentation too, to Richard Vuduc “Automatic performance tuning and analysis of sparse triangular solve” Finalist, Best Student Paper, Supercomputing 2002 To Richard Vuduc “Performance Optimization and Bounds for Sparse Matrix-vector Multiply” Best Presentation Prize, MICRO-33: 3rd ACM Workshop on Feedback-Directed Dynamic Optimization, 2000 “Statistical Modeling of Feedback Data in an Automatic Tuning System”

Accuracy of the Tuning Heuristics (4/4)

Can Match DGEMV Performance

Fraction of Upper Bound Across Platforms

Achieved Performance and Machine Balance Machine balance [Callahan '88; McCalpin '95] Balance = Peak Flop Rate / Bandwidth (flops / double) Lower is better (i.e., one can hope to get a higher fraction of the peak flop rate) Ideal balance for mat-vec: ≤ 2 flops / double For SpMV, even less

Where Does the Time Go? Most time assigned to memory Caches “disappear” when line sizes are equal Strictly increasing line sizes

Execution Time Breakdown: Matrix 40

Execution Time Breakdown (PAPI): Matrix 40

Speedups with Increasing Line Size

Summary: Performance Upper Bounds What is the best we can do for SpMV? Limits to low-level tuning of blocked implementations Refinements? What machines are good for SpMV? Partial answer: balance characterization Architectural consequences? Help to have strictly increasing line sizes

Evaluating algorithms and machines for SpMV Metrics Speedups Mflop/s (“fair” flops) Fraction of peak Questions Speedups are good, but what is “the best?” Independent of instruction scheduling, selection Can SpMV be further improved or not? What machines are “good” for SpMV?

Tuning Dense BLAS — PHiPAC

Tuning Dense BLAS – ATLAS Extends applicability of PHiPAC; incorporated in Matlab (with rest of LAPACK)

Fast algorithms are rare. Consider Intel Pentium 4/1500: First circle: only 10% of codes run at >= 63% of peak Second circle: only 1% of codes run at >= 73% of peak Third circle: only .2% of codes run at >= 90% of peak

Statistical Models for Automatic Tuning Idea 1: Statistical criterion for stopping a search A general search model Generate implementation Measure performance Repeat Stop when probability of being within e of optimal falls below threshold Can estimate distribution on-line Idea 2: Statistical performance models Problem: Choose 1 among m implementations at run-time Sample performance off-line, build statistical model

SpMV Historical Trends: Fraction of Peak

Motivation for Automatic Performance Tuning of SpMV Historical trends Sparse matrix-vector multiply (SpMV): 10% of peak or less Performance depends on machine, kernel, matrix Matrix known at run-time Best data structure + implementation can be surprising Our approach: empirical performance modeling and algorithm search Historical trends: we’ll look at the data that supports these claims shortly

Accuracy of the Tuning Heuristics (3/4)

Accuracy of the Tuning Heuristics (3/4) DGEMV

Sparse CALA (Communication-Avoiding Linear Algebra) Summary Optimizing one SpMV is too limiting Take k steps of a Krylov subspace method GMRES, CG, Lanczos, Arnoldi Assume matrix "well-partitioned," with modest surface-to-volume ratio Parallel implementation Conventional: O(k log p) messages CALA: O(log p) messages - optimal Serial implementation Conventional: O(k) moves of data from slow to fast memory CALA: O(1) moves of data – optimal Can incorporate some preconditioners Need to be able to "compress" interactions between distant i, j Hierarchical, semiseparable matrices … Lots of speedup possible (measured and modeled) Price: some redundant computation

Potential Impact on Applications: T3P Application: accelerator design [Ko] 80% of time spent in SpMV Relevant optimization techniques Symmetric storage Register blocking On Single Processor Itanium 2 1.68x speedup 532 Mflops, or 15% of 3.6 GFlop peak 4.4x speedup with multiple (8) vectors 1380 Mflops, or 38% of peak