1
James Demmel www.cs.berkeley.edu/~demmel/cs267_Spr09
Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel
2
Berkeley Benchmarking and OPtimization (BeBOP)
Prof. Katherine Yelick. Current members: Kaushik Datta, Ozan Demirlioglu, Mark Hoemmen, Shoaib Kamil, Rajesh Nishtala, Vasily Volkov, Sam Williams, … Previous members: Hormozd Gahvari, Eun-Jin Im, Ankit Jain, Rich Vuduc, and many undergrads. Many results here are from current and previous students. bebop.cs.berkeley.edu
3
Outline Motivation for Automatic Performance Tuning
Results for sparse matrix kernels OSKI = Optimized Sparse Kernel Interface Tuning Higher Level Algorithms Future Work, Class Projects Future Related Lecture Sam Williams on tuning SpMV for multicore, other emerging architectures
4
Motivation for Automatic Performance Tuning
Writing high-performance software is hard; we want to make programming easier while still getting high speed. Ideal: program in your favorite high-level language (Matlab, Python, PETSc, …) and get a high fraction of peak performance. Reality: the best algorithm (and its implementation) can depend strongly on the problem, the computer architecture, the compiler, …, and making the best choice can require knowing a lot of applied mathematics and computer science. How much of this can we teach? How much of this can we automate?
5
Examples of Automatic Performance Tuning (1)
Dense BLAS, sequential: PHiPAC (UCB), then ATLAS (UTK), used in Matlab; math-atlas.sourceforge.net/. Fast Fourier Transform (FFT) & variations, sequential and parallel: FFTW (MIT). Digital signal processing: SPIRAL (CMU). Communication collectives (UCB, UTK). Rose (LLNL), Bernoulli (Cornell), Telescoping Languages (Rice), … More projects, conferences, government reports, …
6
Examples of Automatic Performance Tuning (2)
What do dense BLAS, FFTs, signal processing, and MPI reductions have in common? The tuning can be done off-line: once per architecture and algorithm, taking as much time as necessary (hours, a week…). At run-time, the algorithm choice may depend on only a few parameters: matrix dimension, size of the FFT, etc.
7
Tuning Register Tile Sizes (Dense Matrix Multiply)
333 MHz Sun Ultra 2i. A 2-D slice of the 3-D core matmul space (all other generator options fixed), with implementations color-coded by performance in Mflop/s on a small in-L2-cache workload. There are 16 registers, but a 2-by-3 tile size is fastest. The "best" in the slice shown is the dark red square at (m0=2, k0=1, n0=8), which achieved 620 Mflop/s. This experiment was performed on a Sun Ultra-10 workstation (333 MHz Ultra II processor, 2 MB L2 cache) using the Sun cc compiler v5.0 with the flags -dalign -xtarget=native -xO5 -xarch=v8plusa. The space is discrete and highly irregular (and this is not even the worst example!): a needle in a haystack.
8
Example: Select a Matmul Implementation
9
Example: Support Vector Classification
Paper: "Statistical Models for Empirical Search-Based Performance Tuning," Richard Vuduc, James W. Demmel, and Jeff A. Bilmes, International Journal of High Performance Computing Applications, 18(1), February 2004.
10
Examples of Automatic Performance Tuning (3)
What do dense BLAS, FFTs, signal processing, and MPI reductions have in common? The tuning can be done off-line, once per architecture and algorithm, taking as much time as necessary (hours, a week…); at run-time, the algorithm choice may depend on only a few parameters (matrix dimension, size of the FFT, etc.). But we can't always tune off-line: the algorithm and implementation may depend strongly on data known only at run-time. Example: a sparse matrix's nonzero pattern determines both the best data structure and the best implementation of sparse matrix-vector multiplication (SpMV). Part of the search for the best algorithm must therefore be done (very quickly!) at run-time.
11
Source: Accelerator Cavity Design Problem (Ko via Husbands)
12
Linear Programming Matrix
First 20,000 columns of 4k x 1M matrix, rail4284 …
13
A Sparse Matrix You Encounter Every Day
You probably use a sparse matrix every day, albeit indirectly. Here, I show a million x million submatrix of the web connectivity graph, constructed from an archive at the Stanford WebBase.
14
SpMV with Compressed Sparse Row (CSR) Storage
A one-slide crash course on a basic sparse-matrix data structure (compressed sparse row, or CSR, format) and a basic kernel: matrix-vector multiply. In CSR we store each nonzero value, but we also need an extra index for each nonzero: less storage than dense, but a higher "rate" of storage per nonzero. Sparse matrix-vector multiply (SpMV) has no reuse of A, only of the vectors x and y, which are taken to be dense. With CSR storage, SpMV makes indirect, irregular accesses to x. So what you'd like to do is: unroll the k loop, but you need to know how many nonzeros are in each row; hoist y[i], not a problem absent aliasing; eliminate ind[k], an extra load, but you need to know the nonzero pattern; and reuse elements of x, but again you need to know the nonzero pattern.

Matrix-vector multiply kernel: y(i) ← y(i) + A(i,j)·x(j)
  for each row i
    for k = ptr[i] to ptr[i+1] − 1 do
      y[i] = y[i] + val[k]·x[ind[k]]
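For concreteness, here is a minimal, self-contained C version of the CSR kernel above, using the slide's ptr/ind/val naming. This is a plain reference sketch, not OSKI's tuned implementation:

    /* y += A*x for an m-row sparse matrix A in CSR format.
       Row i's nonzeros are val[ptr[i] .. ptr[i+1]-1], with column
       indices in ind[]; x and y are dense vectors. */
    void spmv_csr(int m, const int *ptr, const int *ind,
                  const double *val, const double *x, double *y)
    {
        for (int i = 0; i < m; i++) {
            double yi = y[i];               /* hoist y[i] into a register */
            for (int k = ptr[i]; k < ptr[i+1]; k++)
                yi += val[k] * x[ind[k]];   /* indirect access to x */
            y[i] = yi;
        }
    }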
15
Example: The Difficulty of Tuning
nnz = 1.5 M; kernel: SpMV. Source: an FEM discretization from a NASA structural analysis problem; matrix "raefsky", provided by NASA via the University of Florida Sparse Matrix Collection.
16
Example: The Difficulty of Tuning
nnz = 1.5 M; kernel: SpMV. Source: an FEM discretization from a NASA structural analysis problem; matrix "raefsky", provided by NASA via the University of Florida Sparse Matrix Collection. Note the 8x8 dense substructure.
17
Taking advantage of block structure in SpMV
The bottleneck is the time to get the matrix from memory: there are only 2 flops for each nonzero in the matrix. So instead of storing each nonzero with its own index, store each nonzero r-by-c block with one index. Storage drops by up to 2x if rc >> 1 and all quantities are 32-bit, so the time to fetch the matrix from memory decreases. This changes both the data structure and the algorithm: we need to pick r and c, and change the algorithm accordingly (a sketch follows below). In the example, is r = c = 8 the best choice? It minimizes storage, so it looks like a good idea…
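To illustrate how the kernel changes, here is a hedged C sketch of block-CSR (BCSR) SpMV for the fixed case r = c = 2; the bptr/bind/bval names and the row-major layout within each block are assumptions for the example, not a prescribed format:

    /* y += A*x for A stored in 2x2 block CSR: one column index per
       stored 2x2 block, blocks dense and row-major in bval. mb is the
       number of block rows (the matrix has 2*mb rows). */
    void spmv_bcsr_2x2(int mb, const int *bptr, const int *bind,
                       const double *bval, const double *x, double *y)
    {
        for (int ib = 0; ib < mb; ib++) {
            double y0 = y[2*ib], y1 = y[2*ib + 1];   /* register tile of y */
            for (int k = bptr[ib]; k < bptr[ib+1]; k++) {
                const double *b = bval + 4*k;        /* the 2x2 block */
                int j = 2 * bind[k];                 /* its first column */
                y0 += b[0]*x[j] + b[1]*x[j+1];
                y1 += b[2]*x[j] + b[3]*x[j+1];
            }
            y[2*ib]     = y0;
            y[2*ib + 1] = y1;
        }
    }

Note the single index bind[k] amortized over four values, and the fully unrolled 2x2 multiply: exactly the storage savings and register reuse the slide describes.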
18
Speedups on Itanium 2: The Need for Search
Best: 4x2. Experiment: try all block sizes that divide 8x8, 16 implementations in all. Platform: 900 MHz Itanium 2, 3.6 Gflop/s peak speed, Intel v7.0 compiler. Good speedups (4x), but at an unexpected block size (4x2). Figure taken from the Im, Yelick, Vuduc IJHPCA paper, to appear. [Axis labels: Mflop/s; Reference Mflop/s.]
19
Register Profile: Itanium 2
[Figure: register-blocking profile on Itanium 2; best 1190 Mflop/s, unblocked reference 190 Mflop/s.]
20
SpMV Performance (Matrix #2): Generation 2
[Figure: SpMV register-profile panels. Ultra 2i: 9% of peak, best 63 Mflop/s (reference 35 Mflop/s). Ultra 3: 5%, best 109 (reference 53). Pentium III: 19%, best 96 (reference 42). Pentium III-M: 15%, best 120 (reference 58).]
21
Register Profiles: Sun and Intel x86
[Figure: register-profile panels. Ultra 2i: 11% of peak, best 72 Mflop/s (reference 35 Mflop/s). Ultra 3: 5%, best 90 (reference 50). Pentium III: 21%, best 108 (reference 42). Pentium III-M: 15%, best 122 (reference 58).]
22
SpMV Performance (Matrix #2): Generation 1
[Figure: SpMV register-profile panels. Power3: 13% of peak, best 195 Mflop/s (reference 100 Mflop/s). Power4: 14%, best 703 (reference 469). Itanium 1: 7%, best 225 (reference 103). Itanium 2: best 1.1 Gflop/s (reference 276 Mflop/s; the percentage is lost in this transcript, ≈31% of its 3.6 Gflop/s peak).]
23
Register Profiles: IBM and Intel IA-64
[Figure: register-profile panels. Power3: 17% of peak, best 252 Mflop/s (reference 122 Mflop/s). Power4: 16%, best 820 (reference 459). Itanium 1: 8%, best 247 (reference 107). Itanium 2: best 1.2 Gflop/s (reference 190 Mflop/s; the percentage is lost in this transcript, ≈33% of its 3.6 Gflop/s peak).]
24
Another example of tuning challenges
More complicated nonzero structure in general. An enlarged submatrix of "ex11", from an FEM fluid flow application. Original matrix: n = 16614, nnz = 1.1M. Source: UF Sparse Matrix Collection.
25
Zoom in to top corner: more complicated nonzero structure in general
An enlarged submatrix of "ex11", from an FEM fluid flow application. Original matrix: n = 16614, nnz = 1.1M. Source: UF Sparse Matrix Collection.
26
3x3 blocks look natural, but…
More complicated nonzero structure in general. Example: 3x3 blocking, i.e., a logical grid of 3x3 cells. But this would lead to lots of "fill-in": suppose we know we want to use 3x3 blocking; SPARSITY lays a logical 3x3 grid on the matrix…
27
Extra Work Can Improve Efficiency!
More complicated nonzero structure in general. Example: 3x3 blocking, a logical grid of 3x3 cells; fill in explicit zeros and unroll the 3x3 block multiplies. "Fill ratio" = 1.5, yet on a Pentium III this gives a 1.5x speedup, i.e., the actual Mflop rate is 1.5² = 2.25x higher. The main point is that there can be a considerable pay-off for a judicious choice of fill (r x c), but allowing for fill makes the implementation space even more complicated. For this matrix on a Pentium III, we observed a 1.5x speedup even after filling in an additional 50% explicit zeros. Two effects: (1) filling in zeros, but eliminating integer indices (overhead); (2) the quality of the r x c code produced by the compiler may be much better for particular r's and c's. In this particular example, the overall data structure size stays the same, but 3x3 code is 2x faster than 1x1 code for a dense matrix stored in sparse format.
28
Automatic Register Block Size Selection
Selecting the r x c block size. Off-line benchmark: precompute Mflops(r,c) using a dense matrix stored in sparse r x c format, once per machine/architecture. Run-time "search": sample A to estimate Fill(r,c) for each r x c. Run-time heuristic model: choose r and c to minimize estimated time ~ Fill(r,c) / Mflops(r,c). Using this approach, we do choose the right block sizes for the two previous examples. (A sketch of the selection loop follows below.)
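The heuristic itself is just an argmin over a small table. Here is a hedged C sketch; the function name, the 8x8 search range, and the array layout are illustrative assumptions, not OSKI's actual code:

    #define RCMAX 8  /* search r, c in 1..8, as in the slides */

    /* Pick (r, c) minimizing estimated time ~ Fill(r,c) / Mflops(r,c).
       mflops[][] comes from the off-line dense benchmark; fill[][]
       comes from run-time sampling of the matrix. */
    void choose_block_size(const double mflops[RCMAX][RCMAX],
                           const double fill[RCMAX][RCMAX],
                           int *r_best, int *c_best)
    {
        double t_best = 1e300;  /* large sentinel */
        for (int r = 1; r <= RCMAX; r++)
            for (int c = 1; c <= RCMAX; c++) {
                double t = fill[r-1][c-1] / mflops[r-1][c-1];
                if (t < t_best) { t_best = t; *r_best = r; *c_best = c; }
            }
    }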
29
Accurate and Efficient Adaptive Fill Estimation
Idea: sample the matrix. Fraction of matrix to sample: s ∈ [0,1]; cost ~ O(s · nnz), so we control the cost by controlling s. Since the search happens at run-time, the constant matters! Control s automatically by computing statistical confidence intervals; idea: monitor the variance. Cost of tuning: as a lower bound, converting the matrix costs 5 to 40 unblocked SpMVs; the heuristic itself costs 1 to 11 SpMVs. (A sampling sketch follows below.)
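To show the shape of the estimator, here is a hedged C sketch that scans a random fraction s of the block rows of a CSR matrix and returns an estimate of Fill(r,c), i.e., stored entries per true nonzero after r x c blocking. All names are illustrative, and this fixed-s version omits the confidence-interval control described above:

    #include <stdlib.h>

    double estimate_fill(int m, int n, const int *ptr, const int *ind,
                         int r, int c, double s)
    {
        int nbcols = (n + c - 1) / c;
        char *hit = calloc(nbcols, 1);  /* block columns seen in this block row */
        long nnz_seen = 0, blocks = 0;
        for (int ib = 0; ib < m / r; ib++) {       /* remainder rows ignored */
            if ((double)rand() / RAND_MAX > s) continue;   /* sample block rows */
            for (int i = r*ib; i < r*(ib+1); i++)
                for (int k = ptr[i]; k < ptr[i+1]; k++) {
                    int jb = ind[k] / c;
                    if (!hit[jb]) { hit[jb] = 1; blocks++; }
                    nnz_seen++;
                }
            for (int i = r*ib; i < r*(ib+1); i++)  /* reset marks touched here */
                for (int k = ptr[i]; k < ptr[i+1]; k++)
                    hit[ind[k] / c] = 0;
        }
        free(hit);
        return nnz_seen ? (double)(blocks * r * c) / nnz_seen : 1.0;
    }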
30
Accuracy of the Tuning Heuristics (1/4)
See p. 375 of Vuduc’s thesis for matrices NOTE: “Fair” flops used (ops on explicit zeros not counted as “work”)
31
Accuracy of the Tuning Heuristics (2/4)
32
Accuracy of the Tuning Heuristics (2/4)
DGEMV
33
Upper Bounds on Performance for blocked SpMV
P = (flops) / (time), and flops = 2 · nnz(A), so an upper bound on performance follows from a lower bound on time. Two main assumptions: (1) count memory operations only (streaming); (2) count only compulsory and capacity misses, ignoring conflicts. Account for line sizes, and for the matrix size and nnz. Charge the minimum access "latency" α_i at cache level L_i and α_mem at memory, e.g., measured with the Saavedra-Barrera and PMaC MAPS benchmarks.
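Written out, the bound this models is the following (a reconstruction consistent with the assumptions above, where H_i denotes loads charged at the level-i latency and M the misses charged at memory latency):

\[
\text{Time} \;\ge\; \sum_{i} \alpha_i H_i \;+\; \alpha_{\mathrm{mem}} M,
\qquad
P_{\mathrm{upper}} \;=\; \frac{2\,\mathrm{nnz}(A)}{\text{Time}_{\mathrm{lower}}}.
\]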
34
Example: L2 Misses on Itanium 2
Upper bound: all source vector accesses miss Lower bound: all source vector accesses hit after first Misses measured using PAPI [Browne ’00]
35
Example: Bounds on Itanium 2
36
Example: Bounds on Itanium 2
37
Example: Bounds on Itanium 2
38
Summary of Other Performance Optimizations
Optimizations for SpMV: register blocking (RB), up to 4x over CSR; variable block splitting, 2.1x over CSR and 1.8x over RB; diagonals, 2x over CSR; reordering to create dense structure plus splitting, 2x over CSR; symmetry, 2.8x over CSR and 2.6x over RB; cache blocking, 2.8x over CSR; multiple vectors (SpMM), 7x over CSR; and combinations. Sparse triangular solve: hybrid sparse/dense data structure, 1.8x over CSR. Higher-level kernels: A·Aᵀ·x and Aᵀ·A·x, 4x over CSR and 1.8x over RB; A²·x, 2x over CSR and 1.5x over RB; [A·x, A²·x, A³·x, …, Aᵏ·x]. Items in bold: see Vuduc's 455-page dissertation; other items: see collaborations with BeBOPpers. The 2.1x for variable block splitting is just for the FEM matrices with 2 block sizes.
39
Example: Sparse Triangular Factor
Raefsky4 (structural problem) + SuperLU + colmmd. N = 19779, nnz = 12.6 M. Dense trailing triangle: dim = 2268, 20% of the total nonzeros; it can be as high as 90+%! 1.8x over CSR.
40
Cache Optimizations for AAT*x
Cache-level: interleave multiplication by A and Aᵀ, so A is fetched from memory only once: AAᵀ·x = Σᵢ aᵢ (aᵢᵀx), where aᵢ is the i-th column of A; each aᵢᵀx is a dot product, and accumulating (aᵢᵀx)·aᵢ into the result is an "axpy". Register-level: take aᵢᵀ to be an r×c block row, or a diagonal row.
41
Example: Combining Optimizations (1/2)
Register blocking, symmetry, multiple (k) vectors. Three low-level tuning parameters: r, c, v. [Figure: Y += A·X, with an r×c register block of the symmetric matrix A applied to v of the k vectors at a time.]
42
Example: Combining Optimizations (2/2)
Register blocking, symmetry, and multiple vectors [Ben Lee, UCB]. Symmetric, blocked, 1 vector: up to 2.6x over nonsymmetric, blocked, 1 vector. Symmetric, blocked, k vectors: up to 2.1x over nonsymmetric, blocked, k vectors; up to 7.3x over nonsymmetric, nonblocked, 1 vector. Symmetric storage: up to 64.7% savings.
43
Potential Impact on Applications: Omega3P
Application: accelerator cavity design [Ko]. Relevant optimization techniques: symmetric storage, register blocking, and reordering to create more dense blocks. Reverse Cuthill-McKee ordering to reduce bandwidth: do a breadth-first search and number the nodes in reverse order visited (a sketch follows below). Traveling-Salesman-Problem-based ordering to create blocks: nodes = columns of A; weight(u, v) = number of nonzeros u and v have in common; a tour = an ordering of the columns; choose the maximum-weight tour. See [Pinar & Heath '97]. 2.1x speedup on Power 4 (but SpMV is not dominant in the application).
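Here is a hedged C sketch of the RCM step just described: BFS plus reverse numbering on a symmetric graph in CSR form. Picking a good (pseudo-peripheral) start node and visiting neighbors in increasing-degree order are omitted, and a connected graph is assumed:

    #include <stdlib.h>

    /* perm[old] = new index under reverse Cuthill-McKee. */
    void rcm_order(int n, const int *ptr, const int *ind,
                   int start, int *perm)
    {
        char *seen  = calloc(n, 1);
        int  *queue = malloc(n * sizeof(int));
        int head = 0, tail = 0;
        queue[tail++] = start; seen[start] = 1;
        while (head < tail) {                 /* breadth-first search */
            int u = queue[head++];
            for (int k = ptr[u]; k < ptr[u+1]; k++) {
                int v = ind[k];
                if (!seen[v]) { seen[v] = 1; queue[tail++] = v; }
            }
        }
        for (int i = 0; i < tail; i++)        /* number nodes in reverse */
            perm[queue[i]] = tail - 1 - i;
        free(queue); free(seen);
    }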
44
Source: Accelerator Cavity Design Problem (Ko via Husbands)
45
Post-RCM Reordering
46
100x100 Submatrix Along Diagonal
47
“Microscopic” Effect of RCM Reordering
Before: Green + Red After: Green + Blue
48
“Microscopic” Effect of Combined RCM+TSP Reordering
Before: Green + Red After: Green + Blue
49
(Omega3P)
50
Optimized Sparse Kernel Interface - OSKI
Provides sparse kernels automatically tuned for the user's matrix & machine. BLAS-style functionality: SpMV (A·x and Aᵀ·y) and triangular solve (TrSV). Hides the complexity of run-time tuning. Includes new, faster locality-aware kernels: AᵀA·x and Aᵏ·x. Faster than standard implementations: up to 4x faster matvec, 1.8x trisolve, 4x AᵀA·x. For "advanced" users & solver-library writers. Available as a stand-alone library (OSKI 1.0.1h, 6/07) and as a PETSc extension (OSKI-PETSc 0.1d, 3/06). bebop.cs.berkeley.edu/oski

The library interface defines low-level primitives in the style of the Sparse BLAS: sparse matrix-vector multiply and sparse triangular solve. Matrix-vector multiply touches each matrix element only once, whereas the locality-aware kernels can reuse these elements. The BeBOP library includes these kernels: (1) simultaneous computation of A·x and Aᵀ·z; (2) AᵀA·x; (3) Aᵏ·x, for non-negative integer k. Unlike tuning in the dense case, sparse tuning must occur at run-time, since the matrix is unknown until then. Here, the "standard implementation" stores the matrix in compressed sparse row (CSR) format with the kernel coded in C or Fortran and compiled with full optimizations. To maximize the impact of the software, a new "automatically tuned" matrix type is being implemented in PETSc; most PETSc users will be able to use the library with little or no modification to their source code. The stand-alone version of the library is C- and Fortran-callable. "Advanced users" means users willing to program at the level of the BLAS.
51
How the OSKI Tunes (Overview)
[Figure: tuning workflow. At library install-time (offline): 1. Build for the target architecture; 2. Benchmark; this produces generated code variants and benchmark data. At application run-time: the user's matrix, plus a workload from program monitoring and history, feed 1. Evaluate models (using the stored heuristic models) and 2. Select data structure & code; the result returned to the user is a matrix handle for kernel calls. Diagram key: ovals = actions taken by the library; cylinders = data stored by or with the library; solid arrows = control flow; dashed arrows = "data" flow.]

At library-installation time: (1) "Build": pre-compile source code; possible code variants are stored in dynamic libraries ("code variants"). (2) "Benchmark": the installation process measures the speed of the possible code variants and stores the results ("benchmark data"). The entire build process uses standard & portable GNU configure.

At run-time, from within the user's application: (1) "Matrix from user": the user passes her pre-assembled matrix in a standard format like compressed sparse row (CSR) or column (CSC). (2) "Evaluate models": the library contains a list of "heuristic models"; each model is actually a procedure that analyzes the matrix, workload, and benchmarking data and chooses a data structure & code it thinks is best for that matrix and workload. A model is typically specialized to predict tuning parameters for a particular kernel & class of data structures (e.g., predict the block size for register-blocked matvec); however, higher-level models (meta-models) that combine several heuristics or predict over several possible data structures and kernels are also possible. In the initial implementation, "evaluate models" does the following: based on the workload, decide on an allowable amount of time for tuning (a "tuning budget"); then, while there is time left for tuning, select and evaluate a model to get the best predicted performance & corresponding tuning parameters.

Extensibility: advanced users may write & dynamically add "code variants" and "heuristic models" to the system.
52
How the OSKI Tunes (Overview)
At library build/install-time Pre-generate and compile code variants into dynamic libraries Collect benchmark data Measures and records speed of possible sparse data structure and code variants on target architecture Installation process uses standard, portable GNU AutoTools At run-time Library “tunes” using heuristic models Models analyze user’s matrix & benchmark data to choose optimized data structure and code Non-trivial tuning cost: up to ~40 mat-vecs Library limits the time it spends tuning based on estimated workload provided by user or inferred by library User may reduce cost by saving tuning results for application on future runs with same or similar matrix The library build & installation process automatically benchmarks the target machine to determine how fast different kinds of data structures and kernel implementations can potentially run. These variants are pre-determined and generated using specialized code generators. Run-time tuning has a non-trivial cost, and the amount of time the library allows itself for tuning depends on the expected workload (e.g., how frequently matrix-vector multiply will be called). This workload may be specified by the user, or may be inferred by the library which transparently monitors how often different kernels are called (self-profiling).
53
Optimizations in OSKI, so far
Fully automatic heuristics for Sparse matrix-vector multiply Register-level blocking Register-level blocking + symmetry + multiple vectors Cache-level blocking Sparse triangular solve with register-level blocking and “switch-to-dense” optimization Sparse ATA*x with register-level blocking User may select other optimizations manually Diagonal storage optimizations, reordering, splitting; tiled matrix powers kernel (Ak*x) All available in dynamic libraries Accessible via high-level embedded script language “Plug-in” extensibility Very advanced users may write their own heuristics, create new data structures/code variants and dynamically add them to the system The automatic heuristics have been discussed in published papers available on the BeBOP home page. The BeBOP group has studied a number of other optimizations. Code for these optimizations will be compiled and available to the user. (See “BeBOP-Lua” in our design document.)
54
How to Call OSKI: Basic Usage
May gradually migrate existing apps. Step 1: "wrap" existing data structures; Step 2: make BLAS-like kernel calls.

int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …; /* Let x and y be two dense vectors */

/* Compute y = b·y + a·A·x, 500 times */
for( i = 0; i < 500; i++ )
  my_matmult( ptr, ind, val, a, x, b, y );

OSKI is designed to let users migrate their applications gradually. The code fragment above represents a sample C application which already has a CSR matrix implementation (ptr, ind, val) and dense vectors (x, y), initialized in some way; the application computes matrix-vector multiply 500 times in a loop. Indeed, OSKI does not provide a way to construct/assemble a matrix, so a user always has to create her matrix in one of OSKI's supported base formats (CSR, CSC) first.
55
How to Call OSKI: Basic Usage
May gradually migrate existing apps. Step 1: "wrap" existing data structures; Step 2: make BLAS-like kernel calls.

int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …; /* Let x and y be two dense vectors */

/* Step 1: Create OSKI wrappers around this data */
oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows, num_cols, SHARE_INPUTMAT, …);
oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE);
oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE);

/* Compute y = b·y + a·A·x, 500 times */
for( i = 0; i < 500; i++ )
  my_matmult( ptr, ind, val, a, x, b, y );

To use OSKI, the user should first provide OSKI with the semantics of her existing data structures, here shown by three calls for the three data structures (the matrix and the two vectors). OSKI returns handles to these objects, and in this example will "share" them with the user. The key point is that if the user introduced only this change, the application would execute just as it did before: OSKI does not modify the user's data structures unless the user asks it to via OSKI's "set nonzero value" routines. The call to oski_CreateMatCSR defines the semantics of this matrix, e.g., its logical dimensions; the user specifies other semantic information (e.g., 0-based vs. 1-based indices, symmetry with half or full storage, Hermitian-ness, lower/upper triangular shape, implicit unit diagonal) where the "…" is shown. The calls to oski_CreateVecView create wrappers around the user's vectors; vectors may have non-unit strides. Moreover, there is an oski_CreateMultiVecView operation which wraps a dense matrix data structure; in that case, the user can specify row- vs. column-major storage and the stride. Recursive dense data structures are not supported at this time.
56
How to Call OSKI: Basic Usage
May gradually migrate existing apps. Step 1: "wrap" existing data structures; Step 2: make BLAS-like kernel calls.

int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …; /* Let x and y be two dense vectors */

/* Step 1: Create OSKI wrappers around this data */
oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows, num_cols, SHARE_INPUTMAT, …);
oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE);
oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE);

/* Compute y = b·y + a·A·x, 500 times */
for( i = 0; i < 500; i++ )
  oski_MatMult(A_tunable, OP_NORMAL, a, x_view, b, y_view); /* Step 2 */

Of course, if you're using OSKI at all, you probably want to call OSKI kernels. Here, the user has replaced her matrix-vector multiply routine with the corresponding OSKI call; notice that OSKI kernel function signatures mimic the BLAS. Recall that the user must explicitly ask OSKI to tune, so in this example no tuning occurs. OSKI has the semantics of the user's matrix data structure (ptr, ind, val) and vector data structures (x, y), and so can perform the SpMV accordingly. Question: do users want other base formats besides CSR and CSC, e.g., JAD for legacy apps written for vector architectures?
57
How to Call OSKI: Tune with Explicit Hints
The user calls the "tune" routine, and may provide explicit tuning hints (OPTIONAL).

oski_matrix_t A_tunable = oski_CreateMatCSR( … );
/* … */
/* Tell OSKI we will call SpMV 500 times (workload hint) */
oski_SetHintMatMult(A_tunable, OP_NORMAL, a, x_view, b, y_view, 500);
/* Tell OSKI we think the matrix has 8x8 blocks (structural hint) */
oski_SetHint(A_tunable, HINT_SINGLE_BLOCKSIZE, 8, 8);
oski_TuneMat(A_tunable); /* Ask OSKI to tune */
for( i = 0; i < 500; i++ )
  oski_MatMult(A_tunable, OP_NORMAL, a, x_view, b, y_view);

This slide shows one style of tuning, in which the user provides explicit hints. These hints are optional. The call to oski_TuneMat marks the point in program execution at which the application might incur the ~40-SpMV tuning cost. You can substitute symbolic vectors (defined by library constants) for the actual vectors (x_view, y_view).
58
How the User Calls OSKI: Implicit Tuning
Ask the library to infer the workload: the library profiles all kernel calls, and may periodically re-tune. What if you don't have any hints, or don't want to provide any? It's OK to call "tune" repeatedly, as done in this example. On each call to "tune", OSKI checks whether there is heavy use of particular kernels, in which case it might decide to tune; in cases when it does not, the call is essentially a no-op. This mode also provides a way to re-tune periodically, though the current implementation does not do that.

oski_matrix_t A_tunable = oski_CreateMatCSR( … );
/* … */
for( i = 0; i < 500; i++ ) {
  oski_MatMult(A_tunable, OP_NORMAL, a, x_view, b, y_view);
  oski_TuneMat(A_tunable); /* Ask OSKI to tune */
}
59
Quick-and-dirty Parallelism: OSKI-PETSc
Extend PETSc's distributed-memory SpMV (MATMPIAIJ). PETSc: each process stores its diagonal (all-local) and off-diagonal submatrices. OSKI-PETSc: add OSKI wrappers, and tune each submatrix independently. [Figure: row blocks distributed across processes p0, p1, p2, p3.]
60
OSKI-PETSc Proof-of-Concept Results
Matrix 1: accelerator cavity design (Ko, SLAC). N ~ 1M, ~40M nonzeros; 2x2 dense block substructure; symmetric. Matrix 2: linear programming (Italian Railways). Short-and-fat: 4k x 1M, ~11M nonzeros; highly unstructured; big speedup from cache blocking, for which there is no native PETSc format. Evaluation machine: a Xeon cluster with a peak of 4.8 Gflop/s per node. This is a proof-of-concept experiment to show that the OSKI-PETSc implementation "works" (i.e., doesn't change the scaling behavior you'd expect from your current PETSc code).
61
Accelerator Cavity Matrix
Roughly 90% of non-zeros in the blocks along the diagonal
62
OSKI-PETSc Performance: Accel. Cavity
[Figure: performance annotations. p=1: 234 Mflop/s; p=1: 315 Mflop/s; OSKI-PETSc p=1: 480 Mflop/s; OSKI-PETSc p=8: 6.2 (unit lost in this transcript).]
63
Linear Programming Matrix
First 20,000 columns… …
64
OSKI-PETSc Performance: LP Matrix
Note the single-process improvement from cache blocking: 105 Mflop/s vs. 260 Mflop/s, a 2.48x speedup. The max performance of any implementation at p=8 is 315 Mflop/s; scaling is poor due to the significant off-process communication required.
65
Tuning Higher Level Algorithms than SpMV
We almost always do many SpMVs, not just one. "Krylov subspace methods" (KSMs) for Ax = b or Ax = λx (conjugate gradients, GMRES, Lanczos, …) do a sequence of k SpMVs to get vectors [x₁, …, x_k], then find the best solution x as a linear combination of [x₁, …, x_k]. The main cost is the k SpMVs. Since communication usually dominates, can we do better? Goal: make the communication cost independent of k. Parallel case: O(log P) messages, not O(k log P) (optimal), with the same bandwidth as before. Sequential case: O(1) messages and bandwidth, not O(k) (optimal). Achievable when the matrix is partitionable with a low surface-to-volume ratio.
66
Locally Dependent Entries for [x,Ax], A tridiagonal
[Figure: 2 processors (Proc 1, Proc 2); rows x, Ax, A²x, …, A⁸x. The highlighted entries of [x, Ax] can be computed without communication.]
67
Locally Dependent Entries for [x,Ax,A2x], A tridiagonal
[Figure: 2 processors (Proc 1, Proc 2); rows x, Ax, A²x, …, A⁸x. The highlighted entries of [x, Ax, A²x] can be computed without communication.]
68
Locally Dependent Entries for [x,Ax,…,A3x], A tridiagonal
[Figure: 2 processors (Proc 1, Proc 2); rows x, Ax, A²x, …, A⁸x. The highlighted entries of [x, Ax, …, A³x] can be computed without communication.]
69
Locally Dependent Entries for [x,Ax,…,A4x], A tridiagonal
[Figure: 2 processors (Proc 1, Proc 2); rows x, Ax, A²x, …, A⁸x. The highlighted entries of [x, Ax, …, A⁴x] can be computed without communication.]
70
Locally Dependent Entries for [x,Ax,…,A8x], A tridiagonal
[Figure: 2 processors (Proc 1, Proc 2); rows x, Ax, A²x, …, A⁸x. All locally dependent entries can be computed without communication, giving k = 8-fold reuse of A.]
71
Remotely Dependent Entries for [x,Ax,…,A8x], A tridiagonal
[Figure: same layout as before, 2 processors (Proc 1, Proc 2).] One message fetches the data needed to compute the remotely dependent entries, instead of k = 8 messages. This minimizes the number of messages (the latency cost); the price is redundant work, proportional to the "surface/volume ratio".
72
Fewer Remotely Dependent Entries for [x,Ax,…,A8x], A tridiagonal
[Figure: same layout as before, 2 processors (Proc 1, Proc 2).] Reduce the redundant work by half.
73
Remotely Dependent Entries for [x,Ax, A2x,A3x], 2D Laplacian
74
Remotely Dependent Entries for [x,Ax,A2x,A3x],
A irregular, multiple processors. Red: needed for k = 1; green: also needed for k = 2; blue: also needed for k = 3.
75
Sequential [x,Ax,…,A4x], with memory hierarchy
One read of matrix from slow memory, not k=4 Minimizes words moved = bandwidth cost No redundant work
76
Performance Results
Measured multicore (Clovertown) speedups up to 6.4x. Measured/modeled sequential out-of-core speedups up to 3x. Modeled parallel petascale speedups up to 6.9x; modeled parallel grid speedups up to 22x. The sequential speedup is due to bandwidth and works for many problem sizes; the parallel speedup is due to latency and works for smaller problems on many processors.
77
Speedups on Intel Clovertown (8 core)
78
Optimizing Communication Complexity of Sparse Solvers
Need to modify high-level algorithms to use the new kernel. Example: GMRES for Ax = b, where A = a 2D Laplacian; x lives on an n-by-n mesh, partitioned on a √p-by-√p processor grid; A has a 5-point stencil (Laplacian): (Ax)(i,j) = linear_combination(x(i,j), x(i,j±1), x(i±1,j)). Example: an 18-by-18 mesh on a 3-by-3 processor grid.
79
Minimizing Communication
What is the cost = (#flops, #words, #messages) of s steps of standard GMRES?
GMRES, ver. 1:
  for i = 1 to s
    w = A · v(i−1)
    MGS(w, v(0), …, v(i−1))
    update v(i), H
  endfor
  solve the LSQ problem with H
[Figure: each processor owns an n/√p-by-n/√p subgrid.]
Cost(A·v) = s · (9n²/p, 4n/√p, 4)
Cost(MGS = Modified Gram-Schmidt) = (s²/2) · (4n²/p, log p, log p)
Total cost ~ Cost(A·v) + Cost(MGS). Can we reduce the latency?
80
Minimizing Communication
Cost(GMRES, ver. 1) = Cost(A·v) + Cost(MGS)
  = (9sn²/p, 4sn/√p, 4s) + (2s²n²/p, (s²/2)·log p, (s²/2)·log p)
How much latency cost from A·v can you avoid? Almost all.
GMRES, ver. 2:
  W = [v, Av, A²v, …, Aˢv]
  [Q, R] = MGS(W)
  Build H from R, solve the LSQ problem
[Figure: ghost regions for s = 3.]
Cost(W) = (~same, ~same, 8)
Latency cost independent of s – optimal. Cost(MGS) is unchanged. Can we reduce the latency more?
81
Minimizing Communication
Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS)
  = (9sn²/p, 4sn/√p, 8) + (2s²n²/p, (s²/2)·log p, (s²/2)·log p)
How much latency cost from MGS can you avoid? Almost all.
GMRES, ver. 3:
  W = [v, Av, A²v, …, Aˢv]
  [Q, R] = TSQR(W) … "Tall Skinny QR" (see Lecture 11)
  Build H from R, solve the LSQ problem
[Figure: W split into W₁, W₂, W₃, W₄; local factors R₁, R₂, R₃, R₄ combine pairwise into R₁₂ and R₃₄, then into R₁₂₃₄.]
Cost(TSQR) = (~same, ~same, log p)
Latency cost independent of s – optimal.
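In formulas, the reduction tree sketched on the slide computes the QR factorization of the stacked W as follows (a standard TSQR sketch; the four-way split matches the slide's picture):

\[
W=\begin{pmatrix}W_1\\ W_2\\ W_3\\ W_4\end{pmatrix},\qquad
W_i=Q_iR_i,\qquad
\begin{pmatrix}R_1\\ R_2\end{pmatrix}=Q_{12}R_{12},\quad
\begin{pmatrix}R_3\\ R_4\end{pmatrix}=Q_{34}R_{34},\quad
\begin{pmatrix}R_{12}\\ R_{34}\end{pmatrix}=Q_{1234}R_{1234},
\]

so R = R₁₂₃₄ is reached in log p combining steps, which is where the log p latency term comes from.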
85
Minimizing Communication
Cost(GMRES, ver. 3) = Cost(W) + Cost(TSQR)
  = (9sn²/p, 4sn/√p, 8) + (2s²n²/p, (s²/2)·log p, log p)
Latency cost independent of s, just log p – optimal.
Oops – W comes from the power method, so precision is lost. What to do? Use a different polynomial basis: instead of the monomial basis W = [v, Av, A²v, …], use the Newton basis W_N = [v, (A − θ₁I)v, (A − θ₂I)(A − θ₁I)v, …] or the Chebyshev basis W_C = [v, T₁(v), T₂(v), …].
87
Extensions: Other Krylov Methods; Preconditioning
Other Krylov methods: Arnoldi, CG, Lanczos, … Preconditioning: solve MAx = Mb, where the preconditioning matrix M is chosen to make the system "easier". M approximates A⁻¹ somehow, but cheaply, to accelerate convergence; this is cheap as long as contributions from "distant" parts of the system can be compressed: sparsity, low rank.
88
Design Space for [x,Ax,…,Akx] (1/3)
Mathematical operation. Keep all vectors: Krylov subspace methods; improve the conditioning of the basis with W = [x, p₁(A)x, p₂(A)x, …, p_k(A)x], where pᵢ(A) is a degree-i polynomial chosen to reduce cond(W); with preconditioning (Ay = b → MAy = Mb), the kernel becomes [x, Ax, MAx, AMAx, MAMAx, …, (MA)ᵏx]. Keep only the last vector Aᵏx: Jacobi, Gauss-Seidel.
89
Design Space for [x,Ax,…,Akx] (2/3)
Representation of sparse A: the zero pattern may be explicit or implicit, and the nonzero entries may be explicit or implicit; implicit representations save memory and communication.

                    Explicit pattern                Implicit pattern
Explicit nonzeros   general sparse matrix           image segmentation
Implicit nonzeros   Laplacian(graph), for           "stencil matrix",
                    graph partitioning              e.g., tridiag(-1, 2, -1)

Representation of dense preconditioners M: low-rank off-diagonal blocks (semiseparable).
90
Design Space for [x,Ax,…,Akx] (3/3)
Parallel implementation: from simple indexing with redundant flops (proportional to the surface/volume ratio) to complicated indexing with fewer redundant flops. Sequential implementation: depends on whether the vectors fit in fast memory. Reordering rows and columns of A: important in both the parallel and sequential cases; can be reduced to a pair of Traveling Salesman Problems. Plus all the optimizations for a single SpMV!
91
Summary Communication-Avoiding Linear Algebra (CALA)
Lots of related work, some going back to the 1960's; the reports discuss this comprehensively, not here. Our contributions: several new algorithms and improvements on old ones, and a unification of the parallel and sequential approaches to avoiding communication. The time for these algorithms has come, because of growing communication costs. Why avoid communication just for linear algebra motifs?
92
Possible Class Projects
Come to BEBOP meetings (T 9 – 10:30, 606 Soda) Incorporate multicore optimizations into OSKI Experiment with SpMV on GPU Which optimizations are most effective? Try to speed up particular matrices of interest Data mining Experiment with new [x,Ax,…,Akx] kernel GPU, multicore, distributed memory On matrices of interest Experiment with new solvers using this kernel
93
Extra Slides
94
Tuning Higher Level Algorithms
So far we have tuned a single sparse-matrix kernel, y = AᵀA·x, motivated by a higher-level algorithm (SVD). What can we do by extending tuning to a higher level? Consider Krylov subspace methods for Ax = b and Ax = λx: conjugate gradients (CG), GMRES, Lanczos, … The inner loop does y = A·x, dot products, saxpys, and scalar operations, and costs at least O(1) messages, so k iterations cost at least O(k) messages. Our goal: show how to do k iterations with O(1) messages. Possible payoff: make Krylov subspace methods much faster on machines with slow networks, with memory-bandwidth improvements too (not discussed). Obstacles: numerical stability, preconditioning, …
95
Krylov Subspace Methods for Solving Ax=b
Compute a basis for a subspace V by doing y = A·x k times, then find the "best" solution in that Krylov subspace V. Given a starting vector x₁, V is spanned by x₂ = A·x₁, x₃ = A·x₂, …, x_k = A·x_{k−1}. GMRES finds an orthogonal basis of V by Gram-Schmidt, so it actually does a different set of SpMVs than in the last bullet.
96
Example: Standard GMRES
r = b − A·x₁; β = length(r); v₁ = r/β … where length(r) = sqrt(Σᵢ rᵢ²)
for m = 1 to k do
  w = A·vₘ … at least O(1) messages
  for i = 1 to m do … Gram-Schmidt
    h(i,m) = dotproduct(vᵢ, w) … at least O(1) messages, or log(p)
    w = w − h(i,m)·vᵢ
  end for
  h(m+1,m) = length(w) … at least O(1) messages, or log(p)
  v(m+1) = w / h(m+1,m)
end for
find y minimizing length(H_k·y − β·e₁) … small, local problem
new x = x₁ + V_k·y … V_k = [v₁, v₂, …, v_k]

Dot products and lengths actually need O(log p) messages each, so this is O(k²), or O(k² log p), messages altogether; CG would be O(k log p).
97
Example: Computing [Ax,A2x,A3x,…,Akx] for A tridiagonal
Different basis for the same Krylov subspace. What can Proc 1 compute without communication? [Figure: rows x(1:30), (Ax)(1:30), (A²x)(1:30), …, (A⁸x)(1:30), split between Proc 1 and Proc 2.]
98
Example: Computing [Ax,A2x,A3x,…,Akx] for A tridiagonal
Computing the missing entries with one communication and redundant work. [Figure: same layout as before, split between Proc 1 and Proc 2.]
99
Example: Computing [Ax,A2x,A3x,…,Akx] for A tridiagonal
Saving half the redundant work. [Figure: same layout as before, split between Proc 1 and Proc 2.]
100
Example: Computing [Ax,A2x,A3x,…,Akx] for Laplacian
A = a 5-point Laplacian in 2D; the communicated points for k = 3 are shown.
101
Latency-Avoiding GMRES (1)
r = b − A·x₁; β = length(r); v₁ = r/β … O(log p) messages
W_{k+1} = [v₁, A·v₁, A²·v₁, …, Aᵏ·v₁] … O(1) messages
[Q, R] = qr(W_{k+1}) … QR decomposition, O(log p) messages
H_k = R(:, 2:k+1) · (R(1:k, 1:k))⁻¹ … small, local problem
find y minimizing length(H_k·y − β·e₁) … small, local problem
new x = x₁ + Q_k·y … local problem

O(log p) messages altogether, independent of k.
102
Latency-Avoiding GMRES (2)
[Q, R] = qr(W_{k+1}) … QR decomposition, O(log p) messages. An easy, but not so stable, way to do it:
X(myproc) = W_{k+1}ᵀ(myproc) · W_{k+1}(myproc) … local computation
Y = sum_reduction(X(myproc)) … O(log p) messages … Y = W_{k+1}ᵀ · W_{k+1}
R = (cholesky(Y))ᵀ … small, local computation
Q(myproc) = W_{k+1}(myproc) · R⁻¹ … local computation
103
Numerical Example (1)
A diagonal matrix with n = 1000 and Aᵢᵢ ranging from 1 down to 10⁻⁵. Instability as k grows, after many iterations; by using higher precision, the instabilities start later and later.
104
Numerical Example (2)
Partial remedy: restarting periodically (every 120 iterations). Other remedies: higher precision, or a different basis than [v, A·v, …, Aᵏ·v].
105
Operation Counts for [Ax,A2x,A3x,…,Akx] on p procs
Assume the problem is large, so k << n/p in the 1D case and k << n·p^(-1/3) in the 3D case.

Per-processor cost        Standard                                  Optimized
1D mesh (tridiagonal)
  #messages               2k                                        2
  #words sent             2k                                        2k
  #flops                  5kn/p                                     5kn/p + 5k²
  memory                  (k+4)n/p                                  (k+4)n/p + 8k
3D mesh (27-pt stencil)
  #messages               26k                                       26
  #words sent             6kn²p^(-2/3) + O(knp^(-1/3))              6kn²p^(-2/3) + O(k²np^(-1/3)) + O(k³)
  #flops                  53kn³/p                                   53kn³/p + O(k²n²p^(-2/3))
  memory                  (k+28)n³/p + 6n²p^(-2/3) + O(np^(-1/3))   (k+28)n³/p + 168kn²p^(-2/3) + O(k²np^(-1/3))
106
Summary and Future Work
Dense: LAPACK, ScaLAPACK. Communication primitives. Sparse kernels, stencils. Higher-level algorithms. All of the above on new architectures: vector, SMPs, multicore, Cell, … High-level support for tuning: a specification language; integration into compilers.
107
Extra Slides
108
A Sparse Matrix You Encounter Every Day
Who am I? I am a Big Repository Of useful And useless Facts alike. Who am I? (Hint: not your inbox.) You probably use a sparse matrix every day, albeit indirectly. Here, I show a million x million submatrix of the web connectivity graph, constructed from an archive at the Stanford WebBase.
109
What about the Google Matrix?
Google approach: approximately once a month, rank all pages using the connectivity structure, i.e., find the dominant eigenvector of a matrix; at query time, return the list of pages ordered by rank. Matrix: A = αG + (1−α)(1/n)uuᵀ. Markov model: a surfer follows a link with probability α and jumps to a random page with probability 1−α. G is the n x n connectivity matrix [n ≈ billions]; g_ij is nonzero if page i links to page j; columns are normalized to sum to 1; very sparse, about 7-8 nonzeros per row (power-law distribution). u is a vector of all 1's. The steady-state probability x_i of landing on page i is the solution to x = Ax; approximate x by the power method, x = Aᵏx₀, with k ≈ 25 in practice. The matrix to which Google applies the power method is the sum of the connectivity-graph matrix G and a rank-one update. According to Cleve's Corner, α ≈ 0.85. Macroscopic connectivity structure: Kumar et al., "The web as a graph," PODS. Size of G as of May '02: Cleve's Corner, "The world's largest sparse matrix computation." On "k ≈ 25": Taher (Stanford, student of Jeff Ullman), private communication: "For 120M pages, [the computation takes] about 5 hours for 25 iterations, no parallelization. The limiting factor is the disk speed for sequentially reading in the 8 GB matrix (it is assumed to be larger than main memory)." [13 Mar '02]
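As a concrete companion to "approximate x by the power method", here is a hedged, self-contained C sketch that applies k CSR SpMV steps with normalization. For the actual Google matrix, the dense rank-one term would be applied analytically rather than stored, and all names here are illustrative:

    #include <math.h>

    /* x <- normalized A^k * x, with A (n x n) in CSR (ptr, ind, val);
       tmp is caller-provided scratch of length n. */
    void power_method(int n, const int *ptr, const int *ind,
                      const double *val, double *x, double *tmp, int k)
    {
        for (int it = 0; it < k; it++) {
            for (int i = 0; i < n; i++) {            /* tmp = A * x */
                double s = 0.0;
                for (int p = ptr[i]; p < ptr[i+1]; p++)
                    s += val[p] * x[ind[p]];
                tmp[i] = s;
            }
            double nrm = 0.0;                        /* normalize iterate */
            for (int i = 0; i < n; i++) nrm += tmp[i] * tmp[i];
            nrm = sqrt(nrm);
            for (int i = 0; i < n; i++) x[i] = tmp[i] / nrm;
        }
    }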
110
Current Work
Public software release. Impact on library designs: Sparse BLAS, Trilinos, PETSc, … Integration in large-scale applications: DOE accelerator design and plasma physics; geophysical simulation based on block Lanczos (AᵀA·X; LBL). Systematic heuristics for data structure selection? Evaluation of emerging architectures; revisiting vector micros. Other sparse kernels: matrix triple products, Aᵏ·x. Parallelism. Sparse benchmarks (with UTK) [Gahvari & Hoemmen]. Automatic tuning of MPI collective ops [Nishtala, et al.].
111
Summary of High-Level Themes
“Kernel-centric” optimization Vs. basic block, trace, path optimization, for instance Aggressive use of domain-specific knowledge Performance bounds modeling Evaluating software quality Architectural characterizations and consequences Empirical search Hybrid on-line/run-time models Statistical performance models Exploit information from sampling, measuring
112
Related Work My bibliography: 337 entries so far
Sample area 1: Code generation Generative & generic programming Sparse compilers Domain-specific generators Sample area 2: Empirical search-based tuning Kernel-centric linear algebra, signal processing, sorting, MPI, … Compiler-centric profiling + FDO, iterative compilation, superoptimizers, self-tuning compilers, continuous program optimization
113
Future Directions (A Bag of Flaky Ideas)
Composable code generators and search spaces. New application domains, e.g., PageRank: multilevel block algorithms for topic-sensitive search? New kernels, e.g., cryptokernels: rich mathematical structure germane to performance, and lots of hardware. New tuning environments: parallel, Grid, "whole systems". Statistical models of application performance: statistical learning of concise parametric models from traces for architectural evaluation; compiler/automatic derivation of parametric models.
114
Possible Future Work
Different architectures: new FP instruction sets (SSE2), SMP/multicore platforms, vector architectures. Different kernels and higher-level algorithms: parallel versions of kernels with optimized communication; block algorithms (e.g., Lanczos); XBLAS (extra precision). Produce benchmarks: augment the HPCC benchmark. Make it possible to combine optimizations easily for any kernel. Related tuning activities (LAPACK & ScaLAPACK).
115
Review of Tuning by Illustration
(Extra Slides)
116
Splitting for Variable Blocks and Diagonals
Decompose A = A₁ + A₂ + … + A_t: detect "canonical" structures (by sampling), split, and tune each Aᵢ. This improves performance and saves storage. New data structures: unaligned block CSR (relaxes alignment in rows & columns) and row-segmented diagonals.
117
Example: Variable Block Row (Matrix #12)
2.1x over CSR, 1.8x over RB.
118
Example: Row-Segmented Diagonals
2x over CSR.
119
Mixed Diagonal and Block Structure
120
Summary
Automated block size selection: empirical modeling and search; register blocking for SpMV, triangular solve, and AᵀA·x. Not fully automated: given a matrix, selecting splittings and transformations involves lots of combinatorial problems, e.g., TSP reordering to create dense blocks (Pinar '97; Moon, et al. '04). TO DO: Redo!
121
Extra Slides
123
Problem Context
Sparse kernels abound: models of buildings, cars, bridges, economies, …, and Google's PageRank algorithm. Historical trends: sparse matrix-vector multiply (SpMV) achieves 10% of peak, or 2x faster with "hand-tuning", and tuning is becoming more difficult over time; hence the promise of automatic tuning (PHiPAC/ATLAS, FFTW, …). Challenges to high performance: this is not dense linear algebra! Complex data structures with indirect, irregular memory access; performance depends strongly on run-time inputs. (Historical trends: we'll look at the data supporting these claims shortly.)
124
Key Questions, Ideas, Conclusions
How do we tune basic sparse kernels automatically? Empirical modeling and search: up to 4x speedups for SpMV, 1.8x for triangular solve, 4x for AᵀA·x, 2x for A²·x, and 7x for multiple vectors. What are the fundamental limits on performance? Kernel-, machine-, and matrix-specific upper bounds: we achieve 75% or more of them for SpMV, limiting the value of further low-level tuning. Consequences for architecture? General techniques for empirical search-based tuning? Statistical models of performance. Reference implementations are based on CSR format (to be discussed).
125
Road Map Sparse matrix-vector multiply (SpMV) in a nutshell
Historical trends and the need for search Automatic tuning techniques Upper bounds on performance Statistical models of performance
126
Compressed Sparse Row (CSR) Storage
A one-slide crash course on a basic sparse matrix data structure (compressed sparse row, or CSR, format) and a basic kernel: matrix-vector multiply. In CSR we store each nonzero value, but also need an extra index for each nonzero: less storage than dense, but a higher "rate" of storage per nonzero. Matrix-vector multiply (MVM) has no reuse of A, only of the vectors x and y, taken to be dense. MVM with CSR storage makes indirect, irregular accesses to x! So what you'd like to do is: unroll the k loop, but you need to know how many nonzeros are in each row; hoist y[i], not a problem absent aliasing; eliminate ind[k], an extra load, but you need to know the nonzero pattern; and reuse elements of x, but again you need to know the nonzero pattern.

Matrix-vector multiply kernel: y(i) ← y(i) + A(i,j)·x(j)
  for each row i
    for k = ptr[i] to ptr[i+1] − 1 do
      y[i] = y[i] + val[k]·x[ind[k]]
127
Road Map Sparse matrix-vector multiply (SpMV) in a nutshell
Historical trends and the need for search Automatic tuning techniques Upper bounds on performance Statistical models of performance
128
Historical Trends in SpMV Performance
The data: uniprocessor SpMV performance since 1987; "untuned" and "tuned" implementations; cache-based superscalar micros and some vector machines; LINPACK.
129
SpMV Historical Trends: Mflop/s
130
Example: The Difficulty of Tuning
nnz = 1.5 M; kernel: SpMV. Source: an FEM discretization from a NASA structural analysis problem; matrix "olafu", provided by NASA via the University of Florida Sparse Matrix Collection.
131
Still More Surprises More complicated non-zero structure in general
An enlarged submatrix for “ex11”, from an FEM fluid flow application. Original matrix: n = 16614, nnz = 1.1M Source: UF Sparse Matrix Collection
132
Still More Surprises More complicated non-zero structure in general
Example: 3x3 blocking Logical grid of 3x3 cells Suppose we know we want to use 3x3 blocking. SPARSITY lays a logical 3x3 grid on the matrix…
133
Historical Trends: Mixed News
Observations: good news, Moore's-law-like behavior; bad news, "untuned" is 10% of peak or less, and worsening; good news, "tuned" is roughly 2x better today, and improving; bad news, tuning is complex. (Not really news: SpMV is not LINPACK.) Questions: for the application writer, can tuning be automated? For the architect, what machines are good for SpMV?
134
Road Map Sparse matrix-vector multiply (SpMV) in a nutshell
Historical trends and the need for search Automatic tuning techniques SpMV [SC’02; IJHPCA ’04b] Sparse triangular solve (SpTS) [ICS/POHLL ’02] ATA*x [ICCS/WoPLA ’03] Upper bounds on performance Statistical models of performance
135
SPARSITY: Framework for Tuning SpMV
SPARSITY: automatic tuning for SpMV [Im & Yelick '99]. General approach: identify and generate an implementation space, then search the space using empirical models & experiments. Prototype library and heuristic for choosing the register block size; also cache-level blocking and multiple vectors. What's new? A new block-size selection heuristic, within 10% of optimal, replacing the previous version; an expanded implementation space with variable block splitting, diagonals, and combinations; and new kernels: sparse triangular solve, AᵀA·x, Aʳ·x.
136
Automatic Register Block Size Selection
Selecting the r x c block size Off-line benchmark: characterize the machine Precompute Mflops(r,c) using dense matrix for each r x c Once per machine/architecture Run-time “search”: characterize the matrix Sample A to estimate Fill(r,c) for each r x c Run-time heuristic model Choose r, c to maximize Mflops(r,c) / Fill(r,c) Run-time costs Up to ~40 SpMVs (empirical worst case) Using this approach, we do choose the right block sizes for the two previous examples.
137
Accuracy of the Tuning Heuristics (1/4)
DGEMV NOTE: “Fair” flops used (ops on explicit zeros not counted as “work”)
138
Accuracy of the Tuning Heuristics (2/4)
DGEMV
139
Accuracy of the Tuning Heuristics (3/4)
DGEMV
140
Accuracy of the Tuning Heuristics (4/4)
DGEMV
141
Road Map Sparse matrix-vector multiply (SpMV) in a nutshell
Historical trends and the need for search Automatic tuning techniques Upper bounds on performance SC’02 Statistical models of performance
142
Motivation for Upper Bounds Model
Questions Speedups are good, but what is the speed limit? Independent of instruction scheduling, selection What machines are “good” for SpMV?
143
Upper Bounds on Performance: Blocked SpMV
P = (flops) / (time), and flops = 2 · nnz(A), so a lower bound on time gives an upper bound on performance. Two main assumptions: (1) count memory operations only (streaming); (2) count only compulsory and capacity misses, ignoring conflicts. Account for line sizes, and for the matrix size and nnz. Charge the minimum access "latency" α_i at cache level L_i and α_mem at memory, e.g., measured with the Saavedra-Barrera and PMaC MAPS benchmarks.
144
Example: Bounds on Itanium 2
145
Example: Bounds on Itanium 2
146
Example: Bounds on Itanium 2
147
Fraction of Upper Bound Across Platforms
148
Achieved Performance and Machine Balance
Machine balance [Callahan '88; McCalpin '95]: balance = peak flop rate / bandwidth (flops per double). Ideal balance for mat-vec: ≤ 2 flops per double; for SpMV, even less. SpMV is ~streaming: performance ~ 1 / (average load time to stream one array) ~ bandwidth. "Sustained" balance = peak flops / model bandwidth.
150
Where Does the Time Go?
Most of the time is assigned to memory. Caches "disappear" when line sizes are equal. Strictly increasing line sizes.
151
Execution Time Breakdown: Matrix 40
TODO: Update this figure from thesis
152
Speedups with Increasing Line Size
153
Summary: Performance Upper Bounds
What is the best we can do for SpMV? Limits to low-level tuning of blocked implementations Refinements? What machines are good for SpMV? Partial answer: balance characterization Architectural consequences? Example: Strictly increasing line sizes
154
Road Map Sparse matrix-vector multiply (SpMV) in a nutshell
Historical trends and the need for search Automatic tuning techniques Upper bounds on performance Tuning other sparse kernels Statistical models of performance FDO ’00; IJHPCA ’04a
155
Statistical Models for Automatic Tuning
Idea 1: a statistical criterion for stopping a search. A general search model: generate an implementation, measure its performance, repeat; stop when the probability of being within ε of optimal falls below a threshold. The distribution can be estimated on-line. Idea 2: statistical performance models. Problem: choose one among m implementations at run-time; sample performance off-line and build a statistical model.
156
Example: Select a Matmul Implementation
157
Example: Support Vector Classification
158
Road Map Sparse matrix-vector multiply (SpMV) in a nutshell
Historical trends and the need for search Automatic tuning techniques Upper bounds on performance Tuning other sparse kernels Statistical models of performance Summary and Future Work
162
Acknowledgements Super-advisors: Jim and Kathy
Undergraduate R.A.s: Attila, Ben, Jen, Jin, Michael, Rajesh, Shoaib, Sriram, Tuyet-Linh See pages xvi—xvii of dissertation.
163
TSP-based Reordering: Before
(Pinar ’97; Moon, et al ‘04)
164
TSP-based Reordering: After
(Pinar ’97; Moon, et al ‘04) Up to 2x speedups over CSR
165
Example: L2 Misses on Itanium 2
Misses measured using PAPI [Browne ’00]
166
Example: Distribution of Blocked Non-Zeros
TO DO: Replace this with ex11 spy plot
171
Sparse/Dense Partitioning for SpTS
Partition L into sparse parts (L₁, L₂) and a dense trailing triangle L_D, then perform the sparse triangular solve in three steps: SPARSITY-style optimizations for steps (1)-(2), and DTRSV for step (3). Tuning parameters: block size and the size of the dense triangle.
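The three numbered steps referenced on the slide (the equations themselves were lost in this transcript) are presumably the standard partitioned solve:

\[
\begin{pmatrix} L_1 & 0 \\ L_2 & L_D \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \end{pmatrix}
\;\Longrightarrow\;
(1)\; L_1 x_1 = b_1,\qquad
(2)\; \hat b_2 = b_2 - L_2 x_1,\qquad
(3)\; L_D x_2 = \hat b_2 .
\]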
172
SpTS Performance: Power3
174
Summary of SpTS and AAT*x Results
SpTS: similar to SpMV; 1.8x speedups, with limited benefit from low-level tuning. AAᵀx and AᵀAx: cache interleaving only, up to 1.6x speedups; register + cache, up to 4x speedups (1.8x over register-only); a similar heuristic with the same accuracy (~10% of optimal); further from the upper bounds (60-80%), so there is an opportunity for better low-level tuning a la PHiPAC/ATLAS. Matrix triple products? Aᵏ·x? Preliminary work.
175
Register Blocking: Speedup
176
Register Blocking: Performance
177
Register Blocking: Fraction of Peak
178
Example: Confidence Interval Estimation
179
Costs of Tuning
180
Splitting + UBCSR: Pentium III
181
Splitting + UBCSR: Power4
182
Splitting+UBCSR Storage: Power4
184
Example: Variable Block Row (Matrix #13)
185
Dense Tuning is Hard, Too
Even dense matrix multiply can be notoriously difficult to tune
186
2-D cross section of 3-D space.
Dense matrix multiply: surprising performance as register tile size varies.
188
Preliminary Results (Matrix Set 2): Itanium 2
Max speedups are several times faster than on earlier architectures (cf. 200 Mflop/s on an 800 MHz Itanium 1, 150 Mflop/s on a Power 3, 300 Mflop/s on a GHz-class Pentium). Matrix categories: 0-2: dense, dense lower triangle, dense upper triangle; 3-18: FEM (uniform blocking); 19-38: FEM (variable blocking); 39-42: protein modeling; 43: quantum chromodynamics; 44-46: chemistry and chemical engineering; 47-49: circuits; 50-57: macroeconomic modeling; 58-66: statistical modeling; 67-68: finite-field problems; 69-77: linear programming; 78-81: information retrieval. [Axis group labels: Dense, FEM, FEM (var), Bio, Econ, Stat, LP, Web/IR.]
189
Multiple Vector Performance
On a dense matrix in sparse format, with multiple vectors we can get close to peak machine speed (up to 3 Gflop/s, peak on this machine is 3.6 Gflop/s).
190
“40: webdoc” is the LSI matrix
(Figure taken from the IJHPCA paper, to appear.) "41-42" are linear programming matrices; 43 is also linear programming (a scheduling problem from the Italian Railways). NOTE: matrix numbers (40-44) are not consistent with the previous slide.
191
What about the Google Matrix?
Google approach: approximately once a month, rank all pages using the connectivity structure, i.e., find the dominant eigenvector of a matrix; at query time, return the list of pages ordered by rank. Matrix: A = αG + (1−α)(1/n)uuᵀ. Markov model: a surfer follows a link with probability α and jumps to a random page with probability 1−α. G is the n x n connectivity matrix [n ≈ 3 billion]: gij is non-zero if page i links to page j, normalized so each column sums to 1; very sparse, about 7–8 non-zeros per row (power-law distribution). u is a vector of all 1s. The steady-state probability xi of landing on page i is the solution to x = Ax. Approximate x by the power method: x ≈ Aᵏx0; in practice, k ≈ 25. The matrix to which Google applies the power method is the sum of the connectivity graph matrix G plus a rank-one update. According to Cleve's Corner, α ≈ 0.85. Macroscopic connectivity structure: Kumar, et al. “The web as a graph.” PODS. Size of G as of May ’02: Cleve's Corner, “The world's largest sparse matrix computation.” “k ≈ 25”: Taher Haveliwala of Stanford (student of Jeff Ullman), private communication: “For 120M pages, [the computation takes] about 5 hours for 25 iterations, no parallelization. The limiting factor is the disk speed for sequentially reading in the 8 GB matrix (it is assumed to be larger than main memory).” [13 Mar ’02]
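A matrix-free sketch of this iteration on a toy G, never forming the dense rank-one part; names and sizes are illustrative (the real G is ~3 billion pages), with α = 0.85 and k = 25 as above:

```python
import numpy as np
from scipy.sparse import random as sprandom, diags

def pagerank_power(G, alpha=0.85, k=25):
    """Power method x <- A x with A = alpha*G + (1-alpha)*(1/n)*u*u^T,
    where u is the all-ones vector, so u^T x = sum(x)."""
    n = G.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(k):
        # A x = alpha*(G x) + ((1-alpha)/n) * u * (u^T x)
        x = alpha * (G @ x) + (1.0 - alpha) / n * x.sum()
        x /= np.abs(x).sum()             # renormalize
    return x

# toy column-stochastic G
G = sprandom(100, 100, density=0.07, format='csc', random_state=0)
colsum = np.asarray(G.sum(axis=0)).ravel()
G = G @ diags(1.0 / np.maximum(colsum, 1e-12))   # columns sum to ~1
print(pagerank_power(G)[:5])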
192
The blue bars compare the effect of just reordering the matrix—no significant improvements.
The red bars compare the effect of combined reordering and register blocking. Each red bar is labeled by the block size chosen, and by the fill (in parentheses). We are currently looking at splitting the matrix to reduce this fill overhead while sustaining the performance improvement. In terms of storage cost (MB for double-precision values and 32-bit integers): size(natural, RCM, MMD, no blocking) ≈ 35.5 MB; size(natural, 2x1) ≈ 44.3 MB, or ~1.25x; size(RCM, 3x1) ≈ 54.9 MB, or ~1.55x; size(MMD, 4x1) ≈ 66.3 MB, or ~1.87x. Let k be the number of non-zeros, f the fill, and r x c the block size. Also suppose sizeof(floating-point data type) = sizeof(integer data type). Then the size of CSR storage is sizeof(A, ‘csr’) = 2k (ignoring row pointers), while sizeof(A, ‘rxc’) = kf · (1 + 1/(rc)). So we expect a storage savings, even with fill, when sizeof(A, ‘rxc’) < sizeof(A, ‘csr’): kf · (1 + 1/(rc)) < 2k, i.e., f < 2 / (1 + 1/(rc)). As rc → ∞, this condition becomes f < 2. (This case corresponds to the floating-point values and integers being both 32-bit or both 64-bit, for example.) If sizeof(real) = 2·sizeof(int), the condition becomes f < 1.5 as rc → ∞ (values in double precision, indices in 32-bit integers). If sizeof(real) = 0.5·sizeof(int), the condition becomes f < 3 as rc → ∞ (values in single precision, indices in 64-bit integers). A small helper evaluating this break-even fill appears below.
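A small helper (hypothetical, for illustration) that evaluates the break-even fill for given block and word sizes, directly from the inequality above:

```python
def max_fill_for_savings(r, c, real_bytes=8, int_bytes=4):
    """Largest fill ratio f for which r x c blocking still saves storage:
    f * (real + int/(r*c)) < real + int, in bytes per true non-zero
    (row pointers ignored, as in the analysis above)."""
    return (real_bytes + int_bytes) / (real_bytes + int_bytes / (r * c))

print(max_fill_for_savings(1, 1))                             # 1.0: no slack
print(max_fill_for_savings(4, 4))                             # ~1.45 (f < 1.5 limit)
print(max_fill_for_savings(4, 4, real_bytes=4, int_bytes=4))  # ~1.88 (f < 2 limit)
```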
193
MAPS Benchmark Example: Power4
194
MAPS Benchmark Example: Itanium 2
195
Saavedra-Barrera Example: Ultra 2i
196
I think we can mention the bounds without talking about them in detail. The main point is that this picture is a guide to what performance we get without blocking (reference) and what we might expect (bounds). Note that absolute performance is highly dependent on matrix structure.
197
Summary of Results: Pentium III
Plot of the best block size (exhaustive search). Points: it helps for some matrices and not others, but since not blocking (1x1) is part of the implementation space, we never do worse. We're close to the bounds. We should emphasize that the Intel compiler is generating the code here, so kudos to the Intel compiler group.
198
Summary of Results: Pentium III (3/3)
And our heuristic always chooses a good block size (including 1x1) in practice. Similar results for other platforms appear in the “Extra Slides” section.
199
Execution Time Breakdown (PAPI): Matrix 40
200
Preliminary Results (Matrix Set 1): Itanium 2
[Figure legend: Dense, FEM, FEM (var), Assorted, LP.]
201
Tuning Sparse Triangular Solve (SpTS)
Compute x = L⁻¹·b, where L is sparse lower triangular and x, b are dense. L from sparse LU has rich dense substructure: the dense trailing triangle can account for 20–90% of the matrix non-zeros. SpTS optimizations: split into a sparse trapezoid and a dense trailing triangle; use tuned dense BLAS (DTRSV) on the dense triangle; use Sparsity register blocking on the sparse part. Tuning parameters: size of the dense trailing triangle, register block size.
202
Sparse Kernels and Optimizations
Sparse matrix-vector multiply (SpMV): y = A·x. Sparse triangular solve (SpTS): x = T⁻¹·b. y = AAᵀ·x, y = AᵀA·x. Powers (y = Aᵏ·x), sparse triple product (R·A·Rᵀ), … Optimization techniques (implementation space): register blocking; cache blocking; multiple dense vectors (x); A has special structure (e.g., symmetric, banded, …); hybrid data structures (e.g., splitting, switch-to-dense, …); matrix reordering. How and when do we search? Off-line: benchmark implementations. Run-time: estimate matrix properties, evaluate performance models based on benchmark data.
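For concreteness, here is a Python sketch of register-blocked (BCSR) SpMV, the first technique in the list. A tuned implementation would be generated C with the r x c inner loops fully unrolled and values held in registers; this version only shows the data structure and loop nest:

```python
import numpy as np
from scipy.sparse import random as sprandom

def bcsr_spmv(r, c, indptr, indices, data, x):
    """y = A x with A in r x c BCSR: indptr/indices index block rows and
    block columns; data holds one dense r x c block per stored block."""
    n_brows = len(indptr) - 1
    y = np.zeros(n_brows * r)
    for bi in range(n_brows):
        yb = np.zeros(r)
        for jb in range(indptr[bi], indptr[bi + 1]):
            j = indices[jb] * c
            yb += data[jb] @ x[j:j + c]   # one r x c block times c entries of x
        y[bi * r:(bi + 1) * r] = yb
    return y

A = sprandom(8, 8, density=0.4, format='csr', random_state=0).tobsr((2, 2))
x = np.random.rand(8)
print(np.allclose(bcsr_spmv(2, 2, A.indptr, A.indices, A.data, x), A @ x))
```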
203
Cache Blocked SpMV on LSI Matrix: Ultra 2i
10k x 255k, 3.7M non-zeros. Baseline: 16 Mflop/s. Best block size & performance: 16k x 64k, 28 Mflop/s. The best block size shown is for this platform; the optimal size is machine dependent.
204
Cache Blocking on LSI Matrix: Pentium 4
10k x 255k, 3.7M non-zeros. Baseline: 44 Mflop/s. Best block size & performance: 16k x 16k, 210 Mflop/s. The LSI matrix has nearly randomly distributed non-zeros.
205
Cache Blocked SpMV on LSI Matrix: Itanium
10k x 255k, 3.7M non-zeros. Baseline: 25 Mflop/s. Best block size & performance: 16k x 32k, 72 Mflop/s.
206
Cache Blocked SpMV on LSI Matrix: Itanium 2
10k x 255k, 3.7M non-zeros. Baseline: 170 Mflop/s. Best block size & performance: 16k x 65k, 275 Mflop/s.
207
Inter-Iteration Sparse Tiling (1/3)
[Strout, et al. ’01] Let A be 6x6 tridiagonal. Consider y = A²x: t = A·x, y = A·t. [Figure: dependence graph with node columns x_i, t_i, y_i.] Nodes: vector elements; edges: matrix elements a_ij. (This generalization of the Strout algorithm for an arbitrary Aᵏ·x is stolen from my quals talk.)
208
Inter-Iteration Sparse Tiling (2/3)
[Strout, et al. ’01] Let A be 6x6 tridiagonal. Consider y = A²x: t = A·x, y = A·t. Nodes: vector elements; edges: matrix elements a_ij. [Figure: same dependence graph.] Orange = everything needed to compute y1. Reuse a11, a12.
209
Inter-Iteration Sparse Tiling (3/3)
[Strout, et al. ’01] Let A be 6x6 tridiagonal. Consider y = A²x: t = A·x, y = A·t. Nodes: vector elements; edges: matrix elements a_ij. [Figure: same dependence graph.] Orange = everything needed to compute y1; reuse a11, a12. Grey = y2, y3; reuse a23, a33, a43. Depending on cache placement policies, we could reuse even more grey edges, since they were read on the previous iteration.
210
Inter-Iteration Sparse Tiling: Issues
Tile sizes (colored regions) grow with the number of iterations and with increasing out-degree. G is likely to have a few nodes with high out-degree (e.g., Yahoo!). Mathematical tricks to limit tile size? Judicious dropping of edges [Ng ’01]. The idea of dropping edges follows from: Andrew Ng, Alice Zheng, Michael I. Jordan. “Link Analysis, Eigenvectors, and Stability.” IJCAI, 2001.
211
Summary and Questions Need to understand matrix structure and machine
BeBOP: a suite of techniques to deal with different sparse structures and architectures. Google matrix problem: established techniques within an iteration; ideas for inter-iteration optimizations; the mathematical structure of the problem may help. Questions: What is the structure of G? What are the computational bottlenecks? Enabling future computations, e.g., the topic-sensitive PageRank multiple-vector version [Haveliwala ’02]? See bebop.cs.berkeley.edu for more info, including more complete Itanium 2 results.
212
Exploiting Matrix Structure
Symmetry (numerical or structural): reuse matrix entries; can combine with register blocking, multiple vectors, … Matrix splitting: split the matrix, e.g., into r x c and 1 x 1 parts, with no fill overhead. Large matrices with random structure, e.g., Latent Semantic Indexing (LSI) matrices: the technique is cache blocking; store the matrix as 2^i x 2^j sparse submatrices; effective when the x vector is large; currently we search to find the fastest size. Symmetry may be numerical or structural. In matrix splitting, we try to overcome the problem of too much fill by splitting into “true” r x c blocks (i.e., without fill) plus a remainder in conventional (CSR) storage.
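A sketch of the symmetric-reuse idea, storing only the lower triangle so each off-diagonal entry is applied twice; this is illustrative Python, not the Sparsity/OSKI kernel:

```python
import numpy as np
from scipy.sparse import random as sprandom, tril

def sym_spmv(n, indptr, indices, vals, x):
    """y = A x storing only the lower triangle of symmetric A (CSR):
    each stored off-diagonal a_ij updates both y[i] and y[j], roughly
    halving matrix memory traffic."""
    y = np.zeros(n)
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            j = indices[k]
            y[i] += vals[k] * x[j]
            if j != i:
                y[j] += vals[k] * x[i]   # mirrored upper-triangle entry
    return y

A = sprandom(50, 50, density=0.1, random_state=0)
A = (A + A.T).tocsr()                    # make symmetric
L = tril(A).tocsr()                      # keep lower triangle only
x = np.random.rand(50)
print(np.allclose(sym_spmv(50, L.indptr, L.indices, L.data, x), A @ x))
```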
213
Symmetric SpMV Performance: Pentium 4
214
SpMV with Split Matrices: Ultra 2i
215
Cache Blocking on Random Matrices: Itanium
Speedup on four banded random matrices.
216
Sparse Kernels and Optimizations
Sparse matrix-vector multiply (SpMV): y = A·x. Sparse triangular solve (SpTS): x = T⁻¹·b. y = AAᵀ·x, y = AᵀA·x. Powers (y = Aᵏ·x), sparse triple product (R·A·Rᵀ), … Optimization techniques (implementation space): register blocking; cache blocking; multiple dense vectors (x); A has special structure (e.g., symmetric, banded, …); hybrid data structures (e.g., splitting, switch-to-dense, …); matrix reordering. How and when do we search? Off-line: benchmark implementations. Run-time: estimate matrix properties, evaluate performance models based on benchmark data.
217
Register Blocked SpMV: Pentium III
218
Register Blocked SpMV: Ultra 2i
219
Register Blocked SpMV: Power3
220
Register Blocked SpMV: Itanium
221
Possible Optimization Techniques
Within an iteration, i.e., computing (G+uuᵀ)·x once: cache block G·x; on linear programming matrices and matrices with random structure (e.g., LSI), 1.5–4x speedups; the best block size is matrix and machine dependent. Reordering and/or splitting of G to separate dense structure (rows, columns, blocks). Between iterations, e.g., (G+uuᵀ)²x: (G+uuᵀ)²x = G²x + (Gu)(uᵀx) + u(uᵀGx) + u(uᵀu)(uᵀx). Compute Gu, uᵀG, and uᵀu once for all iterations. G²x: inter-iteration tiling to read G only once. We have established techniques for efficiently computing within an iteration. What about computing across iterations?
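A sketch of the between-iterations algebra, checking the expansion against an explicit dense computation; names are illustrative:

```python
import numpy as np
from scipy.sparse import random as sprandom

def setup(G, u):
    """One-time work shared by all iterations of (G + u u^T)^2 x."""
    return G @ u, G.T @ u, u @ u          # Gu, G^T u (= (u^T G)^T), u^T u

def apply_squared(G, u, Gu, Gtu, utu, x):
    """(G+uu^T)^2 x = G^2 x + (Gu)(u^T x) + u(u^T G x) + u(u^T u)(u^T x)."""
    ux = u @ x
    return G @ (G @ x) + Gu * ux + u * (Gtu @ x) + u * (utu * ux)

G = sprandom(200, 200, density=0.02, format='csr', random_state=0)
u = np.ones(200)
x = np.random.rand(200)
Gu, Gtu, utu = setup(G, u)
M = G.toarray() + np.outer(u, u)          # explicit dense reference
print(np.allclose(apply_squared(G, u, Gu, Gtu, utu, x), M @ (M @ x)))
```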
222
Multiple Vector Performance: Itanium
223
Sparse Kernels and Optimizations
Sparse matrix-vector multiply (SpMV): y = A·x. Sparse triangular solve (SpTS): x = T⁻¹·b. y = AAᵀ·x, y = AᵀA·x. Powers (y = Aᵏ·x), sparse triple product (R·A·Rᵀ), … Optimization techniques (implementation space): register blocking; cache blocking; multiple dense vectors (x); A has special structure (e.g., symmetric, banded, …); hybrid data structures (e.g., splitting, switch-to-dense, …); matrix reordering. How and when do we search? Off-line: benchmark implementations. Run-time: estimate matrix properties, evaluate performance models based on benchmark data.
224
SpTS Performance: Itanium
(See POHLL ’02 workshop paper, at ICS ’02.)
225
Sparse Kernels and Optimizations
Sparse matrix-vector multiply (SpMV): y = A·x. Sparse triangular solve (SpTS): x = T⁻¹·b. y = AAᵀ·x, y = AᵀA·x. Powers (y = Aᵏ·x), sparse triple product (R·A·Rᵀ), … Optimization techniques (implementation space): register blocking; cache blocking; multiple dense vectors (x); A has special structure (e.g., symmetric, banded, …); hybrid data structures (e.g., splitting, switch-to-dense, …); matrix reordering. How and when do we search? Off-line: benchmark implementations. Run-time: estimate matrix properties, evaluate performance models based on benchmark data.
226
Optimizing the AAᵀx Kernel: y = AAᵀ·x, where A is sparse and x, y are dense
Arises in linear programming and in computation of the SVD. Conventional implementation: compute z = Aᵀ·x, then y = A·z. Elements of A can be reused: writing A = [a1 … am] by columns, y = AAᵀx = Σk ak(akᵀx), so each column ak is read once and used twice. When the ak represent blocks of columns, we can apply register blocking.
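A column-oriented sketch of this reuse in CSC format; illustrative, not the tuned kernel:

```python
import numpy as np
from scipy.sparse import random as sprandom

def aat_x(A, x):
    """y = A A^T x computed as sum_k a_k (a_k^T x): each column a_k is
    read once and used for both the dot product and the update
    (A in CSC format, so columns are contiguous)."""
    y = np.zeros(A.shape[0])
    for k in range(A.shape[1]):
        lo, hi = A.indptr[k], A.indptr[k + 1]
        rows, vals = A.indices[lo:hi], A.data[lo:hi]
        t = vals @ x[rows]               # a_k^T x
        y[rows] += t * vals              # y += t * a_k
    return y

A = sprandom(100, 80, density=0.05, format='csc', random_state=0)
x = np.random.rand(100)
print(np.allclose(aat_x(A, x), A @ (A.T @ x)))
```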
227
Optimized AAT*x Performance: Pentium III
228
Current Directions Applying new optimizations
Applying new optimizations: other split data structures (variable block, diagonal, …); matrix reordering to create block structure; structural symmetry. New kernels (triple product R·A·Rᵀ, powers Aᵏ, …). Tuning parameter selection. Building an automatically tuned sparse matrix library: extending the Sparse BLAS; leveraging existing sparse compilers as code-generation infrastructure. More thoughts on this topic tomorrow.
229
Related Work Automatic performance tuning systems Code generation
PHiPAC [Bilmes, et al., ’97], ATLAS [Whaley & Dongarra ’98] FFTW [Frigo & Johnson ’98], SPIRAL [Pueschel, et al., ’00], UHFFT [Mirkovic and Johnsson ’00] MPI collective operations [Vadhiyar & Dongarra ’01] Code generation FLAME [Gunnels & van de Geijn, ’01] Sparse compilers: [Bik ’99], Bernoulli [Pingali, et al., ’97] Generic programming: Blitz++ [Veldhuizen ’98], MTL [Siek & Lumsdaine ’98], GMCL [Czarnecki, et al. ’98], … Sparse performance modeling [Temam & Jalby ’92], [White & Saddayappan ’97], [Navarro, et al., ’96], [Heras, et al., ’99], [Fraguela, et al., ’99], …
230
More Related Work Compiler analysis, models Sparse BLAS interfaces
Compiler analysis and models: CROPS [Carter, Ferrante, et al.]; serial sparse tiling [Strout ’01]; TUNE [Chatterjee, et al.]; iterative compilation [O’Boyle, et al., ’98]; Broadway compiler [Guyer & Lin ’99]; [Brewer ’95]; ADAPT [Voss ’00]. Sparse BLAS interfaces: BLAST Forum (Chapter 3); NIST Sparse BLAS [Remington & Pozo ’94]; SparseLib++; SPARSKIT [Saad ’94]; Parallel Sparse BLAS [Filippone, et al. ’96]
231
Context: Creating High-Performance Libraries
Application performance dominated by a few computational kernels Today: Kernels hand-tuned by vendor or user Performance tuning challenges Performance is a complicated function of kernel, architecture, compiler, and workload Tedious and time-consuming Successful automated approaches Dense linear algebra: ATLAS/PHiPAC Signal processing: FFTW/SPIRAL/UHFFT
232
Cache Blocked SpMV on LSI Matrix: Itanium
233
Sustainable Memory Bandwidth
234
Multiple Vector Performance: Pentium 4
235
Multiple Vector Performance: Itanium
236
Multiple Vector Performance: Pentium 4
237
Optimized AAT*x Performance: Ultra 2i
238
Optimized AAT*x Performance: Pentium 4
239
Tuning Pays Off—PHiPAC
240
Tuning pays off – ATLAS. Extends applicability of PHiPAC; incorporated in Matlab (with the rest of LAPACK)
241
Register Tile Sizes (Dense Matrix Multiply)
333 MHz Sun Ultra 2i 2-D slice of 3-D space; implementations color-coded by performance in Mflop/s 16 registers, but 2-by-3 tile size fastest A 2-D slice of the 3-D core matmul space, with all other generator options fixed. This shows the performance (Mflop/s) of various core matmul implementations on a small in-L2 cache workload. The “best” is the dark red square at (m0=2,k0=1,n0=8), which achieved 620 Mflop/s. This experiment was performed on the Sun Ultra-10 workstation (333 MHz Ultra II processor, 2 MB L2 cache) using the Sun cc compiler v5.0 with the flags -dalign -xtarget=native -xO5 -xarch=v8plusa. The space is discrete and highly irregular. (This is not even the worst example!)
242
High Precision GEMV (XBLAS)
243
High Precision Algorithms (XBLAS)
Double-double (a high-precision word represented as a pair of doubles). Many variations on these algorithms; we currently use Bailey's. Exploiting extra-wide registers: suppose s(1), …, s(n) have f-bit fractions and SUM has an F-bit fraction with F > f. Consider the following algorithm for S = Σi=1..n s(i): sort so that |s(1)| ≥ |s(2)| ≥ … ≥ |s(n)|; SUM = 0; for i = 1 to n, SUM = SUM + s(i); end for; sum = SUM. Theorem (D., Hida): suppose F < 2f (less than double the precision). If n ≤ 2^(F−f) + 1, then the error is ≤ 1.5 ulps. If n = 2^(F−f) + 2, then the error is ≤ 2^(2f−F) ulps (can be ≥ 1). If n ≥ 2^(F−f) + 3, then the error can be arbitrary (S ≠ 0 but sum = 0). Examples: s(i) double (f = 53), SUM double extended (F = 64): accurate if n ≤ 2^11 + 1 = 2049. Dot product of single-precision x(i) and y(i): s(i) = x(i)·y(i) (f = 2·24 = 48), SUM double extended (F = 64): accurate if n ≤ 2^16 + 1 = 65537.
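A small demonstration of the wide-accumulator effect. Note the caveat: with f = 24 (float32 fractions) and F = 53 (float64 accumulator) we are in the easy F > 2f regime rather than the theorem's tighter F < 2f case, so the wide sum is very accurate for any modest n:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal(20_000).astype(np.float32)
s = s[np.argsort(-np.abs(s))]            # |s(1)| >= |s(2)| >= ...

wide, narrow = np.float64(0.0), np.float32(0.0)
for v in s:
    wide += np.float64(v)                # F-bit "extra-wide register"
    narrow += v                          # f-bit accumulator, for contrast

ref = float(np.sum(s.astype(np.longdouble)))
print(abs(float(wide) - ref), abs(float(narrow) - ref))
```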
244
More Extra Slides
245
Opteron (1.4GHz, 2.8GFlop peak)
Nonsymmetric peak = 504 Mflop/s; symmetric peak = 612 Mflop/s. Beats ATLAS DGEMV's 365 Mflop/s.
246
Awards Best Paper, Intern. Conf. Parallel Processing, 2004
“Performance models for evaluation and automatic performance tuning of symmetric sparse matrix-vector multiply.” Best Student Paper, Intern. Conf. Supercomputing, Workshop on Performance Optimization via High-Level Languages and Libraries, 2003 (Best Student Presentation too), to Richard Vuduc: “Automatic performance tuning and analysis of sparse triangular solve.” Finalist, Best Student Paper, Supercomputing 2002, to Richard Vuduc: “Performance Optimization and Bounds for Sparse Matrix-vector Multiply.” Best Presentation Prize, MICRO-33: 3rd ACM Workshop on Feedback-Directed Dynamic Optimization, 2000: “Statistical Modeling of Feedback Data in an Automatic Tuning System.”
247
Accuracy of the Tuning Heuristics (4/4)
248
Can Match DGEMV Performance
249
Fraction of Upper Bound Across Platforms
250
Achieved Performance and Machine Balance
Machine balance [Callahan ’88; McCalpin ’95]: Balance = peak flop rate / bandwidth (flops per double). Lower is better (i.e., we can hope to reach a higher fraction of the peak flop rate). Ideal balance for mat-vec: ≤ 2 flops per double; for SpMV, even less.
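As a worked example of how balance bounds SpMV, since SpMV performs at most ~2 flops per matrix double streamed from memory. The peak figure below matches these slides' Itanium 2 numbers; the bandwidth figure is an assumption for illustration only:

```python
def spmv_peak_fraction(peak_gflops, bandwidth_gbytes_per_s):
    """Upper bound on SpMV's fraction of machine peak from balance alone:
    at most ~2 flops per matrix double moved from memory."""
    doubles_per_sec = bandwidth_gbytes_per_s / 8.0
    balance = peak_gflops / doubles_per_sec     # flops per double
    return min(1.0, 2.0 / balance)

# e.g., 3.6 Gflop/s peak with an assumed 6.4 GB/s of bandwidth:
print(spmv_peak_fraction(3.6, 6.4))   # ~0.44, i.e., <= 44% of peak
```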
252
Where Does the Time Go? Most time assigned to memory
Caches “disappear” from the model when successive line sizes are equal; strictly increasing line sizes keep each level visible.
253
Execution Time Breakdown: Matrix 40
254
Execution Time Breakdown (PAPI): Matrix 40
255
Speedups with Increasing Line Size
256
Summary: Performance Upper Bounds
What is the best we can do for SpMV? Limits to low-level tuning of blocked implementations; refinements? What machines are good for SpMV? Partial answer: balance characterization. Architectural consequences? It helps to have strictly increasing line sizes.
257
Evaluating algorithms and machines for SpMV
Metrics: speedups; Mflop/s (“fair” flops); fraction of peak. Questions: speedups are good, but what is “the best,” independent of instruction scheduling and selection? Can SpMV be further improved or not? What machines are “good” for SpMV?
258
Tuning Dense BLAS —PHiPAC
259
Tuning Dense BLAS– ATLAS
Extends applicability of PHiPAC; incorporated in Matlab (with the rest of LAPACK)
260
Fast algorithms are rare.
Consider the Intel Pentium 4/1500: First circle: only 10% of codes run at ≥ 63% of peak. Second circle: only 1% of codes run at ≥ 73% of peak. Third circle: only 0.2% of codes run at ≥ 90% of peak.
261
Statistical Models for Automatic Tuning
Idea 1: a statistical criterion for stopping a search. A general search model: generate an implementation, measure its performance, repeat; stop when the probability that the best seen is farther than ε from optimal falls below a threshold. The distribution can be estimated on-line. Idea 2: statistical performance models. Problem: choose 1 among m implementations at run-time. Sample performance off-line, build a statistical model.
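A sketch of Idea 1's stopping rule in its simplest distribution-free form (not the paper's exact estimator): the best of t random samples exceeds the (1−ε)-quantile of the performance distribution with probability at least 1 − (1−ε)^t, so stop once (1−ε)^t ≤ α:

```python
import math
import random

def search(measure, candidates, eps=0.05, alpha=0.01):
    """Random search with a statistical stopping rule: sample until the
    best seen beats the (1-eps)-quantile with probability >= 1 - alpha."""
    t_needed = math.ceil(math.log(alpha) / math.log(1.0 - eps))  # ~90 here
    random.shuffle(candidates)
    best_impl, best_perf = None, float('-inf')
    for impl in candidates[:t_needed]:
        p = measure(impl)                # run and time one implementation
        if p > best_perf:
            best_impl, best_perf = impl, p
    return best_impl, best_perf
```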
262
SpMV Historical Trends: Fraction of Peak
263
Motivation for Automatic Performance Tuning of SpMV
Historical trends: sparse matrix-vector multiply (SpMV) runs at 10% of peak or less. Performance depends on machine, kernel, and matrix; the matrix is known only at run-time; the best data structure + implementation can be surprising. Our approach: empirical performance modeling and algorithm search. (Historical trends: we'll look at the data supporting these claims shortly.)
264
Accuracy of the Tuning Heuristics (3/4)
265
Accuracy of the Tuning Heuristics (3/4)
DGEMV
266
Sparse CALA Summary Optimizing one SpMV too limiting
Take k steps of a Krylov subspace method (GMRES, CG, Lanczos, Arnoldi); assume the matrix is “well-partitioned,” with modest surface-to-volume ratio. Parallel implementation: conventional, O(k log p) messages; CALA, O(log p) messages (optimal). Serial implementation: conventional, O(k) moves of data from slow to fast memory; CALA, O(1) moves of data (optimal). Can incorporate some preconditioners: need to be able to “compress” interactions between distant i, j (hierarchical, semiseparable matrices, …). Lots of speedup possible (measured and modeled); the price is some redundant computation.
267
Potential Impact on Applications: T3P
Application: accelerator design [Ko]; 80% of time spent in SpMV. Relevant optimization techniques: symmetric storage, register blocking. On a single-processor Itanium 2: 1.68x speedup, 532 Mflop/s, or 15% of the 3.6 Gflop/s peak; 4.4x speedup with multiple (8) vectors, 1380 Mflop/s, or 38% of peak.