1
James Demmel www.cs.berkeley.edu/~demmel/cs267_Spr12
CS267 – Lecture 14 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel
2
Outline Motivation for Automatic Performance Tuning
Results for sparse matrix kernels OSKI = Optimized Sparse Kernel Interface pOSKI for multicore Tuning Higher Level Algorithms Future Work, Class Projects BeBOP: Berkeley Benchmarking and Optimization Group Many results shown from current and former members Meet weekly T 12:30-2, in 380 Soda
3
Motivation for Automatic Performance Tuning
Writing high performance software is hard Make programming easier while getting high speed Ideal: program in your favorite high level language (Matlab, Python, PETSc…) and get a high fraction of peak performance Reality: Best algorithm (and its implementation) can depend strongly on the problem, computer architecture, compiler,… Best choice can depend on knowing a lot of applied mathematics and computer science How much of this can we teach? How much of this can we automate?
4
Examples of Automatic Performance Tuning (1)
Dense BLAS Sequential PHiPAC (UCB), then ATLAS (UTK) (used in Matlab) math-atlas.sourceforge.net/ Internal vendor tools Fast Fourier Transform (FFT) & variations Sequential and Parallel FFTW (MIT) Digital Signal Processing SPIRAL: (CMU) Communication Collectives (UCB, UTK) Rose (LLNL), Bernoulli (Cornell), Telescoping Languages (Rice), … More projects, conferences, government reports, … Start-up based on SPIRAL?
5
Examples of Automatic Performance Tuning (2)
What do dense BLAS, FFTs, signal processing, MPI reductions have in common? Can do the tuning off-line: once per architecture, algorithm Can take as much time as necessary (hours, a week…) At run-time, algorithm choice may depend only on few parameters Matrix dimension, size of FFT, etc.
6
Tuning Register Tile Sizes (Dense Matrix Multiply)
333 MHz Sun Ultra 2i 2-D slice of 3-D space; implementations color-coded by performance in Mflop/s 16 registers, but 2-by-3 tile size fastest A 2-D slice of the 3-D core matmul space, with all other generator options fixed. This shows the performance (Mflop/s) of various core matmul implementations on a small in-L2 cache workload. The “best” is the dark red square at (m0=2,k0=1,n0=8), which achieved 620 Mflop/s. This experiment was performed on the Sun Ultra-10 workstation (333 MHz Ultra II processor, 2 MB L2 cache) using the Sun cc compiler v5.0 with the flags -dalign -xtarget=native -xO5 -xarch=v8plusa. The space is discrete and highly irregular. (This is not even the worst example!) Needle in a haystack
7
Example: Select a Matmul Implementation
8
Example: Support Vector Classification
Paper: "Statistical Models for Empirical Search-Based Performance Tuning" (International Journal of High Performance Computing Applications, 18 (1), pp , February 2004), Richard Vuduc, James W. Demmel, and Jeff A. Bilmes. Archana Ganapathi, "Predicting and Optimizing System Utilization and Performance via Statistical Machine Learning", Computer Science PhD Thesis, University of California, Berkeley, UCB//EECS
9
Machine Learning in Automatic Performance Tuning
References Statistical Models for Empirical Search-Based Performance Tuning (International Journal of High Performance Computing Applications, 18 (1), pp , February 2004) Richard Vuduc, J. Demmel, and Jeff A. Bilmes. Predicting and Optimizing System Utilization and Performance via Statistical Machine Learning (Computer Science PhD Thesis, University of California, Berkeley. UCB//EECS ) Archana Ganapathi
10
Examples of Automatic Performance Tuning (3)
What do dense BLAS, FFTs, signal processing, MPI reductions have in common? Can do the tuning off-line: once per architecture, algorithm Can take as much time as necessary (hours, a week…) At run-time, algorithm choice may depend only on few parameters Matrix dimension, size of FFT, etc. Can’t always do off-line tuning Algorithm and implementation may strongly depend on data only known at run-time Ex: Sparse matrix nonzero pattern determines both best data structure and implementation of Sparse-matrix-vector-multiplication (SpMV) Part of search for best algorithm must be done (very quickly!) at run-time
11
Source: Accelerator Cavity Design Problem (Ko via Husbands)
12
Linear Programming Matrix
First 20,000 columns of 4k x 1M matrix, rail4284 …
13
A Sparse Matrix You Encounter Every Day
You probably use a sparse matrix every day, albeit indirectly. Here, I show a million x million submatrix of the web connectivity graph, constructed from an archive at the Stanford WebBase.
14
SpMV with Compressed Sparse Row (CSR) Storage
A one-slide crash course on a basic sparse matrix data structure (compressed sparse row, or CSR, format) and a basic kernel: matrix-vector multiply. In CSR: store each non-zero value, but also need an extra index for each non-zero; less storage than dense, but a higher “rate” of storage per non-zero. Sparse-matrix-vector multiply (SpMV): no re-use of A, just of the vectors x and y, taken to be dense. SpMV with CSR storage: indirect/irregular accesses to x! So what you’d like to do is: unroll the k loop, but you need to know how many non-zeros per row; hoist y[i], not a problem absent aliasing; eliminate ind[k], it’s an extra load, but you need to know the non-zero pattern; reuse elements of x, but you need to know the non-zero pattern. Matrix-vector multiply kernel, y(i) ← y(i) + A(i,j)*x(j): for each row i, for k = ptr[i] to ptr[i+1]-1 do y[i] = y[i] + val[k]*x[ind[k]]
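A minimal C sketch of the CSR kernel above (the array names val, ind, ptr follow the slide; the function name and exact signature are illustrative, not from any library):

  /* y = y + A*x, with A in CSR format:
     ptr[i]..ptr[i+1]-1 index the nonzeros of row i,
     val[k] holds the value and ind[k] the column of nonzero k. */
  void spmv_csr(int nrows, const int *ptr, const int *ind,
                const double *val, const double *x, double *y)
  {
      for (int i = 0; i < nrows; i++) {
          double yi = y[i];               /* hoist y[i] (safe absent aliasing) */
          for (int k = ptr[i]; k < ptr[i+1]; k++)
              yi += val[k] * x[ind[k]];   /* indirect, irregular access to x */
          y[i] = yi;
      }
  }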
15
Example: The Difficulty of Tuning
nnz = 1.5 M kernel: SpMV Source: NASA structural analysis problem (FEM discretization). Matrix “raefsky”, provided by NASA via the University of Florida Sparse Matrix Collection
16
Example: The Difficulty of Tuning
nnz = 1.5 M kernel: SpMV Source: NASA structural analysis problem (FEM discretization). Matrix “raefsky”, provided by NASA via the University of Florida Sparse Matrix Collection 8x8 dense substructure
17
Taking advantage of block structure in SpMV
Bottleneck is time to get matrix from memory Only 2 flops for each nonzero in matrix Don’t store each nonzero with index, instead store each nonzero r-by-c block with index Storage drops by up to 2x, if rc >> 1, all 32-bit quantities Time to fetch matrix from memory decreases Change both data structure and algorithm Need to pick r and c Need to change algorithm accordingly In example, is r=c=8 best choice? Minimizes storage, so looks like a good idea…
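A sketch of SpMV with r-by-c register blocking (BCSR), specialized here to 2x2 blocks; the array names b_ptr, b_ind, b_val are illustrative, and a real autotuner would generate one unrolled kernel per (r,c):

  /* y = y + A*x with A in 2x2 block CSR (BCSR).
     Block row I covers rows 2*I and 2*I+1; b_ind[k] is the first column of
     block k (one index per block, not per nonzero); b_val stores each 2x2
     block row-major. */
  void spmv_bcsr_2x2(int nbrows, const int *b_ptr, const int *b_ind,
                     const double *b_val, const double *x, double *y)
  {
      for (int I = 0; I < nbrows; I++) {
          double y0 = y[2*I], y1 = y[2*I + 1];
          for (int k = b_ptr[I]; k < b_ptr[I+1]; k++) {
              const double *b = &b_val[4*k];
              int j = b_ind[k];
              y0 += b[0]*x[j] + b[1]*x[j+1];   /* top row of the block */
              y1 += b[2]*x[j] + b[3]*x[j+1];   /* bottom row of the block */
          }
          y[2*I]     = y0;
          y[2*I + 1] = y1;
      }
  }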
18
Speedups on Itanium 2: The Need for Search
Mflop/s Best: 4x2 [NOTE: This slide has some animation in it.] Experiment: Try all block sizes that divide 8x8—16 implementations in all. Platform: 900 MHz Itanium-2, 3.6 Gflop/s peak speed. Intel v7.0 compiler. Good speedups (4x) but at an unexpected block size (4x2). Figure taken from Im, Yelick, Vuduc, IJHPCA paper, to appear. Reference Mflop/s
19
Register Profile: Itanium 2
(Figure: register profile on Itanium 2; performance ranges from 190 Mflop/s to 1190 Mflop/s across block sizes.)
20
SpMV Performance (Matrix #2): Generation 1
Figure: SpMV performance for Matrix #2 on Generation-1 platforms (best tuned vs. reference, with percent of peak): Power3: 13%, 195 vs. 100 Mflop/s. Power4: 14%, 703 vs. 469 Mflop/s. Itanium 1: 7%, 225 vs. 103 Mflop/s. Itanium 2: 1.1 Gflop/s vs. 276 Mflop/s.
21
Register Profiles: IBM and Intel IA-64
Figure: register profiles (performance over all r x c block sizes) on IBM and Intel IA-64: Power3: 17% of peak, 122 to 252 Mflop/s. Power4: 16%, 459 to 820 Mflop/s. Itanium 1: 8%, 107 to 247 Mflop/s. Itanium 2: 190 Mflop/s to 1.2 Gflop/s.
22
SpMV Performance (Matrix #2): Generation 2
Figure: SpMV performance for Matrix #2 on Generation-2 platforms (best tuned vs. reference, with percent of peak): Ultra 2i: 9%, 63 vs. 35 Mflop/s. Ultra 3: 5%, 109 vs. 53 Mflop/s. Pentium III: 19%, 96 vs. 42 Mflop/s. Pentium III-M: 15%, 120 vs. 58 Mflop/s.
23
Register Profiles: Sun and Intel x86
Figure: register profiles on Sun and Intel x86: Ultra 2i: 11% of peak, 35 to 72 Mflop/s. Ultra 3: 5%, 50 to 90 Mflop/s. Pentium III: 21%, 42 to 108 Mflop/s. Pentium III-M: 15%, 58 to 122 Mflop/s.
24
Another example of tuning challenges
More complicated non-zero structure in general N = 16614 NNZ = 1.1M An enlarged submatrix for “ex11”, from an FEM fluid flow application. Original matrix: n = 16614, nnz = 1.1M Source: UF Sparse Matrix Collection
25
Zoom in to top corner More complicated non-zero structure in general
NNZ = 1.1M An enlarged submatrix for “ex11”, from an FEM fluid flow application. Original matrix: n = 16614, nnz = 1.1M Source: UF Sparse Matrix Collection
26
3x3 blocks look natural, but…
More complicated non-zero structure in general Example: 3x3 blocking Logical grid of 3x3 cells But would lead to lots of “fill-in” Suppose we know we want to use 3x3 blocking. SPARSITY lays a logical 3x3 grid on the matrix…
27
Extra Work Can Improve Efficiency!
More complicated non-zero structure in general Example: 3x3 blocking Logical grid of 3x3 cells Fill-in explicit zeros Unroll 3x3 block multiplies “Fill ratio” = 1.5 On Pentium III: 1.5x speedup! Actual Mflop rate is 1.5 x 1.5 = 2.25x higher The main point is that there can be a considerable pay-off for judicious choice of “fill” (r x c), but that allowing for fill makes the implementation space even more complicated. For this matrix on a Pentium III, we observed a 1.5x speedup, even after filling in an additional 50% explicit zeros. Two effects: (1) Filling in zeros, but eliminating integer indices (overhead). (2) Quality of r x c code produced by compiler may be much better for particular r’s and c’s. In this particular example, overall data structure size stays the same, but 3x3 code is 2x faster than 1x1 code for a dense matrix stored in sparse format.
28
Automatic Register Block Size Selection
Selecting the r x c block size Off-line benchmark Precompute Mflops(r,c) using dense A for each r x c Once per machine/architecture Run-time “search” Sample A to estimate Fill(r,c) for each r x c Run-time heuristic model Choose r, c to minimize time ~ Fill(r,c) / Mflops(r,c) Using this approach, we do choose the right block sizes for the two previous examples.
29
Accurate and Efficient Adaptive Fill Estimation
Idea: Sample matrix Fraction of matrix to sample: s ∈ [0,1] Cost ~ O(s * nnz) Control cost by controlling s Search at run-time: the constant matters! Control s automatically by computing statistical confidence intervals Idea: Monitor variance Cost of tuning Lower bound: convert matrix in 5 to 40 unblocked SpMVs Heuristic: 1 to 11 SpMVs
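A sketch of the run-time heuristic described above, assuming an off-line table mflops[r][c] (measured once per machine on a dense matrix stored in sparse r x c format) and a sampling-based estimate_fill routine; both names are illustrative:

  #include <float.h>

  #define RMAX 8
  #define CMAX 8

  /* Pick the register block size (r,c) minimizing estimated time,
     time(r,c) ~ Fill(r,c) / Mflops(r,c), where Fill(r,c) is the ratio of
     stored entries (true nonzeros + explicit zeros) to true nonzeros. */
  void choose_block_size(const double mflops[RMAX][CMAX],
                         double (*estimate_fill)(int r, int c),
                         int *best_r, int *best_c)
  {
      double best_time = DBL_MAX;
      for (int r = 1; r <= RMAX; r++) {
          for (int c = 1; c <= CMAX; c++) {
              double t = estimate_fill(r, c) / mflops[r-1][c-1];
              if (t < best_time) {
                  best_time = t;
                  *best_r = r;
                  *best_c = c;
              }
          }
      }
  }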
30
Accuracy of the Tuning Heuristics (1/4)
See p. 375 of Vuduc’s thesis for matrices NOTE: “Fair” flops used (ops on explicit zeros not counted as “work”)
31
Accuracy of the Tuning Heuristics (2/4)
32
Accuracy of the Tuning Heuristics (2/4)
DGEMV
33
Upper Bounds on Performance for blocked SpMV
P = (flops) / (time) Flops = 2 * nnz(A) Lower bound on time: two main assumptions 1. Count memory ops only (streaming) 2. Count only compulsory, capacity misses: ignore conflicts Account for line sizes Account for matrix size and nnz Charge minimum access “latency” α_i at the L_i cache and α_mem at memory, e.g., using the Saavedra-Barrera and PMaC MAPS benchmarks Given the large variance in performance depending on the matrix, can we develop a simple performance model that tells us when we’ve done about as well as possible?
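In LaTeX, the bound sketched above (my notation, not the slide's: H_i hits at cache level i with latency α_i, and M compulsory-plus-capacity misses to memory with latency α_mem):

  P_{\mathrm{upper}} = \frac{2\,\mathrm{nnz}(A)}{T_{\mathrm{lower}}},
  \qquad
  T_{\mathrm{lower}} = \sum_i \alpha_i H_i + \alpha_{\mathrm{mem}} M .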
34
Example: L2 Misses on Itanium 2
Upper bound: all source vector accesses miss Lower bound: all source vector accesses hit after first Actual close to lower bound => modeling only compulsory misses ok Misses measured using PAPI [Browne ’00]
35
Example: Bounds on Itanium 2
36
Example: Bounds on Itanium 2
37
Example: Bounds on Itanium 2
38
Summary of Other Sequential Performance Optimizations
Optimizations for SpMV Register blocking (RB): up to 4x over CSR Variable block splitting: 2.1x over CSR, 1.8x over RB Diagonals: 2x over CSR Reordering to create dense structure + splitting: 2x over CSR Symmetry: 2.8x over CSR, 2.6x over RB Cache blocking: 2.8x over CSR Multiple vectors (SpMM): 7x over CSR And combinations… Sparse triangular solve Hybrid sparse/dense data structure: 1.8x over CSR Higher-level kernels A·AT·x, AT·A·x: 4x over CSR, 1.8x over RB A2·x: 2x over CSR, 1.5x over RB [A·x, A2·x, A3·x, .. , Ak·x] Items in bold: see Vuduc’s 455 page dissertation Other items: see collaborations with BeBOPpers 2.1x for variable block splitting is just for the FEM matrices with 2 block sizes
39
Example: Sparse Triangular Factor
Raefsky4 (structural problem) + SuperLU + colmmd N=19779, nnz=12.6 M Dense trailing triangle: dim=2268, 20% of total nz Can be as high as 90+%! 1.8x over CSR
40
Cache Optimizations for AAT*x
Cache-level: interleave multiplication by A and Aᵀ Only fetch A from memory once dot product … “axpy” … Register-level: take aᵢᵀ to be an r×c block row, or diag row
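A minimal CSR sketch of this interleaving for the closely related kernel y = AᵀA·x (for A·Aᵀ·x the same idea applies column-wise): each row aᵢ is fetched from memory once and used both for the dot product tᵢ = aᵢᵀx and for the axpy y += tᵢ·aᵢ. Names are illustrative, not OSKI's API:

  /* y += A^T * (A * x), A in CSR, reading each row of A only once.
     The caller must initialize y (e.g., to zero). */
  void atax_csr(int nrows, const int *ptr, const int *ind,
                const double *val, const double *x, double *y)
  {
      for (int i = 0; i < nrows; i++) {
          double t = 0.0;
          /* dot product: t = a_i^T * x */
          for (int k = ptr[i]; k < ptr[i+1]; k++)
              t += val[k] * x[ind[k]];
          /* axpy: y += t * a_i, reusing the row while it is still in cache */
          for (int k = ptr[i]; k < ptr[i+1]; k++)
              y[ind[k]] += t * val[k];
      }
  }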
41
Example: Combining Optimizations (1/2)
Register blocking, symmetry, multiple (k) vectors Three low-level tuning parameters: r, c, v (Figure: Y += A·X, with an r x c register block of A and v of the k vectors of X processed at a time.)
42
Example: Combining Optimizations (2/2)
Register blocking, symmetry, and multiple vectors [Ben Lee, UCB] Symmetric, blocked, 1 vector Up to 2.6x over nonsymmetric, blocked, 1 vector Symmetric, blocked, k vectors Up to 2.1x over nonsymmetric, blocked, k vectors Up to 7.3x over nonsymmetric, nonblocked, 1 vector Symmetric storage: up to 64.7% savings Benjamin Lee is Asst Prof at Duke, won an NSF Career Award (2011)
43
Motif/Dwarf: Common Computational Methods (Red Hot Blue Cool)
Some people might subdivide these categories, some might combine them; if you are trying to stretch a computer architecture, you can define subcategories as well. Graph Traversal = Probabilistic Models, N-Body = Particle Methods, …
44
Why so much about SpMV? Contents of the “Sparse Motif”
What is “sparse linear algebra”? Direct solvers for Ax=b, least squares Sparse Gaussian elimination, QR for least squares How to choose: crd.lbl.gov/~xiaoye/SuperLU/SparseDirectSurvey.pdf Iterative solvers for Ax=b, least squares, Ax=λx, SVD Used when SpMV only affordable operation on A – Krylov Subspace Methods How to choose For Ax=b: For Ax=λx: What about Multigrid? In overlap of sparse and (un)structured grid motifs – details later
45
How to choose an iterative solver - example
All methods (GMRES, CGS,CG…) depend on SpMV (or variations…) See for details
46
Potential Impact on Applications: Omega3P
Application: accelerator cavity design [Ko] Relevant optimization techniques Symmetric storage Register blocking Reordering, to create more dense blocks Reverse Cuthill-McKee ordering to reduce bandwidth Do Breadth-First-Search, number nodes in reverse order visited Traveling Salesman Problem-based ordering to create blocks Nodes = columns of A Weights(u, v) = no. of nz u, v have in common Tour = ordering of columns Choose maximum weight tour See [Pinar & Heath ’97] 2.1x speedup on Power 4 (caveat: SPMV not bottleneck)
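A compact sketch of the Reverse Cuthill-McKee idea described above (breadth-first search, then number the nodes in reverse order of visit); a production version would also order neighbors by degree and pick a pseudo-peripheral start node. All names are illustrative:

  #include <stdlib.h>

  /* Graph in adjacency-list (CSR-like) form: adj[xadj[v]..xadj[v+1]-1] are
     the neighbors of v. On return, perm[new_index] = old_index gives the
     RCM ordering. Assumes the graph is connected (one BFS reaches all nodes). */
  void rcm_order(int n, const int *xadj, const int *adj, int start, int *perm)
  {
      int *queue = malloc(n * sizeof *queue);
      char *seen = calloc(n, 1);
      int head = 0, tail = 0;

      queue[tail++] = start;            /* BFS from the start node */
      seen[start] = 1;
      while (head < tail) {
          int v = queue[head++];
          for (int k = xadj[v]; k < xadj[v+1]; k++) {
              int w = adj[k];
              if (!seen[w]) { seen[w] = 1; queue[tail++] = w; }
          }
      }
      for (int i = 0; i < tail; i++)    /* reverse the BFS numbering */
          perm[i] = queue[tail - 1 - i];

      free(queue);
      free(seen);
  }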
47
Source: Accelerator Cavity Design Problem (Ko via Husbands)
48
Post-RCM Reordering
49
100x100 Submatrix Along Diagonal
50
“Microscopic” Effect of RCM Reordering
Before: Green + Red After: Green + Blue
51
“Microscopic” Effect of Combined RCM+TSP Reordering
Before: Green + Red After: Green + Blue Idea of TSP: Create graph with one vertex per column Edge weight between vertex i and vertex j is the number of nonzeros in common rows of columns i and j Goal: reorder columns to maximize the number of common nonzeros in adjacent columns (i.e. find “longest path” not “shortest path”; the TSP idea still works) Since symmetric, reorder rows in the same way, thereby also maximizing the number of nonzeros in adjacent rows => bigger blocks
52
(Omega3P) Percentages show percent of peak attained.
53
How do permutations affect algorithms?
A = original matrix, A_P = A with permuted rows and columns Naïve approach: permute x, multiply y = A_P·x, permute y Faster way to solve Ax=b: write A_P = PᵀAP where P is a permutation matrix Solve A_P·x_P = Pᵀb for x_P, using SpMV with A_P, then let x = P·x_P Only need to permute vectors twice, not twice per iteration Faster way to solve Ax=λx: A and A_P have the same eigenvalues, no vectors to permute! A_P·x_P = λ·x_P implies Ax = λx where x = P·x_P Where else do optimizations change higher level algorithms? More later…
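A sketch of that workflow for Ax=b, assuming a permutation array p (new index to old index), a reordered matrix A_P already built, and some iterative solver routine; all names are illustrative:

  #include <stdlib.h>

  /* Solve A x = b via the reordered system A_P x_P = P^T b, then x = P x_P.
     p[i_new] = i_old maps permuted indices to original ones. */
  void solve_with_reordering(int n, const int *p, const void *A_P,
                             const double *b, double *x,
                             void (*iterative_solve)(const void *A,
                                                     const double *rhs,
                                                     double *sol, int n))
  {
      double *b_p = malloc(n * sizeof *b_p);
      double *x_p = malloc(n * sizeof *x_p);

      for (int i = 0; i < n; i++)        /* b_P = P^T b: permute the RHS once */
          b_p[i] = b[p[i]];

      iterative_solve(A_P, b_p, x_p, n); /* every SpMV inside uses A_P; no per-iteration permutes */

      for (int i = 0; i < n; i++)        /* x = P x_P: permute the solution back once */
          x[p[i]] = x_p[i];

      free(b_p);
      free(x_p);
  }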
54
Tuning SpMV on Multicore
55
Multicore SMPs Used Intel Xeon E5345 (Clovertown)
AMD Opteron 2356 (Barcelona) Note superscalar, multithreaded, heterogeneity FSB = Front Side Bus MCH = Memory Controller Hub FB-DIMM = fully buffered dual in-line memory module DDR = double data rate XDR DRAM = extreme data rate dynamic random access memory SRI XBAR = system request interface cross-bar Sun T2+ T5140 (Victoria Falls) IBM QS20 Cell Blade Source: Sam Williams
56
Multicore SMPs Used (Conventional cache-based memory hierarchy)
Intel Xeon E5345 (Clovertown) AMD Opteron 2356 (Barcelona) Sun T2+ T5140 (Victoria Falls) IBM QS20 Cell Blade Source: Sam Williams
57
Multicore SMPs Used (Local store-based memory hierarchy)
Intel Xeon E5345 (Clovertown) AMD Opteron 2356 (Barcelona) Sun T2+ T5140 (Victoria Falls) IBM QS20 Cell Blade Source: Sam Williams
58
Multicore SMPs Used (CMT = Chip-MultiThreading)
Intel Xeon E5345 (Clovertown) AMD Opteron 2356 (Barcelona) Sun T2+ T5140 (Victoria Falls) IBM QS20 Cell Blade Source: Sam Williams
59
Multicore SMPs Used (threads)
Intel Xeon E5345 (Clovertown): 8 threads. AMD Opteron 2356 (Barcelona): 8 threads. Sun T2+ T5140 (Victoria Falls): 128 threads. IBM QS20 Cell Blade: 16 threads (*SPEs only). Source: Sam Williams
60
Multicore SMPs Used (Non-Uniform Memory Access - NUMA)
Intel Xeon E5345 (Clovertown) AMD Opteron 2356 (Barcelona) Sun T2+ T5140 (Victoria Falls) IBM QS20 Cell Blade *SPEs only Source: Sam Williams
61
Multicore SMPs Used (peak double precision flops)
Intel Xeon E5345 (Clovertown): 75 GFlop/s. AMD Opteron 2356 (Barcelona): 74 GFlop/s. Sun T2+ T5140 (Victoria Falls): 19 GFlop/s. IBM QS20 Cell Blade: 29 GFlop/s (*SPEs only). Source: Sam Williams
62
Multicore SMPs Used (Total DRAM bandwidth)
Intel Xeon E5345 (Clovertown): 21 GB/s (read), 10 GB/s (write). AMD Opteron 2356 (Barcelona): 21 GB/s. Sun T2+ T5140 (Victoria Falls): 42 GB/s (read), 21 GB/s (write). IBM QS20 Cell Blade: 51 GB/s. Source: Sam Williams
63
Results from “Auto-tuning Sparse Matrix-Vector Multiplication (SpMV)”
Samuel Williams, Leonid Oliker, Richard Vuduc, John Shalf, Katherine Yelick, James Demmel, "Optimization of Sparse Matrix-Vector Multiplication on Emerging Multicore Platforms", Supercomputing (SC), 2007.
64
Test matrices Suite of 14 matrices
Suite of 14 matrices, all bigger than the caches of our SMPs; we’ll also include a median performance number. Dense: 2K x 2K dense matrix stored in sparse format. Well structured (sorted by nonzeros/row): Protein, FEM / Spheres, FEM / Cantilever, Wind Tunnel, FEM / Harbor, QCD, FEM / Ship, Economics, Epidemiology. Poorly structured (hodgepodge): FEM / Accelerator, Circuit, webbase. Extreme aspect ratio (linear programming): LP. Source: Sam Williams
65
SpMV Parallelization How do we parallelize a matrix-vector multiplication ? Source: Sam Williams
66
SpMV Parallelization How do we parallelize a matrix-vector multiplication ? We could parallelize by columns (sparse matrix times dense sub-vector) and in the worst case simplify the random access challenge but: each thread would need to store a temporary partial sum and we would need to perform a reduction (inter-thread data dependency) thread 0 thread 1 thread 2 thread 3 Source: Sam Williams
68
SpMV Parallelization How do we parallelize a matrix-vector multiplication ? By blocks of rows No inter-thread data dependencies, but random access to x thread 0 thread 1 thread 2 thread 3 Source: Sam Williams
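A sketch of this row-block parallelization, using OpenMP for brevity rather than the pthreads used in the study; the CSR arrays are as in the earlier sketch:

  #include <omp.h>

  /* y = y + A*x, CSR, parallelized over blocks of rows.
     No inter-thread dependencies on y; x is read (irregularly) by all threads. */
  void spmv_csr_parallel(int nrows, const int *ptr, const int *ind,
                         const double *val, const double *x, double *y)
  {
      #pragma omp parallel for schedule(static)
      for (int i = 0; i < nrows; i++) {
          double yi = y[i];
          for (int k = ptr[i]; k < ptr[i+1]; k++)
              yi += val[k] * x[ind[k]];
          y[i] = yi;
      }
  }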
69
SpMV Performance (simple parallelization)
Out-of-the-box SpMV performance on a suite of 14 matrices Simplest solution = parallelization by rows Scalability isn’t great Can we do better? NOTE: on Cell, I’m just using the PPEs (for portability) Recall # threads: 8 on Xeon, 8 on Opteron, 128 on Ultrasparc, 2 for PPEs (16 for SPEs) Legend: Naïve Pthreads, Naïve Source: Sam Williams
70
Summary of Multicore Optimizations
NUMA - Non-Uniform Memory Access: pin submatrices to memories close to the cores assigned to them Prefetch – values, indices, and/or vectors; use exhaustive search on prefetch distance Matrix Compression – not just register blocking (BCSR); 32- or 16-bit indices, Block Coordinate format for submatrices Cache-blocking – 2D partition of matrix, so the needed parts of x, y fit in cache
71
NUMA (Data Locality for Matrices)
On NUMA architectures, all large arrays should be partitioned either explicitly (multiple malloc()’s + affinity) or implicitly (parallelize initialization and rely on first touch) You cannot partition on granularities less than the page size: 512 elements on x86, 2M elements on Niagara For SpMV, partition the matrix and perform multiple malloc()’s Pin submatrices so they are co-located with the cores tasked to process them Source: Sam Williams
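A sketch of the implicit (first-touch) variant mentioned above: each thread initializes the pages it will later use, so the OS places them in that thread's local memory. OpenMP is used here for brevity; the study itself used explicit malloc()'s plus affinity:

  #include <omp.h>
  #include <stdlib.h>

  /* Allocate and first-touch an array so its pages land near the threads that
     will later use them (relies on the OS's first-touch page placement). */
  double *alloc_numa_local(long n)
  {
      double *v = malloc(n * sizeof *v);
      #pragma omp parallel for schedule(static)
      for (long i = 0; i < n; i++)
          v[i] = 0.0;    /* use the same static schedule as the later SpMV loop */
      return v;
  }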
72
NUMA (Data Locality for Matrices)
Source: Sam Williams
73
Prefetch for SpMV SW prefetch injects more MLP into the memory subsystem. Supplement HW prefetchers Can try to prefetch the values, indices, source vector, or any combination thereof In general, should only insert one prefetch per cache line (works best on unrolled code)
for (all rows) {
  y0 = 0.0; y1 = 0.0; y2 = 0.0; y3 = 0.0;
  for (all tiles in this row) {
    PREFETCH(V + i + PFDistance);
    y0 += V[i  ] * X[C[i]];
    y1 += V[i+1] * X[C[i]];
    y2 += V[i+2] * X[C[i]];
    y3 += V[i+3] * X[C[i]];
  }
  y[r+0] = y0; y[r+1] = y1; y[r+2] = y2; y[r+3] = y3;
}
Source: Sam Williams
74
SpMV Performance (NUMA and Software Prefetching)
NUMA-aware allocation is essential on memory-bound NUMA SMPs. Explicit software prefetching can boost bandwidth and change cache replacement policies Cell PPEs are likely latency-limited. used exhaustive search for best prefetch distance Source: Sam Williams
75
Matrix Compression Goal: minimize memory traffic Register blocking
Choose block size to minimize memory traffic Only power-of-2 block sizes Simplifies search, achieves most of the possible speedup Shorter indices 32-bit, or 16-bit if possible Different sparse matrix formats BCSR – Block compressed sparse row Like CSR but with register blocks BCOO – Block coordinate Stores row and column index of each register block Better on very sparse sub-blocks (see cache blocking later)
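A sketch of one compressed submatrix along these lines: block-coordinate (BCOO) storage of 2x2 register blocks with 16-bit indices relative to the thread's starting row and column. The struct layout is illustrative, not the exact format from the paper:

  #include <stdint.h>
  #include <stddef.h>

  /* One thread's submatrix in 2x2 block-coordinate (BCOO) form with
     16-bit indices relative to (row_start, col_start). */
  typedef struct {
      size_t   row_start, col_start;  /* offsets of this submatrix in A */
      size_t   nblocks;               /* number of stored 2x2 blocks */
      uint16_t *brow;                 /* local block-row index of each block */
      uint16_t *bcol;                 /* local block-column index of each block */
      double   *bval;                 /* nblocks*4 values, blocks stored row-major */
  } bcoo16_2x2_t;

  /* y += A_sub * x for one such submatrix. */
  void spmv_bcoo16_2x2(const bcoo16_2x2_t *A, const double *x, double *y)
  {
      for (size_t k = 0; k < A->nblocks; k++) {
          size_t i = A->row_start + 2 * (size_t)A->brow[k];
          size_t j = A->col_start + 2 * (size_t)A->bcol[k];
          const double *b = &A->bval[4*k];
          y[i]   += b[0]*x[j] + b[1]*x[j+1];
          y[i+1] += b[2]*x[j] + b[3]*x[j+1];
      }
  }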
76
ILP/DLP vs Bandwidth In the multicore era, which is the bigger issue?
a lack of ILP/DLP (a major advantage of BCSR) insufficient memory bandwidth per core On many architectures, when running low-arithmetic-intensity kernels, there is so little available memory bandwidth per core that you won’t notice a complete lack of ILP Perhaps we should concentrate on minimizing memory traffic rather than maximizing ILP/DLP Rather than benchmarking every combination, just select the register blocking that minimizes the matrix footprint. Source: Sam Williams
77
Matrix Compression Strategies
Register blocking creates small dense tiles better ILP/DLP reduced overhead per nonzero Let each thread select a unique register blocking In this work, we only considered power-of-two register blocks select the register blocking that minimizes memory traffic Source: Sam Williams
78
Matrix Compression Strategies
Where possible we may encode indices with less than 32 bits We may also select different matrix formats In this work, we considered 16-bit and 32-bit indices (relative to thread’s start) we explored BCSR/BCOO (GCSR in book chapter) Source: Sam Williams
79
SpMV Performance (Matrix Compression)
After maximizing memory bandwidth, the only hope is to minimize memory traffic. Compression: exploit register blocking other formats smaller indices Use a traffic minimization heuristic rather than search Benefit is clearly matrix-dependent. Register blocking enables efficient software prefetching (one per cache line) Source: Sam Williams
80
Cache blocking for SpMV (Data Locality for Vectors)
Store entire submatrices contiguously The columns spanned by each cache block are selected to use same space in cache, i.e. access same number of x(i) TLB blocking is a similar concept but instead of on 8 byte granularities, it uses 4KB granularities thread 0 thread 1 thread 2 thread 3 Source: Sam Williams
81
Cache blocking for SpMV (Data Locality for Vectors)
Store entire submatrices contiguously The columns spanned by each cache block are selected to use same space in cache, i.e. access same number of x(i) TLB blocking is a similar concept but instead of on 8 byte granularities, it uses 4KB granularities Cache-blocking sparse matrices is very different than cache-blocking dense matrices. Rather than changing loop bounds, store entire submatrices contiguously. The columns spanned by each cache block are selected so that all submatrices place the same pressure on the cache i.e. touch the same number of unique source vector cache lines TLB blocking is a similar concept but instead of on 64 byte granularities, it uses 4KB granularities thread 0 thread 1 thread 2 thread 3 Source: Sam Williams
82
Auto-tuned SpMV Performance (cache and TLB blocking)
Fully auto-tuned SpMV performance across the suite of matrices Why do some optimizations work better on some architectures? matrices with naturally small working sets; architectures with giant caches +Cache/LS/TLB Blocking +Matrix Compression +SW Prefetching +NUMA/Affinity Naïve Pthreads Naïve Source: Sam Williams
83
Auto-tuned SpMV Performance (architecture specific optimizations)
Fully auto-tuned SpMV performance across the suite of matrices Included SPE/local store optimized version Why do some optimizations work better on some architectures? +Cache/LS/TLB Blocking +Matrix Compression +SW Prefetching +NUMA/Affinity Naïve Pthreads Naïve Source: Sam Williams
84
Auto-tuned SpMV Performance (max speedup)
Fully auto-tuned SpMV performance across the suite of matrices Included SPE/local store optimized version Why do some optimizations work better on some architectures? 2.9x 35x +Cache/LS/TLB Blocking +Matrix Compression +SW Prefetching +NUMA/Affinity Naïve Pthreads Naïve Source: Sam Williams
85
Optimized Sparse Kernel Interface - pOSKI
Provides sparse kernels automatically tuned for user’s matrix & machine BLAS-style functionality: SpMV, Ax & ATy Hides complexity of run-time tuning Based on OSKI – bebop.cs.berkeley.edu/oski Autotuner for sequential sparse matrix operations: SpMV (Ax and ATx), ATAx, solve sparse triangular systems, … So far pOSKI only does multicore optimizations of SpMV Class projects! Up to 4.5x faster SpMV (Ax) on Intel Sandy Bridge E On-going work by the Berkeley Benchmarking and Optimization (BeBop) group Library interface defines low-level primitives in the style of the Sparse BLAS: sparse matrix-vector multiply, sparse triangular solve. Matrix-vector multiply touches each matrix element only once, whereas our locality-aware kernels can reuse these elements. The BeBOP library includes these kernels: (1) simultaneous computation of A*x, AT*z (2) AT*A*x (3) Ak*x, k is non-negative integer Unlike tuning in the dense case, sparse tuning must occur at run-time since the matrix is unknown until then. Here, the “standard implementation” stores the matrix in compressed sparse row (CSR) format with the kernel coded in C or Fortran compiled with full optimizations. To maximize the impact of our software, we are implementing a new “automatically tuned” matrix type in PETSc. Most PETSc users will be able to use our library with little or no modifications to their source code. The stand-alone version of our library is C and Fortran-callable. “Advanced users” means users willing to program at the level of the BLAS.
86
Optimizations in pOSKI, so far
Fully automatic heuristics for Sparse matrix-vector multiply (Ax, ATx) Register-level blocking, Thread-level blocking SIMD, software prefetching, software pipelining, loop unrolling NUMA-aware allocations “Plug-in” extensibility Very advanced users may write their own heuristics, create new data structures/code variants and dynamically add them to the system Other optimizations under development Cache-level blocking, Reordering (RCM, TSP), variable block structure, index compressing, Symmetric storage, etc. The automatic heuristics have been discussed in published papers available on the BeBOP home page. The BeBOP group has studied a number of other optimizations. Code for these optimizations will be compiled and available to the user. (See “BeBOP-Lua” in our design document.)
87
How the pOSKI Tunes (Overview)
(Figure: pOSKI tuning flow. Library install-time (offline): 1. Build for target architecture; 2. Benchmark a sample dense matrix; producing generated code variants, and benchmark data & selected code variants. Application run-time: the user’s matrix and hints are 1. Partitioned into per-thread submatrices; 2. Models are evaluated, using empirical & heuristic search, the benchmark data, a workload from program monitoring, and tuning history; 3. A data structure & code are selected, with (r,c) = register block size, (d) = prefetching distance, (imp) = SIMD implementation. To user: matrix handle for kernel calls.)
Diagram key: Ovals == actions taken by library; Cylinders == data stored by or with the library; Solid arrows == control flow; Dashed arrows == “data” flow.
:: At library-installation time :: 1. “Build”: Pre-compile source code. Possible code variants are stored in dynamic libraries (“Code Variants”). 2. “Benchmark”: The installation process measures the speed of the possible code variants and stores the results (“Benchmark Data”). The entire build process uses standard & portable GNU configure.
:: At run-time, from within the user’s application :: 1. “Matrix from user”: The user passes her pre-assembled matrix in a standard format like compressed sparse row (CSR) or column (CSC), and the library 2. “Evaluate models”: The library contains a list of “heuristic models.” Each model is actually a procedure that analyzes the matrix, workload, and benchmarking data and chooses a data structure & code it thinks is the best for that matrix and workload. A model is typically specialized to predict tuning parameters for a particular kernel & class of data structures (e.g., predict the block size for register blocked matvec). However, higher-level models (meta-models) that combine several heuristics or predict over several possible data structures and kernels are also possible. In the initial implementation, “Evaluate Models” does the following: * Based on the workload, decide on an allowable amount of time for tuning (a “tuning budget”) * WHILE there is time left for tuning DO - Select and evaluate a model to get best predicted performance & corresponding tuning parameters
88
How the pOSKI Tunes (Overview)
At library build/install-time Generate code variants Code generator (Python) generates code variants for various implementations Collect benchmark data Measures and records speed of possible sparse data structure and code variants on target architecture Select best code variants & benchmark data prefetching distance, SIMD implementation Installation process uses standard, portable GNU AutoTools At run-time Library “tunes” using heuristic models Models analyze user’s matrix & benchmark data to choose optimized data structure and code User may re-collect benchmark data with user’s sparse matrix (under development) Non-trivial tuning cost: up to ~40 mat-vecs Library limits the time it spends tuning based on estimated workload provided by user or inferred by library User may reduce cost by saving tuning results for application on future runs with same or similar matrix (under development) The library build & installation process automatically benchmarks the target machine to determine how fast different kinds of data structures and kernel implementations can potentially run. These variants are pre-determined and generated using specialized code generators. Run-time tuning has a non-trivial cost, and the amount of time the library allows itself for tuning depends on the expected workload (e.g., how frequently matrix-vector multiply will be called). This workload may be specified by the user, or may be inferred by the library which transparently monitors how often different kernels are called (self-profiling).
89
How to Call pOSKI: Basic Usage
May gradually migrate existing apps Step 1: “Wrap” existing data structures Step 2: Make BLAS-like kernel calls
int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …; /* Let x and y be two dense vectors */
/* Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  my_matmult( ptr, ind, val, α, x, β, y );
OSKI is designed to allow users to migrate their applications gradually to use OSKI. The code fragment shown above represents a sample C application which already has a CSR matrix implementation (ptr, ind, val) and dense vectors (x, y), initialized in some way. The application computes matrix-vector multiply 500 times in a loop. Indeed, OSKI does not provide a way to construct/assemble a matrix, so a user always has to create her matrix in one of OSKI’s supported base formats (CSR, CSC) first.
90
How to Call pOSKI: Basic Usage
May gradually migrate existing apps Step 1: “Wrap” existing data structures Step 2: Make BLAS-like kernel calls
int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …; /* Let x and y be two dense vectors */
/* Step 1: Create a default pOSKI thread object */
poski_threadarg_t *poski_thread = poski_InitThread();
/* Step 2: Create pOSKI wrappers around this data */
poski_mat_t A_tunable = poski_CreateMatCSR(ptr, ind, val, nrows, ncols, nnz, SHARE_INPUTMAT, poski_thread, NULL, …);
poski_vec_t x_view = poski_CreateVec(x, ncols, UNIT_STRIDE, NULL);
poski_vec_t y_view = poski_CreateVec(y, nrows, UNIT_STRIDE, NULL);
/* Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  my_matmult( ptr, ind, val, α, x, β, y );
To use OSKI, the user should first provide OSKI with the semantics of her existing data structures, here shown by three calls for the three data structures (the matrix + the two vectors). OSKI returns handles to these objects, and in this example, will “share” these objects with the user. The key point here is that if the user only introduced this change, the application would execute just as it did before: OSKI does not modify the user’s data structures unless the user asks it to via OSKI’s “set non-zero value” routines. The call to “oski_CreateMatCSR” defines the semantics of this matrix, e.g., its logical dimensions. The user specifies other semantic information (e.g., 0-based indices vs. 1-based indices, symmetry with half or full storage, Hermitian-ness, lower/upper triangular shape, implicit unit diagonal) where the “...” are shown. The calls to “oski_CreateVecView” create a wrapper around the user’s data structure. Vectors may have non-unit strides, for example. Moreover, there is an “oski_CreateMultiVecView” operation which will wrap a dense matrix data structure. In that case, the user can specify whether she has used row vs. column major storage and what the stride is. Recursive dense data structures are not supported at this time.
91
How to Call pOSKI: Basic Usage
May gradually migrate existing apps Step 1: “Wrap” existing data structures Step 2: Make BLAS-like kernel calls
int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …; /* Let x and y be two dense vectors */
/* Step 1: Create a default pOSKI thread object */
poski_threadarg_t *poski_thread = poski_InitThread();
/* Step 2: Create pOSKI wrappers around this data */
poski_mat_t A_tunable = poski_CreateMatCSR(ptr, ind, val, nrows, ncols, nnz, SHARE_INPUTMAT, poski_thread, NULL, …);
poski_vec_t x_view = poski_CreateVec(x, ncols, UNIT_STRIDE, NULL);
poski_vec_t y_view = poski_CreateVec(y, nrows, UNIT_STRIDE, NULL);
/* Step 3: Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  poski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);
To use OSKI, the user should first provide OSKI with the semantics of her existing data structures, here shown by three calls for the three data structures (the matrix + the two vectors). OSKI returns handles to these objects, and in this example, will “share” these objects with the user. The key point here is that if the user only introduced this change, the application would execute just as it did before: OSKI does not modify the user’s data structures unless the user asks it to via OSKI’s “set non-zero value” routines. The call to “oski_CreateMatCSR” defines the semantics of this matrix, e.g., its logical dimensions. The user specifies other semantic information (e.g., 0-based indices vs. 1-based indices, symmetry with half or full storage, Hermitian-ness, lower/upper triangular shape, implicit unit diagonal) where the “...” are shown. The calls to “oski_CreateVecView” create a wrapper around the user’s data structure. Vectors may have non-unit strides, for example. Moreover, there is an “oski_CreateMultiVecView” operation which will wrap a dense matrix data structure. In that case, the user can specify whether she has used row vs. column major storage and what the stride is. Recursive dense data structures are not supported at this time.
92
How to Call pOSKI: Tune with Explicit Hints
User calls “tune” routine (optional) May provide explicit tuning hints
poski_mat_t A_tunable = poski_CreateMatCSR( … );
/* … */
/* Tell pOSKI we will call SpMV 500 times (workload hint) */
poski_TuneHint_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view, 500);
/* Tell pOSKI we think the matrix has 8x8 blocks (structural hint) */
poski_TuneHint_Structure(A_tunable, HINT_SINGLE_BLOCKSIZE, 8, 8);
/* Ask pOSKI to tune */
poski_TuneMat(A_tunable);
for( i = 0; i < 500; i++ )
  poski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);
This slide shows one style of tuning in which the user provides explicit hints. These hints are optional. The call to “poski_TuneMat” marks the point in program execution at which the application might incur the ~40-SpMV tuning cost. You can substitute symbolic vectors (defined by library constants) instead of actual vectors (x_view, y_view).
93
How to Call pOSKI: Implicit Tuning
Ask library to infer workload (optional) Library profiles all kernel calls May periodically re-tune
What if you don’t have any hints, or don’t want to provide any? It’s OK to call “tune” repeatedly, as done in this example. On each call to “tune”, the library checks whether there is heavy use of particular kernels, in which case it might decide to tune. In cases when it does not, the call is essentially a no-op. This mode also provides a way to re-tune periodically, though our current implementation does not do that.
poski_mat_t A_tunable = poski_CreateMatCSR( … );
/* … */
for( i = 0; i < 500; i++ ) {
  poski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);
  poski_TuneMat(A_tunable); /* Ask pOSKI to tune */
}
94
How to Call pOSKI: Modify a thread object
Ask library to infer thread hints (optional) Number of threads Threading model (PthreadPool, Pthread, OpenMP) Default: PthreadPool, #threads = #available cores on system
poski_threadarg_t *poski_thread = poski_InitThread();
/* Ask pOSKI to use 8 threads with OpenMP */
poski_ThreadHints(poski_thread, NULL, OPENMP, 8);
poski_mat_t A_tunable = poski_CreateMatCSR( …, poski_thread, … );
poski_MatMult( … );
95
How to Call pOSKI: Modify a partition object
Ask library to infer partition hints (optional) Number of partitions: #partitions = k×#threads Partitioning model (OneD, SemiOneD, TwoD) Default: OneD, #partitions = #threads
Matrix:
/* Ask pOSKI to partition into 16 sub-matrices using SemiOneD */
poski_partitionarg_t *pmat = poski_PartitionMatHints(SemiOneD, 16);
poski_mat_t A_tunable = poski_CreateMatCSR( …, pmat, … );
Vector:
/* Ask pOSKI to partition a vector as the SpMV input vector based on A_tunable */
poski_partitionVec_t *pvec = poski_PartitionVecHints(A_tunable, KERNEL_MatMult, OP_NORMAL, INPUTVEC);
poski_vec_t x_view = poski_CreateVec( …, pvec);
96
Performance on Intel Sandy Bridge E
Jaketown: 3.3 GHz #Cores: 6 (2 threads per core), L3: 15MB pOSKI SpMV (Ax) with double-precision floating point MKL Sparse BLAS Level 2: mkl_dcsrmv() Speedups shown in the figure: 4.8x, 3.2x, 4.5x, 2.9x, 4.1x, 4.7x
97
Is tuning SpMV all we can do?
Iterative methods all depend on it But speedups are limited Just 2 flops per nonzero Communication costs dominate Can we beat this bottleneck? Need to look at next level in stack: What do algorithms that use SpMV do? Can we reorganize them to avoid communication? Only way significant speedups will be possible
98
Tuning Higher Level Algorithms than SpMV
We almost always do many SpMVs, not just one “Krylov Subspace Methods” (KSMs) for Ax=b, Ax = λx Conjugate Gradients, GMRES, Lanczos, … Do a sequence of k SpMVs to get vectors [x1 , … , xk] Find best solution x as linear combination of [x1 , … , xk] Main cost is k SpMVs Since communication usually dominates, can we do better? Goal: make communication cost independent of k Parallel case: O(log P) messages, not O(k log P) - optimal same bandwidth as before Sequential case: O(1) messages and bandwidth, not O(k) - optimal Achievable when matrix partitionable with low surface-to-volume ratio
99
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Example: A tridiagonal, n=32, k=3 Works for any “well-partitioned” A (Figure: levels x, A·x, A2·x, A3·x over 32 columns.) Reference: C. J. Pfeifer, “Data flow and storage allocation for the PDQ-5 program on the Philco-2000,” Comm. ACM 6, No. 7 (1963); referred to in a 1993 paper by Leiserson, Rao, and Toledo on blocking covers
100
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Example: A tridiagonal, n=32, k=3 A3·x A2·x A·x x … … 32
101
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Example: A tridiagonal, n=32, k=3 A3·x A2·x A·x x … … 32
102
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Example: A tridiagonal, n=32, k=3 A3·x A2·x A·x x … … 32
103
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Example: A tridiagonal, n=32, k=3 A3·x A2·x A·x x … 32 …
104
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Example: A tridiagonal, n=32, k=3 A3·x A2·x A·x x … … 32
105
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Sequential Algorithm Example: A tridiagonal, n=32, k=3 Step 1 A3·x A2·x A·x x … … 32
106
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Sequential Algorithm Example: A tridiagonal, n=32, k=3 Step 1 Step 2 A3·x A2·x A·x x … … 32
107
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Sequential Algorithm Example: A tridiagonal, n=32, k=3 Step 1 Step 2 Step 3 A3·x A2·x A·x x … … 32
108
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Sequential Algorithm Example: A tridiagonal, n=32, k=3 Step 1 Step 2 Step 3 Step 4 A3·x A2·x A·x x … … 32
109
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Parallel Algorithm Example: A tridiagonal, n=32, k=3 Proc 1 Proc 2 Proc 3 Proc 4 A3·x A2·x A·x x … … 32
110
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Parallel Algorithm Example: A tridiagonal, n=32, k=3 Each processor communicates once with neighbors Proc 1 Proc 2 Proc 3 Proc 4 A3·x A2·x A·x x … … 32
111
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Parallel Algorithm Example: A tridiagonal, n=32, k=3 Proc 1 A3·x A2·x A·x x … … 32
112
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Parallel Algorithm Example: A tridiagonal, n=32, k=3 Proc 2 A3·x A2·x A·x x … … 32
113
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Parallel Algorithm Example: A tridiagonal, n=32, k=3 Proc 3 A3·x A2·x A·x x … … 32
114
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Parallel Algorithm Example: A tridiagonal, n=32, k=3 Proc 4 A3·x A2·x A·x x … … 32
115
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Parallel Algorithm Example: A tridiagonal, n=32, k=3 Each processor works on (overlapping) trapezoid Proc 1 Proc 2 Proc 3 Proc 4 A3·x A2·x A·x x … … 32
116
Same idea works for general sparse matrices
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx] Same idea works for general sparse matrices Simple block-row partitioning (hyper)graph partitioning Top-to-bottom processing Traveling Salesman Problem Red needed for k=1 Green also needed for k=2 Blue also needed for k=3
117
Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A2x, …, Akx]
Replace k iterations of y = Ax with [Ax, A2x, …, Akx] Parallel Algorithm Example: A tridiagonal, n=32, k=3 Entries in overlapping regions (triangles) computed redundantly Proc 1 Proc 2 Proc 3 Proc 4 A3·x A2·x A·x x … … 32
118
Locally Dependent Entries for [x,Ax,…,A8x], A tridiagonal
2 processors (Proc 1, Proc 2); levels x, Ax, A2x, A3x, A4x, A5x, A6x, A7x, A8x Can be computed without communication k=8-fold reuse of A
119
Remotely Dependent Entries for [x,Ax,…,A8x], A tridiagonal
2 processors (Proc 1, Proc 2); levels x, Ax, A2x, …, A8x One message to get the data needed to compute remotely dependent entries, not k=8 messages Minimizes number of messages = latency cost Price: redundant work (“surface/volume ratio”)
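A sketch of one processor's local work in the parallel matrix powers kernel for this tridiagonal example: after one exchange brings in k ghost entries of x from each neighbor, all k levels are computed locally, redundantly recomputing entries in the overlap. The names, and the assumption of a constant-stencil interior block, are illustrative:

  /* Local part of [x, Ax, ..., A^k x] for A = tridiag(a, b, c), interior block.
     xext holds this processor's n_local entries of x plus k ghost entries on
     each side (already received, one message per neighbor). out[j] has length
     n_local + 2*k; only its middle n_local entries are "owned" at level j = k,
     the rest is the redundant "surface" work. */
  void powers_tridiag_local(int n_local, int k, double a, double b, double c,
                            const double *xext, double **out)
  {
      int next = n_local + 2*k;                 /* extended block length */
      for (int i = 0; i < next; i++)
          out[0][i] = xext[i];
      for (int j = 1; j <= k; j++) {
          /* each level loses one computable entry per side of the block */
          for (int i = j; i < next - j; i++)
              out[j][i] = a*out[j-1][i-1] + b*out[j-1][i] + c*out[j-1][i+1];
      }
  }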
120
Fewer Remotely Dependent Entries for [x,Ax,…,A8x], A tridiagonal
2 processors (Proc 1, Proc 2); levels x, Ax, A2x, …, A8x Reduce redundant work by half
121
Remotely Dependent Entries for [x,Ax, A2x,A3x], 2D Laplacian
122
Remotely Dependent Entries for [x,Ax,A2x,A3x],
A irregular, multiple processors Red needed for k=1 Green also needed for k=2 Blue also needed for k=3
123
Speedups on Intel Clovertown (8 core)
124
Performance Results Measured Multicore (Clovertown) speedups up to 6.4x Measured/Modeled sequential OOC speedup up to 3x Modeled parallel Petascale speedup up to 6.9x Modeled parallel Grid speedup up to 22x Sequential speedup due to bandwidth, works for many problem sizes Parallel speedup due to latency, works for smaller problems on many processors
125
Avoiding Communication in Iterative Linear Algebra
k-steps of typical iterative solver for sparse Ax=b or Ax=λx Does k SpMVs with starting vector Finds “best” solution among all linear combinations of these k+1 vectors Many such “Krylov Subspace Methods” Conjugate Gradients, GMRES, Lanczos, Arnoldi, … Goal: minimize communication in Krylov Subspace Methods Assume matrix “well-partitioned,” with modest surface-to-volume ratio Parallel implementation Conventional: O(k log p) messages, because k calls to SpMV New: O(log p) messages - optimal Serial implementation Conventional: O(k) moves of data from slow to fast memory New: O(1) moves of data – optimal Lots of speed up possible (modeled and measured) Price: some redundant computation Much prior work See Mark Hoemmen’s PhD thesis at bebop.cs.berkeley.edu Prior work CG: [van Rosendale, 83], [Chronopoulos and Gear, 89] GMRES: [Walker, 88], [Joubert and Carey, 92], [Bai et al., 94]
126
Minimizing Communication of GMRES to solve Ax=b
GMRES: find x in span{b, Ab, …, Akb} minimizing ||Ax-b||2 Cost of k steps of standard GMRES vs. new GMRES
Standard GMRES:
  for i = 1 to k
    w = A · v(i-1)
    MGS(w, v(0), …, v(i-1))
    update v(i), H
  endfor
  solve LSQ problem with H
  Sequential: #words_moved = O(k·nnz) from SpMV + O(k2·n) from MGS
  Parallel: #messages = O(k) from SpMV + O(k2·log p) from MGS
Communication-avoiding GMRES:
  W = [ v, Av, A2v, …, Akv ]
  [Q,R] = TSQR(W) … “Tall Skinny QR”
  Build H from R, solve LSQ problem
  Sequential: #words_moved = O(nnz) from SpMV + O(k·n) from TSQR
  Parallel: #messages = O(1) from computing W + O(log p) from TSQR
Oops – W from power method, precision lost!
127
“Monomial” basis [Ax,…,Akx]
fails to converge A different polynomial basis does converge
128
Speed ups of GMRES on 8-core Intel Clovertown Requires co-tuning kernels [MHDY09]
129
President Obama cites Communication-Avoiding Algorithms in the FY 2012 Department of Energy Budget Request to Congress: “New Algorithm Improves Performance and Accuracy on Extreme-Scale Computing Systems. On modern computer architectures, communication between processors takes longer than the performance of a floating point arithmetic operation by a given processor. ASCR researchers have developed a new method, derived from commonly used linear algebra methods, to minimize communications between processors and the memory hierarchy, by reformulating the communication patterns specified within the algorithm. This method has been implemented in the TRILINOS framework, a highly-regarded suite of software, which provides functionality for researchers around the world to solve large scale, complex multi-physics problems.” FY 2010 Congressional Budget, Volume 4, FY2010 Accomplishments, Advanced Scientific Computing Research (ASCR), pages CA-GMRES (Hoemmen, Mohiyuddin, Yelick, JD) “Tall-Skinny” QR (Grigori, Hoemmen, Langou, JD)
130
Tuning space for Krylov Methods
Classifications of sparse operators for avoiding communication Explicit indices or nonzero entries cause most communication, along with vectors Ex: with stencils (all implicit), all communication is for vectors
Matrix classes (nonzero entries vs. indices):
Explicit entries (O(nnz)), explicit indices (O(nnz)): CSR and variations
Explicit entries, implicit indices (o(nnz)): vision, climate, AMR, …
Implicit entries (o(nnz)), explicit indices: graph Laplacian
Implicit entries, implicit indices: stencils
Operations: [x, Ax, A2x,…, Akx] or [x, p1(A)x, p2(A)x, …, pk(A)x] Number of columns in x [x, Ax, A2x,…, Akx] and [y, ATy, (AT)2y,…, (AT)ky], or [y, ATAy, (ATA)2y,…, (ATA)ky], return all vectors or just the last one Cotuning and/or interleaving: W = [x, Ax, A2x,…, Akx] and {TSQR(W) or WTW or …} Ditto, but throw away W Preconditioned versions
131
Possible Class Projects
Come to BEBOP meetings (T 12:30 – 2, 380 Soda) Experiment with SpMV on different architectures Which optimizations are most effective? Try to speed up particular matrices of interest Data mining, “bottom solver” from AMR Explore tuning space [x,Ax,…,Akx] kernel Different matrix representations (last slide) New Krylov subspace methods, preconditioning Experiment with new frameworks (SPF) More details available
132
Extra Slides
133
Extensions Other Krylov methods Preconditioning
Arnoldi, CG, Lanczos, … Preconditioning Solve MAx=Mb where preconditioning matrix M chosen to make system “easier” M approximates A-1 somehow, but cheaply, to accelerate convergence Cheap as long as contributions from “distant” parts of the system can be compressed Sparsity Low rank No implementations yet (class projects!)
134
Design Space for [x,Ax,…,Akx] (1/3)
Mathematical Operation How many vectors to keep All: Krylov Subspace Methods Keep last vector Akx only (Jacobi, Gauss-Seidel) Improving conditioning of basis W = [x, p1(A)x, p2(A)x, …, pk(A)x] pi(A) = degree-i polynomial chosen to reduce cond(W) Preconditioning (Ay=b becomes MAy=Mb) [x, Ax, MAx, AMAx, MAMAx, …, (MA)kx]
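For instance (my illustration, not from the slides), a Newton-type basis replaces the monomial basis using shifts θ_j, often taken to be Ritz values of A; in LaTeX:

  % Monomial basis, ill-conditioned for large k:
  W_{\mathrm{mono}} = [\,x,\; Ax,\; A^{2}x,\; \dots,\; A^{k}x\,]
  % Newton-type basis with shifts \theta_1, ..., \theta_k (e.g., Ritz values of A):
  p_0(A) = I, \qquad p_i(A) = \prod_{j=1}^{i} (A - \theta_j I), \qquad
  W = [\,x,\; p_1(A)x,\; \dots,\; p_k(A)x\,]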
135
Design Space for [x,Ax,…,Akx] (2/3)
Representation of sparse A Zero pattern may be explicit or implicit Nonzero entries may be explicit or implicit Implicit saves memory, communication
Explicit pattern, explicit nonzeros: general sparse matrix
Implicit pattern, explicit nonzeros: image segmentation
Explicit pattern, implicit nonzeros: Laplacian(graph)
Implicit pattern, implicit nonzeros: Multigrid (AMR), “stencil matrix”, Ex: tridiag(-1,2,-1)
Representation of dense preconditioners M Low rank off-diagonal blocks (semiseparable)
136
Design Space for [x,Ax,…,Akx] (3/3)
Parallel implementation From simple indexing, with redundant flops surface/volume ratio To complicated indexing, with fewer redundant flops Sequential implementation Depends on whether vectors fit in fast memory Reordering rows, columns of A Important in parallel and sequential cases Can be reduced to pair of Traveling Salesmen Problems Plus all the optimizations for one SpMV!
137
Summary Communication-Avoiding Linear Algebra (CALA)
Lots of related work Some going back to 1960’s Reports discuss this comprehensively, not here Our contributions Several new algorithms, improvements on old ones Preconditioning Unifying parallel and sequential approaches to avoiding communication Time for these algorithms has come, because of growing communication costs Why avoid communication just for linear algebra motifs?
138
Optimized Sparse Kernel Interface - OSKI
Provides sparse kernels automatically tuned for user’s matrix & machine BLAS-style functionality: SpMV, Ax & ATy, TrSV Hides complexity of run-time tuning Includes new, faster locality-aware kernels: ATAx, Akx Faster than standard implementations Up to 4x faster matvec, 1.8x trisolve, 4x ATA*x For “advanced” users & solver library writers Available as stand-alone library (OSKI 1.0.1h, 6/07) Available as PETSc extension (OSKI-PETSc .1d, 3/06) Bebop.cs.berkeley.edu/oski Under development: pOSKI for multicore Library interface defines low-level primitives in the style of the Sparse BLAS: sparse matrix-vector multiply, sparse triangular solve. Matrix-vector multiply touches each matrix element only once, whereas our locality-aware kernels can reuse these elements. The BeBOP library includes these kernels: (1) simultaneous computation of A*x, AT*z (2) AT*A*x (3) Ak*x, k is non-negative integer Unlike tuning in the dense case, sparse tuning must occur at run-time since the matrix is unknown until then. Here, the “standard implementation” stores the matrix in compressed sparse row (CSR) format with the kernel coded in C or Fortran compiled with full optimizations. To maximize the impact of our software, we are implementing a new “automatically tuned” matrix type in PETSc. Most PETSc users will be able to use our library with little or no modifications to their source code. The stand-alone version of our library is C and Fortran-callable. “Advanced users” means users willing to program at the level of the BLAS.
139
How the OSKI Tunes (Overview)
(Figure: OSKI tuning flow. Library install-time (offline): 1. Build for target architecture; 2. Benchmark; producing generated code variants and benchmark data. Application run-time: the user’s matrix, a workload from program monitoring, and tuning history feed 1. Evaluate models (heuristic models); 2. Select data structure & code. To user: matrix handle for kernel calls. Extensibility: advanced users may write & dynamically add “code variants” and “heuristic models” to the system.)
Diagram key: Ovals == actions taken by library; Cylinders == data stored by or with the library; Solid arrows == control flow; Dashed arrows == “data” flow.
:: At library-installation time :: 1. “Build”: Pre-compile source code. Possible code variants are stored in dynamic libraries (“Code Variants”). 2. “Benchmark”: The installation process measures the speed of the possible code variants and stores the results (“Benchmark Data”). The entire build process uses standard & portable GNU configure.
:: At run-time, from within the user’s application :: 1. “Matrix from user”: The user passes her pre-assembled matrix in a standard format like compressed sparse row (CSR) or column (CSC), and the library 2. “Evaluate models”: The library contains a list of “heuristic models.” Each model is actually a procedure that analyzes the matrix, workload, and benchmarking data and chooses a data structure & code it thinks is the best for that matrix and workload. A model is typically specialized to predict tuning parameters for a particular kernel & class of data structures (e.g., predict the block size for register blocked matvec). However, higher-level models (meta-models) that combine several heuristics or predict over several possible data structures and kernels are also possible. In the initial implementation, “Evaluate Models” does the following: * Based on the workload, decide on an allowable amount of time for tuning (a “tuning budget”) * WHILE there is time left for tuning DO - Select and evaluate a model to get best predicted performance & corresponding tuning parameters
140
How the OSKI Tunes (Overview)
At library build/install-time Pre-generate and compile code variants into dynamic libraries Collect benchmark data Measures and records speed of possible sparse data structure and code variants on target architecture Installation process uses standard, portable GNU AutoTools At run-time Library “tunes” using heuristic models Models analyze user’s matrix & benchmark data to choose optimized data structure and code Non-trivial tuning cost: up to ~40 mat-vecs Library limits the time it spends tuning based on estimated workload provided by user or inferred by library User may reduce cost by saving tuning results for application on future runs with same or similar matrix The library build & installation process automatically benchmarks the target machine to determine how fast different kinds of data structures and kernel implementations can potentially run. These variants are pre-determined and generated using specialized code generators. Run-time tuning has a non-trivial cost, and the amount of time the library allows itself for tuning depends on the expected workload (e.g., how frequently matrix-vector multiply will be called). This workload may be specified by the user, or may be inferred by the library which transparently monitors how often different kernels are called (self-profiling).
141
Optimizations in OSKI, so far
Fully automatic heuristics for Sparse matrix-vector multiply Register-level blocking Register-level blocking + symmetry + multiple vectors Cache-level blocking Sparse triangular solve with register-level blocking and “switch-to-dense” optimization Sparse ATA*x with register-level blocking User may select other optimizations manually Diagonal storage optimizations, reordering, splitting; tiled matrix powers kernel (Ak*x) All available in dynamic libraries Accessible via high-level embedded script language “Plug-in” extensibility Very advanced users may write their own heuristics, create new data structures/code variants and dynamically add them to the system pOSKI under construction Optimizations for multicore – more later The automatic heuristics have been discussed in published papers available on the BeBOP home page. The BeBOP group has studied a number of other optimizations. Code for these optimizations will be compiled and available to the user. (See “BeBOP-Lua” in our design document.)
142
How to Call OSKI: Basic Usage
May gradually migrate existing apps. Step 1: "Wrap" existing data structures. Step 2: Make BLAS-like kernel calls.

int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …; /* Let x and y be two dense vectors */
/* Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  my_matmult( ptr, ind, val, α, x, β, y );

OSKI is designed to allow users to migrate their applications gradually to OSKI. The code fragment above represents a sample C application which already has a CSR matrix implementation (ptr, ind, val) and dense vectors (x, y), initialized in some way. The application computes matrix-vector multiply 500 times in a loop. Note that OSKI does not provide a way to construct/assemble a matrix, so a user always has to create her matrix in one of OSKI's supported base formats (CSR, CSC) first.
143
How to Call OSKI: Basic Usage
May gradually migrate existing apps. Step 1: "Wrap" existing data structures. Step 2: Make BLAS-like kernel calls.

int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …; /* Let x and y be two dense vectors */
/* Step 1: Create OSKI wrappers around this data */
oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows, num_cols, SHARE_INPUTMAT, …);
oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE);
oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE);
/* Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  my_matmult( ptr, ind, val, α, x, β, y );

To use OSKI, the user should first provide OSKI with the semantics of her existing data structures, here shown by three calls for the three data structures (the matrix plus the two vectors). OSKI returns handles to these objects and, in this example, will "share" them with the user. The key point is that if the user only introduced this change, the application would execute just as it did before: OSKI does not modify the user's data structures unless the user asks it to via OSKI's "set non-zero value" routines. The call to oski_CreateMatCSR defines the semantics of this matrix, e.g., its logical dimensions. The user specifies other semantic information (e.g., 0-based vs. 1-based indices, symmetry with half or full storage, Hermitian-ness, lower/upper triangular shape, implicit unit diagonal) where the "…" is shown. The calls to oski_CreateVecView create wrappers around the user's vectors. Vectors may have non-unit strides, for example. Moreover, there is an oski_CreateMultiVecView operation which wraps a dense matrix data structure; in that case, the user can specify row- vs. column-major storage and the stride. Recursive dense data structures are not supported at this time.
144
How to Call OSKI: Basic Usage
May gradually migrate existing apps. Step 1: "Wrap" existing data structures. Step 2: Make BLAS-like kernel calls.

int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …; /* Let x and y be two dense vectors */
/* Step 1: Create OSKI wrappers around this data */
oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows, num_cols, SHARE_INPUTMAT, …);
oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE);
oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE);
/* Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view); /* Step 2 */

Of course, if you're using OSKI at all, you probably want to call OSKI kernels. Here, the user has replaced her matrix-vector multiply routine with the corresponding OSKI call. Notice that OSKI kernel function signatures mimic the BLAS. Recall that the user must explicitly ask OSKI to tune, so in this example, no tuning occurs. OSKI has the semantics of the user's matrix data structure (ptr, ind, val) and vector data structures (x, y), and so can perform the SpMV accordingly. Question: do users want other base formats besides CSR and CSC? E.g., JAD for legacy apps written for vector architectures?
145
How to Call OSKI: Tune with Explicit Hints
User calls a "tune" routine and may provide explicit tuning hints (optional).

oski_matrix_t A_tunable = oski_CreateMatCSR( … );
/* … */
/* Tell OSKI we will call SpMV 500 times (workload hint) */
oski_SetHintMatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view, 500);
/* Tell OSKI we think the matrix has 8x8 blocks (structural hint) */
oski_SetHint(A_tunable, HINT_SINGLE_BLOCKSIZE, 8, 8);
oski_TuneMat(A_tunable); /* Ask OSKI to tune */
for( i = 0; i < 500; i++ )
  oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);

This slide shows one style of tuning in which the user provides explicit hints. These hints are optional. The call to oski_TuneMat marks the point in program execution at which the application might incur the ~40-SpMV tuning cost. You can substitute symbolic vectors (defined by library constants) for the actual vectors (x_view, y_view).
146
How the User Calls OSKI: Implicit Tuning
Ask the library to infer the workload: the library profiles all kernel calls and may periodically re-tune. What if you don't have any hints, or don't want to provide any? It's OK to call "tune" repeatedly, as done in this example. On each call to "tune", OSKI checks whether there is heavy use of particular kernels, in which case it might decide to tune; when it does not, the call is essentially a no-op. This mode also provides a way to re-tune periodically, though our current implementation does not do that.

oski_matrix_t A_tunable = oski_CreateMatCSR( … );
/* … */
for( i = 0; i < 500; i++ ) {
  oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);
  oski_TuneMat(A_tunable); /* Ask OSKI to tune */
}
147
Quick-and-dirty Parallelism: OSKI-PETSc
Extend PETSc’s distributed memory SpMV (MATMPIAIJ) PETSc Each process stores diag (all-local) and off-diag submatrices OSKI-PETSc: Add OSKI wrappers Each submatrix tuned independently p0 p1 p2 p3
148
OSKI-PETSc Proof-of-Concept Results
Matrix 1: Accelerator cavity design (R. SLAC) N ~ 1 M, ~40 M non-zeros 2x2 dense block substructure Symmetric Matrix 2: Linear programming (Italian Railways) Short-and-fat: 4k x 1M, ~11M non-zeros Highly unstructured Big speedup from cache-blocking: no native PETSc format Evaluation machine: Xeon cluster Peak: 4.8 Gflop/s per node Proof-of-concept experiment to show OSKI-PETSc implementation “works” (I.e., doesn’t change scaling behavior you’d expect from your current PETSc code)
149
Accelerator Cavity Matrix
Roughly 90% of non-zeros in the blocks along the diagonal
150
OSKI-PETSc Performance: Accel. Cavity
(Figure labels) PETSc, p=1: 234 Mflop/s; p=1: 315 Mflop/s; OSKI-PETSc, p=1: 480 Mflop/s; OSKI-PETSc, p=8: 6.2 …
151
Linear Programming Matrix
First 20,000 columns… …
152
OSKI-PETSc Performance: LP Matrix
Note the single-process improvement from cache blocking: 105 Mflop/s vs. 260 Mflop/s (2.48x speedup) Max perf of any implementation p=8: 315 Mflop/s Poor scaling due to significant off-process communication required.
153
Tuning SpMV for Multicore: Architectural Review
154
Cost of memory access in Multicore
When physically partitioned, cache or memory access is non-uniform (latency and bandwidth to memory/cache addresses vary). NUMA = Non-Uniform Memory Access; NUCA = Non-Uniform Cache Access. UCA & UMA architecture (figure: cores sharing one cache and one DRAM). Source: Sam Williams
155
Cost of Memory Access in Multicore
When physically partitioned, cache or memory access is non-uniform (latency and bandwidth to memory/cache addresses vary). NUMA = Non-Uniform Memory Access; NUCA = Non-Uniform Cache Access. NUCA & UMA architecture (figure: cores with partitioned caches, a memory controller hub, and shared DRAM). Source: Sam Williams
157
Cost of Memory Access in Multicore
When physically partitioned, cache or memory access is non-uniform (latency and bandwidth to memory/cache addresses vary). NUMA = Non-Uniform Memory Access; NUCA = Non-Uniform Cache Access. NUCA & NUMA architecture (figure: cores, partitioned caches, and partitioned DRAM). Source: Sam Williams
158
Cost of Memory Access in Multicore
Proper cache locality to maximize performance Core Cache DRAM Source: Sam Williams
159
Cost of Memory Access in Multicore
Proper DRAM locality to maximize performance Core Cache DRAM Source: Sam Williams
160
Optimizing Communication Complexity of Sparse Solvers
Need to modify high-level algorithms to use the new kernel. Example: GMRES for Ax=b where A = 2D Laplacian; x lives on an n-by-n mesh, partitioned on a p^(1/2)-by-p^(1/2) processor grid. A has a "5-point stencil" (Laplacian): (Ax)(i,j) = linear_combination(x(i,j), x(i,j±1), x(i±1,j)). Ex: 18-by-18 mesh on a 3-by-3 processor grid.
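For concreteness, here is a minimal sketch (not from the original slides) of applying this 5-point stencil to a vector stored on an n-by-n mesh. The particular sign/scale convention (4 on the diagonal, -1 on the four neighbors) and the zero values outside the mesh are assumptions for illustration.

import numpy as np

def laplacian_apply(x):
    # Apply the 2D 5-point Laplacian stencil to x on an n-by-n mesh, assuming
    # (A*x)(i,j) = 4*x(i,j) - x(i,j-1) - x(i,j+1) - x(i-1,j) - x(i+1,j)
    # with zero (Dirichlet) values outside the mesh.
    y = 4.0 * x
    y[:, 1:]  -= x[:, :-1]   # west neighbor
    y[:, :-1] -= x[:, 1:]    # east neighbor
    y[1:, :]  -= x[:-1, :]   # north neighbor
    y[:-1, :] -= x[1:, :]    # south neighbor
    return y

x = np.random.rand(18, 18)   # 18-by-18 mesh, as in the example above
y = laplacian_apply(x)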
161
Minimizing Communication
What is the cost = (#flops, #words, #messages) of s steps of standard GMRES?

GMRES, ver. 1:
for i = 1 to s
  w = A * v(i-1)
  MGS(w, v(0), …, v(i-1))
  update v(i), H
endfor
solve LSQ problem with H

(Each processor owns an n/p^(1/2)-by-n/p^(1/2) block of the mesh.)
Cost(A * v) = s * ( 9n^2/p, 4n/p^(1/2), 4 )
Cost(MGS = Modified Gram-Schmidt) = s^2/2 * ( 4n^2/p, log p, log p )
Total cost ~ Cost(A * v) + Cost(MGS)
Can we reduce the latency?
162
Minimizing Communication
Cost(GMRES, ver. 1) = Cost(A*v) + Cost(MGS)
= ( 9sn^2/p, 4sn/p^(1/2), 4s ) + ( 2s^2n^2/p, s^2 log p / 2, s^2 log p / 2 )
How much latency cost from A*v can you avoid? Almost all.

GMRES, ver. 2:
W = [ v, Av, A^2v, …, A^sv ]
[Q,R] = MGS(W)
Build H from R, solve LSQ problem

(Figure: mesh points needed for the case s = 3.)
Cost(W) = ( ~same, ~same, 8 )
Latency cost independent of s – optimal.
Cost(MGS) unchanged.
Can we reduce the latency more?
163
Minimizing Communication
Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS)
= ( 9sn^2/p, 4sn/p^(1/2), 8 ) + ( 2s^2n^2/p, s^2 log p / 2, s^2 log p / 2 )
How much latency cost from MGS can you avoid? Almost all.

GMRES, ver. 3:
W = [ v, Av, A^2v, …, A^sv ]
[Q,R] = TSQR(W) … "Tall Skinny QR" (see Lecture 11)
Build H from R, solve LSQ problem

Cost(TSQR) = ( ~same, ~same, log p )
Latency cost independent of s – optimal.
(Figure: TSQR reduction tree; W = [W1; W2; W3; W4], local factors R1, R2, R3, R4 are combined into R12 and R34, then into R1234.)
164
Minimizing Communication
Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS)
= ( 9sn^2/p, 4sn/p^(1/2), 8 ) + ( 2s^2n^2/p, s^2 log p / 2, s^2 log p / 2 )
How much latency cost from MGS can you avoid? Almost all.

GMRES, ver. 3:
W = [ v, Av, A^2v, …, A^sv ]
[Q,R] = TSQR(W) … "Tall Skinny QR" (see Lecture 11)
Build H from R, solve LSQ problem

(Figure: TSQR reduction tree; W = [W1; W2; W3; W4], R1…R4 combined into R12, R34, then R1234.)
Cost(TSQR) = ( ~same, ~same, log p )
Oops…
165
Minimizing Communication
Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS)
= ( 9sn^2/p, 4sn/p^(1/2), 8 ) + ( 2s^2n^2/p, s^2 log p / 2, s^2 log p / 2 )
How much latency cost from MGS can you avoid? Almost all.

GMRES, ver. 3:
W = [ v, Av, A^2v, …, A^sv ]
[Q,R] = TSQR(W) … "Tall Skinny QR" (see Lecture 11)
Build H from R, solve LSQ problem

(Figure: TSQR reduction tree; W = [W1; W2; W3; W4], R1…R4 combined into R12, R34, then R1234.)
Cost(TSQR) = ( ~same, ~same, log p )
Oops – W comes from the power method, so precision is lost!
167
Minimizing Communication
Cost(GMRES, ver. 3) = Cost(W) + Cost(TSQR)
= ( 9sn^2/p, 4sn/p^(1/2), 8 ) + ( 2s^2n^2/p, s^2 log p / 2, log p )
Latency cost independent of s, just log p – optimal.
Oops – W comes from the power method, so precision is lost. What to do?
Use a different polynomial basis: not the monomial basis W = [v, Av, A^2v, …], but instead
Newton basis: W_N = [v, (A – θ_1 I)v, (A – θ_2 I)(A – θ_1 I)v, …], or
Chebyshev basis: W_C = [v, T_1(v), T_2(v), …]
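A minimal sketch of building such a Newton basis, assuming the shifts θ_i are given (in practice they might be chosen from Ritz value estimates; the names below are illustrative only, not from the slides):

import numpy as np

def newton_basis(A, v, thetas):
    # Build W = [v, (A - theta_1 I)v, (A - theta_2 I)(A - theta_1 I)v, ...].
    # A may be any object supporting A @ w (dense array, scipy.sparse matrix,
    # or LinearOperator). Returns an n-by-(s+1) array, s = len(thetas).
    W = [v]
    w = v
    for theta in thetas:
        w = A @ w - theta * w     # apply (A - theta*I) to the previous vector
        W.append(w)
    return np.column_stack(W)

# The monomial basis is the special case theta_i = 0:
# W_monomial = newton_basis(A, v, [0.0] * s)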
169
Tuning Higher Level Algorithms
So far we have tuned a single sparse matrix kernel, y = A^T*A*x, motivated by a higher-level algorithm (SVD). What can we do by extending tuning to a higher level? Consider Krylov subspace methods for Ax=b and Ax=λx: Conjugate Gradients (CG), GMRES, Lanczos, … The inner loop does y=A*x, dot products, saxpys, and scalar ops, and costs at least O(1) messages, so k iterations cost at least O(k) messages. Our goal: show how to do k iterations with O(1) messages. Possible payoff – make Krylov subspace methods much faster on machines with slow networks. Memory bandwidth improvements too (not discussed). Obstacles: numerical stability, preconditioning, …
170
Krylov Subspace Methods for Solving Ax=b
Compute a basis for a subspace V by doing y = A*x k times, then find the "best" solution in that Krylov subspace V. Given a starting vector x_1, V is spanned by x_2 = A*x_1, x_3 = A*x_2, …, x_k = A*x_{k-1}. GMRES finds an orthogonal basis of V by Gram-Schmidt, so it actually does a different set of SpMVs than in the last bullet.
171
Example: Standard GMRES
r = b - A*x_1, β = length(r), v_1 = r / β        … length(r) = sqrt( Σ_i r_i^2 )
for m = 1 to k do
  w = A*v_m                                      … at least O(1) messages
  for i = 1 to m do                              … Gram-Schmidt
    h_{i,m} = dotproduct(v_i, w)                 … at least O(1) messages, or log(p)
    w = w – h_{i,m} * v_i
  end for
  h_{m+1,m} = length(w)                          … at least O(1) messages, or log(p)
  v_{m+1} = w / h_{m+1,m}
end for
find y minimizing length( H_k * y – β·e_1 )      … small, local problem
new x = x_1 + V_k * y                            … V_k = [v_1, v_2, …, v_k]

Dot products and lengths actually need O(log p) messages each, so this is O(k^2), or O(k^2 log p), messages altogether; CG would be O(k log p).
172
Example: Computing [Ax,A2x,A3x,…,Akx] for A tridiagonal
Different basis for the same Krylov subspace. What can Proc 1 compute without communication? (Figure: entries x(1:30), (Ax)(1:30), (A^2x)(1:30), …, (A^8x)(1:30), laid out across Proc 1 and Proc 2.)
173
Example: Computing [Ax,A2x,A3x,…,Akx] for A tridiagonal
Computing the missing entries with 1 communication and redundant work. (Figure: x(1:30), (Ax)(1:30), (A^2x)(1:30), …, (A^8x)(1:30) across Proc 1 and Proc 2.)
174
Example: Computing [Ax,A2x,A3x,…,Akx] for A tridiagonal
Saving half the redundant work. (Figure: x(1:30), (Ax)(1:30), (A^2x)(1:30), …, (A^8x)(1:30) across Proc 1 and Proc 2.)
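A sequential sketch of the idea in these figures, under the assumption of a simple tridiagonal A stored as three diagonals a (sub), b (main), c (super): the processor that owns rows [lo, hi) receives its k ghost entries from each neighbor once, then computes its owned slices of Ax, …, A^k x with redundant local work and no further communication. All names here are illustrative.

import numpy as np

def local_powers_tridiag(x, lo, hi, k, a, b, c):
    # A[i,i-1] = a[i], A[i,i] = b[i], A[i,i+1] = c[i].
    n = len(x)
    g_lo, g_hi = max(lo - k, 0), min(hi + k, n)   # ghost region, received once
    xloc = x[g_lo:g_hi].copy()
    out = []
    for j in range(1, k + 1):
        ynew = np.zeros_like(xloc)
        for i in range(len(xloc)):                # plain tridiagonal mat-vec
            gi = g_lo + i
            ynew[i] = b[gi] * xloc[i]
            if i > 0:
                ynew[i] += a[gi] * xloc[i - 1]
            if i < len(xloc) - 1:
                ynew[i] += c[gi] * xloc[i + 1]
        xloc = ynew                               # edge entries go stale by 1 per step,
        out.append(xloc[lo - g_lo: hi - g_lo].copy())   # but the owned slice stays correct
    return out   # out[j-1] holds the owned slice of A^j x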
175
Example: Computing [Ax,A2x,A3x,…,Akx] for Laplacian
A = 5pt Laplacian in 2D, Communicated point for k=3 shown
176
Latency-Avoiding GMRES (1)
r = b - A*x_1, β = length(r), v_1 = r / β            … O(log p) messages
W_{k+1} = [ v_1, A*v_1, A^2*v_1, …, A^k*v_1 ]        … O(1) messages
[Q, R] = qr(W_{k+1})                                 … QR decomposition, O(log p) messages
H_k = R(:, 2:k+1) * (R(1:k, 1:k))^(-1)               … small, local problem
find y minimizing length( H_k * y – β·e_1 )          … small, local problem
new x = x_1 + Q_k * y                                … local problem

O(log p) messages altogether – independent of k.
177
Latency-Avoiding GMRES (2)
[Q, R] = qr(W_{k+1}) … QR decomposition, O(log p) messages
An easy, but not so stable, way to do it:
X(myproc) = W_{k+1}^T(myproc) * W_{k+1}(myproc)      … local computation
Y = sum_reduction( X(myproc) )                       … O(log p) messages; Y = W_{k+1}^T * W_{k+1}
R = (cholesky(Y))^T                                  … small, local computation
Q(myproc) = W_{k+1}(myproc) * R^(-1)                 … local computation
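The "easy but not so stable" recipe above is essentially a Cholesky-based QR. A minimal sequential sketch, assuming numpy/scipy (the parallel sum_reduction is replaced by an ordinary matrix product; names are illustrative):

import numpy as np
from scipy.linalg import solve_triangular

def cholesky_qr(W):
    # Y = W^T W; R = chol(Y)^T (upper triangular); Q = W R^{-1}
    Y = W.T @ W                                   # one reduction in parallel
    R = np.linalg.cholesky(Y).T                   # small, local Cholesky
    Q = solve_triangular(R.T, W.T, lower=True).T  # solves R^T Q^T = W^T, i.e., Q = W R^{-1}
    return Q, R

# Only forming Y requires communication; the Cholesky and the triangular
# solve are small (k+1)-by-(k+1) local operations.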
178
Numerical example (1): Diagonal matrix with n=1000, A_ii from 1 down to 10^-5. Instability as k grows, after many iterations. By using higher precision, the instabilities start later and later.
179
Partial remedy: restarting periodically (every 120 iterations)
Numerical Example (2) Partial remedy: restarting periodically (every 120 iterations) Other remedies: high precision, different basis than [v , A * v , … , Ak * v ]
180
Operation Counts for [Ax,A2x,A3x,…,Akx] on p procs
Per-processor costs, standard vs. optimized:

1D mesh (tridiagonal):
  #messages: 2k vs. 2
  #words sent: …
  #flops: 5kn/p vs. 5kn/p + 5k^2
  memory: (k+4)n/p vs. (k+4)n/p + 8k

3D mesh (27-point stencil):
  #messages: 26k vs. 26
  #words sent: 6kn^2 p^(-2/3) + …knp^(-1/3) + O(k) vs. 6kn^2 p^(-2/3) + …k^2 n p^(-1/3) + O(k^3)
  #flops: 53kn^3/p vs. 53kn^3/p + O(k^2 n^2 p^(-2/3))
  memory: (k+28)n^3/p + 6n^2 p^(-2/3) + O(np^(-1/3)) vs. (k+28)n^3/p + 168kn^2 p^(-2/3) + O(k^2 np^(-1/3))

Assume the problem is large, so k << n/p in the 1D case and k << n/p^(1/3) in the 3D case.
181
Summary and Future Work
Dense LAPACK ScaLAPACK Communication primitives Sparse Kernels, Stencils Higher level algorithms All of the above on new architectures Vector, SMPs, multicore, Cell, … High level support for tuning Specification language Integration into compilers
182
Extra Slides
183
A Sparse Matrix You Encounter Every Day
Who am I? I am a Big Repository Of useful And useless Facts alike. Who am I? (Hint: Not your inbox.) You probably use a sparse matrix everyday, albeit indirectly. Here, I show a million x million submatrix of the web connectivity graph, constructed from an archive at the Stanford WebBase.
184
What about the Google Matrix?
Google approach: approx. once a month, rank all pages using the connectivity structure, i.e., find the dominant eigenvector of a matrix; at query time, return the list of pages ordered by rank. Matrix: A = αG + (1-α)(1/n)uu^T. Markov model: a surfer follows a link with probability α and jumps to a random page with probability 1-α. G is the n x n connectivity matrix [n ≈ billions]; g_ij is non-zero if page i links to page j, normalized so each column sums to 1; very sparse, about 7-8 non-zeros per row (power-law distribution). u is a vector of all 1 values. The steady-state probability x_i of landing on page i is the solution to x = Ax. Approximate x by the power method: x = A^k x_0; in practice, k ≈ 25. The matrix to which Google applies the power method is the sum of the connectivity graph matrix G plus a rank-one update. According to Cleve's Corner, α ≈ 0.85. Macroscopic connectivity structure: Kumar, et al., "The web as a graph," PODS. Size of G as of May '02: Cleve's Corner, "The world's largest sparse matrix computation." "k ≈ 25": Taher Haveliwala of Stanford (student of Jeff Ullman), private communication: "For 120M pages, [the computation takes] about 5 hours for 25 iterations, no parallelization. The limiting factor is the disk speed for sequentially reading in the 8 GB matrix (it is assumed to be larger than main memory)." [13 Mar '02]
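A minimal sketch of this power iteration using scipy.sparse, keeping the rank-one term implicit so that each step costs one SpMV with G plus O(n) vector work; the tiny 3-page G below is made up purely for illustration.

import numpy as np
import scipy.sparse as sp

def pagerank_power(G, alpha=0.85, k=25):
    # Power method x <- A x with A = alpha*G + (1-alpha)*(1/n)*u*u^T,
    # G an n-by-n sparse matrix whose columns are assumed to sum to 1.
    # The rank-one term is never formed explicitly.
    n = G.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(k):
        x = alpha * (G @ x) + (1.0 - alpha) * x.sum() / n
        x /= x.sum()              # keep x a probability distribution
    return x

# Toy example: 3-page web with a column-stochastic link matrix
G = sp.csc_matrix(np.array([[0.0, 0.5, 1.0],
                            [0.5, 0.0, 0.0],
                            [0.5, 0.5, 0.0]]))
ranks = pagerank_power(G, alpha=0.85, k=25)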
185
Current Work Public software release
Impact on library designs: Sparse BLAS, Trilinos, PETSc, … Integration in large-scale applications DOE: Accelerator design; plasma physics Geophysical simulation based on Block Lanczos (ATA*X; LBL) Systematic heuristics for data structure selection? Evaluation of emerging architectures Revisiting vector micros Other sparse kernels Matrix triple products, Ak*x Parallelism Sparse benchmarks (with UTK) [Gahvari & Hoemmen] Automatic tuning of MPI collective ops [Nishtala, et al.]
186
Summary of High-Level Themes
“Kernel-centric” optimization Vs. basic block, trace, path optimization, for instance Aggressive use of domain-specific knowledge Performance bounds modeling Evaluating software quality Architectural characterizations and consequences Empirical search Hybrid on-line/run-time models Statistical performance models Exploit information from sampling, measuring
187
Related Work My bibliography: 337 entries so far
Sample area 1: Code generation Generative & generic programming Sparse compilers Domain-specific generators Sample area 2: Empirical search-based tuning Kernel-centric linear algebra, signal processing, sorting, MPI, … Compiler-centric profiling + FDO, iterative compilation, superoptimizers, self-tuning compilers, continuous program optimization
188
Future Directions (A Bag of Flaky Ideas)
Composable code generators and search spaces New application domains PageRank: multilevel block algorithms for topic-sensitive search? New kernels: cryptokernels rich mathematical structure germane to performance; lots of hardware New tuning environments Parallel, Grid, “whole systems” Statistical models of application performance Statistical learning of concise parametric models from traces for architectural evaluation Compiler/automatic derivation of parametric models
189
Possible Future Work Different Architectures Different Kernels
New FP instruction sets (SSE2), SMP / multicore platforms, vector architectures. Different kernels and higher-level algorithms: parallel versions of kernels with optimized communication, block algorithms (e.g., Lanczos), XBLAS (extra precision). Produce benchmarks: augment the HPCC Benchmark. Make it possible to combine optimizations easily for any kernel. Related tuning activities (LAPACK & ScaLAPACK).
190
Review of Tuning by Illustration
(Extra Slides)
191
Splitting for Variable Blocks and Diagonals
Decompose A = A1 + A2 + … At Detect “canonical” structures (sampling) Split Tune each Ai Improve performance and save storage New data structures Unaligned block CSR Relax alignment in rows & columns Row-segmented diagonals
192
Example: Variable Block Row (Matrix #12)
over CSR 1.8x over RB
193
Example: Row-Segmented Diagonals
over CSR This slide is animated.
194
Mixed Diagonal and Block Structure
195
Summary Automated block size selection Not fully automated
Empirical modeling and search Register blocking for SpMV, triangular solve, ATA*x Not fully automated Given a matrix, select splittings and transformations Lots of combinatorial problems TSP reordering to create dense blocks (Pinar ’97; Moon, et al. ’04) TO DO: Redo!
196
Extra Slides
197
A Sparse Matrix You Encounter Every Day
Who am I? I am a Big Repository Of useful And useless Facts alike. Who am I? (Hint: Not your inbox.) You probably use a sparse matrix everyday, albeit indirectly. Here, I show a million x million submatrix of the web connectivity graph, constructed from an archive at the Stanford WebBase.
198
Problem Context Sparse kernels abound Historical trends
Models of buildings, cars, bridges, economies, … Google PageRank algorithm Historical trends Sparse matrix-vector multiply (SpMV): 10% of peak 2x faster with “hand-tuning” Tuning becoming more difficult over time Promise of automatic tuning: PHiPAC/ATLAS, FFTW, … Challenges to high-performance Not dense linear algebra! Complex data structures: indirect, irregular memory access Performance depends strongly on run-time inputs Historical trends: we’ll look at the data that supports these claims shortly
199
Key Questions, Ideas, Conclusions
How to tune basic sparse kernels automatically? Empirical modeling and search: up to 4x speedups for SpMV; 1.8x for triangular solve; 4x for A^T*A*x; 2x for A^2*x; 7x for multiple vectors. What are the fundamental limits on performance? Kernel-, machine-, and matrix-specific upper bounds; we achieve 75% or more of them for SpMV, limiting the value of further low-level tuning. Consequences for architecture? General techniques for empirical search-based tuning? Statistical models of performance. Reference implementations based on CSR format (to be discussed).
200
Road Map Sparse matrix-vector multiply (SpMV) in a nutshell
Historical trends and the need for search Automatic tuning techniques Upper bounds on performance Statistical models of performance
201
Compressed Sparse Row (CSR) Storage
A one-slide crash course on a basic sparse matrix data structure (compressed sparse row, or CSR, format) and a basic kernel: matrix-vector multiply. In CSR: store each non-zero value, but also an extra index for each non-zero; less storage than dense, but a higher "rate" of storage per non-zero. Matrix-vector multiply (SpMV): no reuse of A, only of the vectors x and y, taken to be dense. SpMV with CSR storage makes indirect, irregular accesses to x. So what you'd like to do is: unroll the k loop, but you need to know how many non-zeros per row; hoist y[i], not a problem absent aliasing; eliminate ind[k], an extra load, but you need to know the non-zero pattern; reuse elements of x, but again you need to know the non-zero pattern.

Matrix-vector multiply kernel: y(i) ← y(i) + A(i,j)*x(j)
for each row i
  for k = ptr[i] to ptr[i+1]-1 do
    y[i] = y[i] + val[k]*x[ind[k]]
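A direct, runnable transcription of this kernel (Python is used here purely for illustration; a tuned implementation would be in C, as discussed above):

import numpy as np

def spmv_csr(ptr, ind, val, x, y):
    # y(i) <- y(i) + A(i,j)*x(j) for A stored in CSR arrays (ptr, ind, val)
    nrows = len(ptr) - 1
    for i in range(nrows):
        yi = y[i]                         # "hoist" y[i]
        for k in range(ptr[i], ptr[i + 1]):
            yi += val[k] * x[ind[k]]      # indirect, irregular access to x
        y[i] = yi
    return y

# Example: the 2x2 matrix [[1, 2], [0, 3]] in CSR
ptr = np.array([0, 2, 3]); ind = np.array([0, 1, 1]); val = np.array([1.0, 2.0, 3.0])
x = np.array([1.0, 1.0]); y = np.zeros(2)
spmv_csr(ptr, ind, val, x, y)   # y becomes [3.0, 3.0]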
202
Road Map Sparse matrix-vector multiply (SpMV) in a nutshell
Historical trends and the need for search Automatic tuning techniques Upper bounds on performance Statistical models of performance
203
Historical Trends in SpMV Performance
The Data Uniprocessor SpMV performance since 1987 “Untuned” and “Tuned” implementations Cache-based superscalar micros; some vectors LINPACK
204
SpMV Historical Trends: Mflop/s
205
Example: The Difficulty of Tuning
nnz = 1.5 M kernel: SpMV Source: NASA structural analysis problem FEM discretization from a structural analysis problem. Source: Matrix “olafu”, provided by NASA via the University of Florida Sparse Matrix Collection
206
Still More Surprises More complicated non-zero structure in general
An enlarged submatrix for “ex11”, from an FEM fluid flow application. Original matrix: n = 16614, nnz = 1.1M Source: UF Sparse Matrix Collection
207
Still More Surprises More complicated non-zero structure in general
Example: 3x3 blocking Logical grid of 3x3 cells Suppose we know we want to use 3x3 blocking. SPARSITY lays a logical 3x3 grid on the matrix…
208
Historical Trends: Mixed News
Observations Good news: Moore’s law like behavior Bad news: “Untuned” is 10% peak or less, worsening Good news: “Tuned” roughly 2x better today, and improving Bad news: Tuning is complex (Not really news: SpMV is not LINPACK) Questions Application: Automatic tuning? Architect: What machines are good for SpMV?
209
Road Map Sparse matrix-vector multiply (SpMV) in a nutshell
Historical trends and the need for search Automatic tuning techniques SpMV [SC’02; IJHPCA ’04b] Sparse triangular solve (SpTS) [ICS/POHLL ’02] ATA*x [ICCS/WoPLA ’03] Upper bounds on performance Statistical models of performance
210
SPARSITY: Framework for Tuning SpMV
SPARSITY: Automatic tuning for SpMV [Im & Yelick ’99] General approach Identify and generate implementation space Search space using empirical models & experiments Prototype library and heuristic for choosing register block size Also: cache-level blocking, multiple vectors What’s new? New block size selection heuristic Within 10% of optimal — replaces previous version Expanded implementation space Variable block splitting, diagonals, combinations New kernels: sparse triangular solve, ATA*x, Ar*x
211
Automatic Register Block Size Selection
Selecting the r x c block size Off-line benchmark: characterize the machine Precompute Mflops(r,c) using dense matrix for each r x c Once per machine/architecture Run-time “search”: characterize the matrix Sample A to estimate Fill(r,c) for each r x c Run-time heuristic model Choose r, c to maximize Mflops(r,c) / Fill(r,c) Run-time costs Up to ~40 SpMVs (empirical worst case) Using this approach, we do choose the right block sizes for the two previous examples.
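A rough sketch of this heuristic, with the off-line Mflops(r,c) table assumed to be given (here as a dictionary keyed by (r,c)) and Fill(r,c) estimated by sampling block rows. All names and the sampling fraction are illustrative, not OSKI's actual interface.

import numpy as np
import scipy.sparse as sp

def estimate_fill(A, r, c, sample_frac=0.1):
    # Estimate Fill(r,c) = (stored entries with r-by-c blocking) / nnz(A)
    # by sampling a fraction of the block rows of A (CSR).
    A = A.tocsr()
    m = A.shape[0]
    n_block_rows = (m + r - 1) // r
    sample = np.random.choice(n_block_rows,
                              max(1, int(sample_frac * n_block_rows)),
                              replace=False)
    nnz_sampled, blocks = 0, 0
    for bi in sample:
        rows = A[bi * r: min((bi + 1) * r, m)]
        nnz_sampled += rows.nnz
        block_cols = rows.indices // c          # block-column index of each nonzero
        blocks += len(np.unique(block_cols))    # distinct r-by-c blocks in this block row
    return (blocks * r * c) / max(nnz_sampled, 1)

def choose_block_size(A, mflops, sizes=(1, 2, 3, 4, 6, 8)):
    # Pick (r,c) maximizing Mflops(r,c) / Fill(r,c); mflops is the
    # machine-specific table from the off-line dense benchmark (assumed given).
    best, best_score = (1, 1), 0.0
    for r in sizes:
        for c in sizes:
            score = mflops[(r, c)] / estimate_fill(A, r, c)
            if score > best_score:
                best, best_score = (r, c), score
    return best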
212
Accuracy of the Tuning Heuristics (1/4)
DGEMV NOTE: “Fair” flops used (ops on explicit zeros not counted as “work”)
213
Accuracy of the Tuning Heuristics (2/4)
DGEMV
214
Accuracy of the Tuning Heuristics (3/4)
DGEMV
215
Accuracy of the Tuning Heuristics (4/4)
DGEMV
216
Road Map Sparse matrix-vector multiply (SpMV) in a nutshell
Historical trends and the need for search Automatic tuning techniques Upper bounds on performance SC’02 Statistical models of performance
217
Motivation for Upper Bounds Model
Questions Speedups are good, but what is the speed limit? Independent of instruction scheduling, selection What machines are “good” for SpMV?
218
Upper Bounds on Performance: Blocked SpMV
P = (flops) / (time); flops = 2 * nnz(A). Lower bound on time, under two main assumptions: 1. count memory ops only (streaming); 2. count only compulsory and capacity misses, ignoring conflicts. Account for line sizes, and for matrix size and nnz. Charge the minimum access "latency" α_i at the L_i cache and α_mem at memory, e.g., from the Saavedra-Barrera and PMaC MAPS benchmarks.
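A minimal sketch of turning this model into a number: flops = 2*nnz, and a lower bound on time charges each load the minimum latency of the level where it hits. The hit counts and latencies below are hypothetical placeholders, not measurements.

def spmv_upper_bound_mflops(nnz, hits, latencies_ns):
    # hits[i]         -- loads satisfied at level i (L1, L2, ..., memory)
    # latencies_ns[i] -- minimum access latency of level i, in nanoseconds
    t_lower_ns = sum(h * a for h, a in zip(hits, latencies_ns))
    return 2.0 * nnz / (t_lower_ns * 1e-9) / 1e6     # upper bound in Mflop/s

# Hypothetical example: 1M nonzeros, loads split across L1 / L2 / memory
bound = spmv_upper_bound_mflops(nnz=1_000_000,
                                hits=[2_500_000, 400_000, 150_000],
                                latencies_ns=[0.5, 3.0, 50.0])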
219
Example: Bounds on Itanium 2
220
Example: Bounds on Itanium 2
221
Example: Bounds on Itanium 2
222
Fraction of Upper Bound Across Platforms
223
Achieved Performance and Machine Balance
Machine balance [Callahan '88; McCalpin '95]: Balance = Peak Flop Rate / Bandwidth (flops / double). Ideal balance for mat-vec: ≤ 2 flops / double; for SpMV, even less. SpMV ~ streaming: 1 / (avg load time to stream 1 array) ~ (bandwidth). "Sustained" balance = peak flops / model bandwidth.
225
Where Does the Time Go? Most time assigned to memory
Caches “disappear” when line sizes are equal Strictly increasing line sizes
226
Execution Time Breakdown: Matrix 40
TODO: Update this figure from thesis
227
Speedups with Increasing Line Size
228
Summary: Performance Upper Bounds
What is the best we can do for SpMV? Limits to low-level tuning of blocked implementations Refinements? What machines are good for SpMV? Partial answer: balance characterization Architectural consequences? Example: Strictly increasing line sizes
229
Road Map Sparse matrix-vector multiply (SpMV) in a nutshell
Historical trends and the need for search Automatic tuning techniques Upper bounds on performance Tuning other sparse kernels Statistical models of performance FDO ’00; IJHPCA ’04a
230
Statistical Models for Automatic Tuning
Idea 1: Statistical criterion for stopping a search A general search model Generate implementation Measure performance Repeat Stop when probability of being within e of optimal falls below threshold Can estimate distribution on-line Idea 2: Statistical performance models Problem: Choose 1 among m implementations at run-time Sample performance off-line, build statistical model
231
Example: Select a Matmul Implementation
232
Example: Support Vector Classification
233
Road Map Sparse matrix-vector multiply (SpMV) in a nutshell
Historical trends and the need for search Automatic tuning techniques Upper bounds on performance Tuning other sparse kernels Statistical models of performance Summary and Future Work
234
Summary of High-Level Themes
“Kernel-centric” optimization Vs. basic block, trace, path optimization, for instance Aggressive use of domain-specific knowledge Performance bounds modeling Evaluating software quality Architectural characterizations and consequences Empirical search Hybrid on-line/run-time models Statistical performance models Exploit information from sampling, measuring
235
Related Work My bibliography: 337 entries so far
Sample area 1: Code generation Generative & generic programming Sparse compilers Domain-specific generators Sample area 2: Empirical search-based tuning Kernel-centric linear algebra, signal processing, sorting, MPI, … Compiler-centric profiling + FDO, iterative compilation, superoptimizers, self-tuning compilers, continuous program optimization
236
Future Directions (A Bag of Flaky Ideas)
Composable code generators and search spaces New application domains PageRank: multilevel block algorithms for topic-sensitive search? New kernels: cryptokernels rich mathematical structure germane to performance; lots of hardware New tuning environments Parallel, Grid, “whole systems” Statistical models of application performance Statistical learning of concise parametric models from traces for architectural evaluation Compiler/automatic derivation of parametric models
237
Acknowledgements Super-advisors: Jim and Kathy
Undergraduate R.A.s: Attila, Ben, Jen, Jin, Michael, Rajesh, Shoaib, Sriram, Tuyet-Linh See pages xvi—xvii of dissertation.
238
TSP-based Reordering: Before
(Pinar ’97; Moon, et al ‘04)
239
TSP-based Reordering: After
(Pinar ’97; Moon, et al ‘04) Up to 2x speedups over CSR
240
Example: L2 Misses on Itanium 2
Misses measured using PAPI [Browne ’00]
241
Example: Distribution of Blocked Non-Zeros
TO DO: Replace this with ex11 spy plot
242
Register Profile: Itanium 2
1190 Mflop/s 190 Mflop/s
243
Register Profiles: Sun and Intel x86
Ultra 2i - 11% 72 Mflop/s Ultra 3 - 5% 90 Mflop/s Register Profiles: Sun and Intel x86 35 Mflop/s 50 Mflop/s Pentium III - 21% Pentium III-M - 15% 108 Mflop/s 122 Mflop/s 42 Mflop/s 58 Mflop/s
244
Register Profiles: IBM and Intel IA-64
Power3 - 17% 252 Mflop/s Power4 - 16% 820 Mflop/s Register Profiles: IBM and Intel IA-64 122 Mflop/s 459 Mflop/s Itanium 1 - 8% Itanium % 247 Mflop/s 1.2 Gflop/s 107 Mflop/s 190 Mflop/s
245
Accurate and Efficient Adaptive Fill Estimation
Idea: sample the matrix. Fraction of matrix to sample: s ∈ [0,1]; cost ~ O(s * nnz). Control the cost by controlling s. Searching at run-time: the constant matters! Control s automatically by computing statistical confidence intervals; idea: monitor the variance. Cost of tuning: lower bound is converting the matrix, 5 to 40 unblocked SpMVs; the heuristic costs 1 to 11 SpMVs.
246
Sparse/Dense Partitioning for SpTS
Partition L into sparse parts (L1, L2) and a dense trailing triangle LD. Perform SpTS in three steps: (1) sparse triangular solve with L1, (2) sparse mat-vec update with L2, (3) dense triangular solve with LD. Sparsity optimizations for (1)-(2); DTRSV for (3). Tuning parameters: block size, size of the dense triangle.
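A sketch of the three steps, assuming L is partitioned as [[L1, 0], [L2, LD]] with L1 (m-by-m) sparse lower triangular, L2 sparse, and LD the dense trailing triangle. scipy is used for illustration; Sparsity/OSKI would supply the tuned sparse parts and a tuned BLAS the DTRSV.

import numpy as np
from scipy.sparse.linalg import spsolve_triangular
from scipy.linalg import solve_triangular

def spts_split(L1, L2, LD, b, m):
    # Solve L x = b where L = [[L1, 0], [L2, LD]]
    b1, b2 = b[:m], b[m:]
    x1 = spsolve_triangular(L1.tocsr(), b1, lower=True)   # (1) sparse solve
    b2p = b2 - L2 @ x1                                     # (2) sparse mat-vec update
    x2 = solve_triangular(LD, b2p, lower=True)             # (3) dense DTRSV
    return np.concatenate([x1, x2])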
247
SpTS Performance: Power3
249
Summary of SpTS and AAT*x Results
SpTS — Similar to SpMV 1.8x speedups; limited benefit from low-level tuning AATx, ATAx Cache interleaving only: up to 1.6x speedups Reg + cache: up to 4x speedups 1.8x speedup over register only Similar heuristic; same accuracy (~ 10% optimal) Further from upper bounds: 60—80% Opportunity for better low-level tuning a la PHiPAC/ATLAS Matrix triple products? Ak*x? Preliminary work
250
Register Blocking: Speedup
251
Register Blocking: Performance
252
Register Blocking: Fraction of Peak
253
Example: Confidence Interval Estimation
254
Costs of Tuning
255
Splitting + UBCSR: Pentium III
256
Splitting + UBCSR: Power4
257
Splitting+UBCSR Storage: Power4
259
Example: Variable Block Row (Matrix #13)
260
Dense Tuning is Hard, Too
Even dense matrix multiply can be notoriously difficult to tune
261
2-D cross section of 3-D space.
Dense matrix multiply: surprising performance as register tile size varies.
263
Preliminary Results (Matrix Set 2): Itanium 2
Max speedups are several times faster than on earlier architectures (c.f., 200 Mflop/s on Itanium 1-800, 150 Mflop/s on Power 3, 300 Mflop/s on Pentium GHz). Categories: Matrix 0-2: dense, dense lower triangle, dense upper triangle. Matrix 3-18: FEM (uniform blocking). Matrix 19-38: FEM (variable blocking). Matrix 39-42: protein modeling. Matrix 43: quantum chromodynamics calculation. Matrix 44-46: chemistry and chemical engineering. Matrix 47-49: circuits. Matrix 50-57: macroeconomic modeling. Matrix 58-66: statistical modeling. Matrix 67-68: finite-field problems. Matrix 69-77: linear programming. Matrix 78-81: information retrieval. Web/IR Dense FEM FEM (var) Bio Econ Stat LP
264
Multiple Vector Performance
On a dense matrix in sparse format, with multiple vectors we can get close to peak machine speed (up to 3 Gflop/s, peak on this machine is 3.6 Gflop/s).
265
“40: webdoc” is the LSI matrix
“40: webdoc” is the LSI matrix. (Figure taken from IJHPCA paper, to appear) “41—42” are linear programming matrices. 43 is also linear programming (Scheduling problem from Italian Railways) NOTE: Matrix numbers (40—44) not consistent with previous slide.
266
What about the Google Matrix?
Google approach: approx. once a month, rank all pages using the connectivity structure, i.e., find the dominant eigenvector of a matrix; at query time, return the list of pages ordered by rank. Matrix: A = αG + (1-α)(1/n)uu^T. Markov model: a surfer follows a link with probability α and jumps to a random page with probability 1-α. G is the n x n connectivity matrix [n ≈ 3 billion]; g_ij is non-zero if page i links to page j, normalized so each column sums to 1; very sparse, about 7-8 non-zeros per row (power-law distribution). u is a vector of all 1 values. The steady-state probability x_i of landing on page i is the solution to x = Ax. Approximate x by the power method: x = A^k x_0; in practice, k ≈ 25. The matrix to which Google applies the power method is the sum of the connectivity graph matrix G plus a rank-one update. According to Cleve's Corner, α ≈ 0.85. Macroscopic connectivity structure: Kumar, et al., "The web as a graph," PODS. Size of G as of May '02: Cleve's Corner, "The world's largest sparse matrix computation." "k ≈ 25": Taher Haveliwala of Stanford (student of Jeff Ullman), private communication: "For 120M pages, [the computation takes] about 5 hours for 25 iterations, no parallelization. The limiting factor is the disk speed for sequentially reading in the 8 GB matrix (it is assumed to be larger than main memory)." [13 Mar '02]
267
The blue bars compare the effect of just reordering the matrix—no significant improvements.
The red bars compare the effect of combined reordering and register blocking. Each red bar is labeled by the block size chosen and by the fill (in parentheses). We are currently looking at splitting the matrix to reduce this fill overhead while sustaining the performance improvement. In terms of storage cost (MB for double-precision values and 32-bit integers): Size(natural, RCM, MMD, no blocking) ≈ 35.5 MB; Size(natural, 2x1) ≈ 44.3 MB, or ~1.25x; Size(RCM, 3x1) ≈ 54.9 MB, or ~1.55x; Size(MMD, 4x1) ≈ 66.3 MB, or ~1.87x. Let k = no. of non-zeros, f the fill, and r x c the block size, and suppose sizeof(floating-point data type) = sizeof(integer data type). Then the size of CSR storage is sizeof(A, 'csr') = 2k (ignoring row pointers), and sizeof(A, 'rxc') = kf * (1 + 1/(rc)). So we expect a storage savings, even with fill, when sizeof(A, 'rxc') < sizeof(A, 'csr'): kf * (1 + 1/(rc)) < 2k, i.e., f < 2 / (1 + 1/(rc)). As rc → ∞, this condition becomes f < 2 (this case corresponds to the floating-point values and integers being both 32-bit or both 64-bit, for example). If sizeof(real) = 2*sizeof(int), the condition becomes f < 1.5 as rc → ∞ (values in double precision, indices in 32-bit integers). If sizeof(real) = 0.5*sizeof(int), the condition becomes f < 3 as rc → ∞ (values in single precision, indices in 64-bit integers).
268
MAPS Benchmark Example: Power4
269
MAPS Benchmark Example: Itanium 2
270
Saavedra-Barrera Example: Ultra 2i
271
I think we can mention the bounds without talking about them in detail
I think we can mention the bounds without talking about them in detail. The main point is that this picture is a guide to what performance we get without blocking (reference) and what we might expect (bounds). Note that absolute performance is highly dependent on matrix structure.
272
Summary of Results: Pentium III
Plot of the best block size (exhaustive search). Points: It helps for some matrices and not others, but not blocking is part of the implementation space so we never do worse. We’re close to the bounds. We should emphasize that the Intel compiler is generating the code here, so kudos to the Intel compiler group.
273
Summary of Results: Pentium III (3/3)
And our heuristic always chooses a good block size (includes 1x1) in practice. Similar results for other platforms in the “Extra Slides” section.
274
Execution Time Breakdown (PAPI): Matrix 40
275
Preliminary Results (Matrix Set 1): Itanium 2
Dense FEM FEM (var) Assorted LP
276
Tuning Sparse Triangular Solve (SpTS)
Compute x = L^(-1)*b where L is sparse lower triangular, x & b dense. L from sparse LU has rich dense substructure: the dense trailing triangle can account for 20-90% of the matrix non-zeros. SpTS optimizations: split into a sparse trapezoid and a dense trailing triangle; use tuned dense BLAS (DTRSV) on the dense triangle; use Sparsity register blocking on the sparse part. Tuning parameters: size of the dense trailing triangle, register block size.
277
Sparse Kernels and Optimizations
Sparse matrix-vector multiply (SpMV): y=A*x Sparse triangular solve (SpTS): x=T-1*b y=AAT*x, y=ATA*x Powers (y=Ak*x), sparse triple-product (R*A*RT), … Optimization techniques (implementation space) Register blocking Cache blocking Multiple dense vectors (x) A has special structure (e.g., symmetric, banded, …) Hybrid data structures (e.g., splitting, switch-to-dense, …) Matrix reordering How and when do we search? Off-line: Benchmark implementations Run-time: Estimate matrix properties, evaluate performance models based on benchmark data
278
Cache Blocked SpMV on LSI Matrix: Ultra 2i
10k x 255k 3.7M non-zeros Baseline: 16 Mflop/s Best block size & performance: 16k x 64k 28 Mflop/s Best block size on this platform: Optimal size is machine dependent.
279
Cache Blocking on LSI Matrix: Pentium 4
10k x 255k 3.7M non-zeros Baseline: 44 Mflop/s Best block size & performance: 16k x 16k 210 Mflop/s LSI matrix has nearly randomly distributed non-zeros.
280
Cache Blocked SpMV on LSI Matrix: Itanium
10k x 255k 3.7M non-zeros Baseline: 25 Mflop/s Best block size & performance: 16k x 32k 72 Mflop/s
281
Cache Blocked SpMV on LSI Matrix: Itanium 2
10k x 255k 3.7M non-zeros Baseline: 170 Mflop/s Best block size & performance: 16k x 65k 275 Mflop/s
282
Inter-Iteration Sparse Tiling (1/3)
[Strout, et al., '01] Let A be 6x6 tridiagonal. Consider y = A^2x: t = Ax, y = At. Nodes: vector elements; edges: matrix elements a_ij. (Figure: dependency graph with node columns x1…x5, t1…t5, y1…y5.) This generalization of the Strout algorithm for an arbitrary A^k x is stolen from my quals talk.
283
Inter-Iteration Sparse Tiling (2/3)
[Strout, et al., '01] Let A be 6x6 tridiagonal. Consider y = A^2x: t = Ax, y = At. Nodes: vector elements; edges: matrix elements a_ij. Orange = everything needed to compute y1; reuse a11, a12. (Figure: dependency graph with node columns x1…x5, t1…t5, y1…y5.)
284
Inter-Iteration Sparse Tiling (3/3)
[Strout, et al., '01] Let A be 6x6 tridiagonal. Consider y = A^2x: t = Ax, y = At. Nodes: vector elements; edges: matrix elements a_ij. Orange = everything needed to compute y1; reuse a11, a12. Grey = y2, y3; reuse a23, a33, a43. Depending on cache placement policies, we could reuse even more grey edges, since they were read on the previous iteration. (Figure: dependency graph with node columns x1…x5, t1…t5, y1…y5.)
285
Inter-Iteration Sparse Tiling: Issues
Tile sizes (colored regions) grow with the number of iterations and with increasing out-degree. G is likely to have a few nodes with high out-degree (e.g., Yahoo). Mathematical tricks to limit tile size? Judicious dropping of edges [Ng '01]. The idea of dropping edges follows from: Andrew Ng, Alice Zheng, Michael I. Jordan, "Link Analysis, Eigenvectors, and Stability," IJCAI, 2001. (Figure: dependency graph with node columns x1…x5, t1…t5, y1…y5.)
286
Summary and Questions Need to understand matrix structure and machine
BeBOP: a suite of techniques to deal with different sparse structures and architectures. Google matrix problem: established techniques within an iteration, and ideas for inter-iteration optimizations; the mathematical structure of the problem may help. Questions: What is the structure of G? What are the computational bottlenecks? Enabling future computations, e.g., topic-sensitive PageRank (multiple-vector version) [Haveliwala '02]. See bebop.cs.berkeley.edu for more info, including more complete Itanium 2 results.
287
Exploiting Matrix Structure
Symmetry (numerical or structural): reuse matrix entries; can combine with register blocking, multiple vectors, … Matrix splitting: split the matrix, e.g., into r x c and 1 x 1 parts, with no fill overhead. Large matrices with random structure (e.g., Latent Semantic Indexing (LSI) matrices): technique is cache blocking; store the matrix as 2^i x 2^j sparse submatrices; effective when the x vector is large; currently, search to find the fastest size. Symmetry could be numerical or structural. In matrix splitting, we try to overcome the problem of too much fill by splitting into "true" r x c blocks (i.e., without fill) plus a remainder in conventional (CSR) storage.
288
Symmetric SpMV Performance: Pentium 4
289
SpMV with Split Matrices: Ultra 2i
290
Cache Blocking on Random Matrices: Itanium
Speedup on four banded random matrices.
291
Sparse Kernels and Optimizations
Sparse matrix-vector multiply (SpMV): y=A*x Sparse triangular solve (SpTS): x=T-1*b y=AAT*x, y=ATA*x Powers (y=Ak*x), sparse triple-product (R*A*RT), … Optimization techniques (implementation space) Register blocking Cache blocking Multiple dense vectors (x) A has special structure (e.g., symmetric, banded, …) Hybrid data structures (e.g., splitting, switch-to-dense, …) Matrix reordering How and when do we search? Off-line: Benchmark implementations Run-time: Estimate matrix properties, evaluate performance models based on benchmark data
292
Register Blocked SpMV: Pentium III
293
Register Blocked SpMV: Ultra 2i
294
Register Blocked SpMV: Power3
295
Register Blocked SpMV: Itanium
296
Possible Optimization Techniques
Within an iteration, i.e., computing (G+uu^T)*x once: cache block G*x; on linear programming matrices and matrices with random structure (e.g., LSI), 1.5-4x speedups; the best block size is matrix- and machine-dependent. Reordering and/or splitting of G to separate dense structure (rows, columns, blocks). Between iterations, e.g., (G+uu^T)^2 x: (G+uu^T)^2 x = G^2 x + (Gu)(u^T x) + u(u^T G x) + u(u^T u)(u^T x). Compute Gu, u^T G, and u^T u once for all iterations. G^2 x: inter-iteration tiling to read G only once. We have established techniques for efficiently computing within an iteration; what about computing across iterations?
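A small sketch of this between-iterations optimization: Gu, u^T G, and u^T u are computed once, and each subsequent application of (G+uu^T)^2 costs two SpMVs with G plus a few vector operations. The names are illustrative only.

import numpy as np

def setup(G, u):
    # Pieces of (G + u u^T)^2 that do not change across iterations
    return {"Gu": G @ u, "Gtu": G.T @ u, "utu": float(u @ u)}

def apply_squared(G, u, pre, x):
    # y = (G + u u^T)^2 x
    #   = G(Gx) + (Gu)(u^T x) + u((u^T G) x) + u (u^T u)(u^T x)
    Gx = G @ x
    utx = float(u @ x)
    return (G @ Gx                          # G^2 x: two SpMVs (tiling could fuse them)
            + pre["Gu"] * utx
            + u * float(pre["Gtu"] @ x)     # (u^T G) x = (G^T u) . x
            + u * (pre["utu"] * utx))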
297
Multiple Vector Performance: Itanium
298
Sparse Kernels and Optimizations
Sparse matrix-vector multiply (SpMV): y=A*x Sparse triangular solve (SpTS): x=T-1*b y=AAT*x, y=ATA*x Powers (y=Ak*x), sparse triple-product (R*A*RT), … Optimization techniques (implementation space) Register blocking Cache blocking Multiple dense vectors (x) A has special structure (e.g., symmetric, banded, …) Hybrid data structures (e.g., splitting, switch-to-dense, …) Matrix reordering How and when do we search? Off-line: Benchmark implementations Run-time: Estimate matrix properties, evaluate performance models based on benchmark data
299
SpTS Performance: Itanium
(See POHLL ’02 workshop paper, at ICS ’02.)
300
Sparse Kernels and Optimizations
Sparse matrix-vector multiply (SpMV): y=A*x Sparse triangular solve (SpTS): x=T-1*b y=AAT*x, y=ATA*x Powers (y=Ak*x), sparse triple-product (R*A*RT), … Optimization techniques (implementation space) Register blocking Cache blocking Multiple dense vectors (x) A has special structure (e.g., symmetric, banded, …) Hybrid data structures (e.g., splitting, switch-to-dense, …) Matrix reordering How and when do we search? Off-line: Benchmark implementations Run-time: Estimate matrix properties, evaluate performance models based on benchmark data
301
Optimizing AAT*x Kernel: y=AAT*x, where A is sparse, x & y dense
Arises in linear programming and in computation of the SVD. Conventional implementation: compute z = A^T*x, then y = A*z. Elements of A can be reused: writing A = [a_1, a_2, …] by columns, y = Σ_k a_k (a_k^T x), so each column a_k is read only once. When the a_k represent blocks of columns, we can also apply register blocking.
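A sketch of the reuse described here, assuming A is stored by columns (CSC): each column a_k is read once and used both for the dot product a_k^T x and for the update of y. This is illustrative Python, not the tuned kernel.

import numpy as np
import scipy.sparse as sp

def aat_x_fused(A, x):
    # y = A*(A^T*x) computed as y = sum_k a_k (a_k^T x) over columns a_k of A
    A = A.tocsc()
    y = np.zeros(A.shape[0])
    for k in range(A.shape[1]):
        start, end = A.indptr[k], A.indptr[k + 1]
        rows, vals = A.indices[start:end], A.data[start:end]
        t = np.dot(vals, x[rows])        # t = a_k^T x
        y[rows] += t * vals              # y += t * a_k (column a_k reused from registers/cache)
    return y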
302
Optimized AAT*x Performance: Pentium III
303
Current Directions Applying new optimizations
Other split data structures (variable block, diagonal, …) Matrix reordering to create block structure Structural symmetry New kernels (triple product RART, powers Ak, …) Tuning parameter selection Building an automatically tuned sparse matrix library Extending the Sparse BLAS Leverage existing sparse compilers as code generation infrastructure More thoughts on this topic tomorrow
304
Related Work Automatic performance tuning systems Code generation
PHiPAC [Bilmes, et al., ’97], ATLAS [Whaley & Dongarra ’98] FFTW [Frigo & Johnson ’98], SPIRAL [Pueschel, et al., ’00], UHFFT [Mirkovic and Johnsson ’00] MPI collective operations [Vadhiyar & Dongarra ’01] Code generation FLAME [Gunnels & van de Geijn, ’01] Sparse compilers: [Bik ’99], Bernoulli [Pingali, et al., ’97] Generic programming: Blitz++ [Veldhuizen ’98], MTL [Siek & Lumsdaine ’98], GMCL [Czarnecki, et al. ’98], … Sparse performance modeling [Temam & Jalby ’92], [White & Saddayappan ’97], [Navarro, et al., ’96], [Heras, et al., ’99], [Fraguela, et al., ’99], …
305
More Related Work Compiler analysis, models Sparse BLAS interfaces
CROPS [Carter, Ferrante, et al.]; Serial sparse tiling [Strout ’01] TUNE [Chatterjee, et al.] Iterative compilation [O’Boyle, et al., ’98] Broadway compiler [Guyer & Lin, ’99] [Brewer ’95], ADAPT [Voss ’00] Sparse BLAS interfaces BLAST Forum (Chapter 3) NIST Sparse BLAS [Remington & Pozo ’94]; SparseLib++ SPARSKIT [Saad ’94] Parallel Sparse BLAS [Fillipone, et al. ’96]
306
Context: Creating High-Performance Libraries
Application performance dominated by a few computational kernels Today: Kernels hand-tuned by vendor or user Performance tuning challenges Performance is a complicated function of kernel, architecture, compiler, and workload Tedious and time-consuming Successful automated approaches Dense linear algebra: ATLAS/PHiPAC Signal processing: FFTW/SPIRAL/UHFFT
307
Cache Blocked SpMV on LSI Matrix: Itanium
308
Sustainable Memory Bandwidth
309
Multiple Vector Performance: Pentium 4
310
Multiple Vector Performance: Itanium
311
Multiple Vector Performance: Pentium 4
312
Optimized AAT*x Performance: Ultra 2i
313
Optimized AAT*x Performance: Pentium 4
314
Tuning Pays Off—PHiPAC
315
Tuning pays off – ATLAS Extends applicability of PHIPAC; Incorporated in Matlab (with rest of LAPACK)
316
Register Tile Sizes (Dense Matrix Multiply)
333 MHz Sun Ultra 2i 2-D slice of 3-D space; implementations color-coded by performance in Mflop/s 16 registers, but 2-by-3 tile size fastest A 2-D slice of the 3-D core matmul space, with all other generator options fixed. This shows the performance (Mflop/s) of various core matmul implementations on a small in-L2 cache workload. The “best” is the dark red square at (m0=2,k0=1,n0=8), which achieved 620 Mflop/s. This experiment was performed on the Sun Ultra-10 workstation (333 MHz Ultra II processor, 2 MB L2 cache) using the Sun cc compiler v5.0 with the flags -dalign -xtarget=native -xO5 -xarch=v8plusa. The space is discrete and highly irregular. (This is not even the worst example!)
317
High Precision GEMV (XBLAS)
318
High Precision Algorithms (XBLAS)
Double-double (a high-precision word represented as a pair of doubles). Many variations on these algorithms; we currently use Bailey's. Exploiting extra-wide registers: suppose s(1), …, s(n) have f-bit fractions and SUM has an F-bit fraction with F > f. Consider the following algorithm for S = Σ_{i=1..n} s(i): sort so that |s(1)| ≥ |s(2)| ≥ … ≥ |s(n)|; SUM = 0; for i = 1 to n, SUM = SUM + s(i); end for; sum = SUM. Theorem (D., Hida): suppose F < 2f (less than double the precision). If n ≤ 2^(F-f) + 1, then the error is ≤ 1.5 ulps. If n = 2^(F-f) + 2, then the error is ≤ 2^(2f-F) ulps (can be ≥ 1). If n ≥ 2^(F-f) + 3, then the error can be arbitrary (S ≠ 0 but sum = 0). Examples: s(i) double (f=53), SUM double-extended (F=64): accurate if n ≤ 2049. Dot product of single-precision x(i) and y(i): s(i) = x(i)*y(i) (f = 2*24 = 48), SUM double-extended (F=64): accurate if n ≤ 65537.
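A minimal sketch of the double-double idea using Knuth's error-free two-sum (Python floats are IEEE doubles, so the same trick applies; this is only an illustration, not the XBLAS implementation).

def two_sum(a, b):
    # Error-free transformation: returns (s, e) with s = fl(a+b) and a+b = s+e exactly
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def dd_sum(xs):
    # Sum doubles, keeping the result as a double-double (hi, lo) pair
    hi, lo = 0.0, 0.0
    for x in xs:
        hi, e = two_sum(hi, x)
        lo += e                    # accumulate the rounding errors
    return hi, lo

vals = [1e16, 1.0, 1.0, -1e16]
print(sum(vals))        # 0.0: the 1.0's are lost in plain double summation
hi, lo = dd_sum(vals)
print(hi + lo)          # 2.0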
319
More Extra Slides
320
Opteron (1.4GHz, 2.8GFlop peak)
Nonsymmetric peak = 504 MFlops Symmetric peak = 612 MFlops Beat ATLAS DGEMV’s 365 Mflops
321
Awards Best Paper, Intern. Conf. Parallel Processing, 2004
“Performance models for evaluation and automatic performance tuning of symmetric sparse matrix-vector multiply” Best Student Paper, Intern. Conf. Supercomputing, Workshop on Performance Optimization via High-Level Languages and Libraries, 2003 Best Student Presentation too, to Richard Vuduc “Automatic performance tuning and analysis of sparse triangular solve” Finalist, Best Student Paper, Supercomputing 2002 To Richard Vuduc “Performance Optimization and Bounds for Sparse Matrix-vector Multiply” Best Presentation Prize, MICRO-33: 3rd ACM Workshop on Feedback-Directed Dynamic Optimization, 2000 “Statistical Modeling of Feedback Data in an Automatic Tuning System”
322
Accuracy of the Tuning Heuristics (4/4)
323
Can Match DGEMV Performance
324
Fraction of Upper Bound Across Platforms
325
Achieved Performance and Machine Balance
Machine balance [Callahan '88; McCalpin '95]: Balance = Peak Flop Rate / Bandwidth (flops / double). Lower is better (i.e., one can hope to reach a higher fraction of the peak flop rate). Ideal balance for mat-vec: ≤ 2 flops / double; for SpMV, even less.
327
Where Does the Time Go? Most time assigned to memory
Caches “disappear” when line sizes are equal Strictly increasing line sizes
328
Execution Time Breakdown: Matrix 40
329
Execution Time Breakdown (PAPI): Matrix 40
330
Speedups with Increasing Line Size
331
Summary: Performance Upper Bounds
What is the best we can do for SpMV? Limits to low-level tuning of blocked implementations Refinements? What machines are good for SpMV? Partial answer: balance characterization Architectural consequences? Help to have strictly increasing line sizes
332
Evaluating algorithms and machines for SpMV
Metrics Speedups Mflop/s (“fair” flops) Fraction of peak Questions Speedups are good, but what is “the best?” Independent of instruction scheduling, selection Can SpMV be further improved or not? What machines are “good” for SpMV?
333
Tuning Dense BLAS —PHiPAC
334
Tuning Dense BLAS– ATLAS
Extends applicability of PHIPAC; Incorporated in Matlab (with rest of LAPACK)
335
Fast algorithms are rare.
Consider Intel Pentium 4/1500: First circle: only 10% of codes run at >= 63% of peak Second circle: only 1% of codes run at >= 73% of peak Third circle: only .2% of codes run at >= 90% of peak
336
Statistical Models for Automatic Tuning
Idea 1: Statistical criterion for stopping a search A general search model Generate implementation Measure performance Repeat Stop when probability of being within e of optimal falls below threshold Can estimate distribution on-line Idea 2: Statistical performance models Problem: Choose 1 among m implementations at run-time Sample performance off-line, build statistical model
337
SpMV Historical Trends: Fraction of Peak
338
Motivation for Automatic Performance Tuning of SpMV
Historical trends Sparse matrix-vector multiply (SpMV): 10% of peak or less Performance depends on machine, kernel, matrix Matrix known at run-time Best data structure + implementation can be surprising Our approach: empirical performance modeling and algorithm search Historical trends: we’ll look at the data that supports these claims shortly
339
Accuracy of the Tuning Heuristics (3/4)
340
Accuracy of the Tuning Heuristics (3/4)
DGEMV
341
Sparse CALA Summary Optimizing one SpMV too limiting
Take k steps of Krylov subspace method GMRES, CG, Lanczos, Arnoldi Assume matrix “well-partitioned,” with modest surface-to-volume ratio Parallel implementation Conventional: O(k log p) messages CALA: O(log p) messages - optimal Serial implementation Conventional: O(k) moves of data from slow to fast memory CALA: O(1) moves of data – optimal Can incorporate some preconditioners Need to be able to “compress” interactions between distant i, j Hierarchical, semiseparable matrices … Lots of speed up possible (measured and modeled) Price: some redundant computation
342
Potential Impact on Applications: T3P
Application: accelerator design [Ko] 80% of time spent in SpMV Relevant optimization techniques Symmetric storage Register blocking On Single Processor Itanium 2 1.68x speedup 532 Mflops, or 15% of 3.6 GFlop peak 4.4x speedup with multiple (8) vectors 1380 Mflops, or 38% of peak
343
Locally Dependent Entries for [x,Ax], A tridiagonal
2 processors: Proc 1 and Proc 2. (Figure: rows x, Ax, A^2x, …, A^8x split across the two processors; the highlighted entries of [x, Ax] can be computed without communication.)
344
Locally Dependent Entries for [x,Ax,A2x], A tridiagonal
2 processors: Proc 1 and Proc 2. (Figure: rows x, Ax, A^2x, …, A^8x split across the two processors; the highlighted entries of [x, Ax, A^2x] can be computed without communication.)
345
Locally Dependent Entries for [x,Ax,…,A3x], A tridiagonal
2 processors: Proc 1 and Proc 2. (Figure: rows x, Ax, A^2x, …, A^8x split across the two processors; the highlighted entries of [x, Ax, …, A^3x] can be computed without communication.)
346
Locally Dependent Entries for [x,Ax,…,A4x], A tridiagonal
2 processors: Proc 1 and Proc 2. (Figure: rows x, Ax, A^2x, …, A^8x split across the two processors; the highlighted entries of [x, Ax, …, A^4x] can be computed without communication.)
347
Sequential [x,Ax,…,A4x], with memory hierarchy
One read of matrix from slow memory, not k=4 Minimizes words moved = bandwidth cost No redundant work