
1 CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel www.cs.berkeley.edu/~demmel/cs267_Spr14

2 Outline Motivation for Automatic Performance Tuning Results for sparse matrix kernels OSKI = Optimized Sparse Kernel Interface –pOSKI for multicore Tuning Higher Level Algorithms Future Work, Class Projects BeBOP: Berkeley Benchmarking and Optimization Group –Many results shown from current and former members –Meet weekly W 12-1:30, in 380 Soda

3 Motivation for Automatic Performance Tuning Writing high performance software is hard –Make programming easier while getting high speed Ideal: program in your favorite high level language (Matlab, Python, PETSc…) and get a high fraction of peak performance Reality: Best algorithm (and its implementation) can depend strongly on the problem, computer architecture, compiler,… –Best choice can depend on knowing a lot of applied mathematics and computer science How much of this can we teach? How much of this can we automate?

4 Examples of Automatic Performance Tuning (1) Dense BLAS –Sequential –PHiPAC (UCB), then ATLAS (UTK) (used in Matlab) –math-atlas.sourceforge.net/ –Internal vendor tools Fast Fourier Transform (FFT) & variations –Sequential and Parallel –FFTW (MIT) –www.fftw.org Digital Signal Processing –SPIRAL: www.spiral.net (CMU) Communication Collectives (UCB, UTK) Rose (LLNL), Bernoulli (Cornell), Telescoping Languages (Rice), … More projects, conferences, government reports, …

5 Examples of Automatic Performance Tuning (2) What do dense BLAS, FFTs, signal processing, MPI reductions have in common? –Can do the tuning off-line: once per architecture, algorithm –Can take as much time as necessary (hours, a week…) –At run-time, algorithm choice may depend only on few parameters Matrix dimension, size of FFT, etc.

6 Tuning Register Tile Sizes (Dense Matrix Multiply) 333 MHz Sun Ultra 2i 2-D slice of 3-D space; implementations color-coded by performance in Mflop/s 16 registers, but 2-by-3 tile size fastest Needle in a haystack

7 Example: Select a Matmul Implementation

8 Example: Support Vector Classification

9 Machine Learning in Automatic Performance Tuning References –Statistical Models for Empirical Search-Based Performance Tuning (International Journal of High Performance Computing Applications, 18 (1), pp. 65-94, February 2004) Richard Vuduc, J. Demmel, and Jeff A. Bilmes. –Predicting and Optimizing System Utilization and Performance via Statistical Machine Learning (Computer Science PhD Thesis, University of California, Berkeley. UCB//EECS-2009-181 ) Archana Ganapathi

10 Machine Learning in Automatic Performance Tuning More references –Machine Learning for Predictive Autotuning with Boosted Regression Trees, (Innovative Parallel Computing, 2012) J. Bergstra et al. –Practical Bayesian Optimization of Machine Learning Algorithms, (NIPS 2012) J. Snoek et al –OpenTuner: An Extensible Framework for Program Autotuning, (dspace.mit.edu/handle/1721.1/81958) S. Amarasinghe et al

11 Examples of Automatic Performance Tuning (3) What do dense BLAS, FFTs, signal processing, MPI reductions have in common? –Can do the tuning off-line: once per architecture, algorithm –Can take as much time as necessary (hours, a week…) –At run-time, algorithm choice may depend only on few parameters Matrix dimension, size of FFT, etc. Can't always do off-line tuning –Algorithm and implementation may strongly depend on data only known at run-time –Ex: Sparse matrix nonzero pattern determines both best data structure and implementation of Sparse-matrix-vector-multiplication (SpMV) –Part of search for best algorithm must be done (very quickly!) at run-time

12 Source: Accelerator Cavity Design Problem (Ko via Husbands)

13 Linear Programming Matrix …

14 A Sparse Matrix You Encounter Every Day

15 SpMV with Compressed Sparse Row (CSR) Storage Matrix-vector multiply kernel: y(i) ← y(i) + A(i,j)*x(j) for each row i for k=ptr[i] to ptr[i+1]-1 do y[i] = y[i] + val[k]*x[ind[k]]
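For reference, a minimal C sketch of the CSR kernel above, assuming the ptr/ind/val arrays named on the slide (ptr has m+1 entries for an m-row matrix):

/* Minimal CSR SpMV, y = y + A*x, following the kernel on the slide.
 * ptr[i]..ptr[i+1]-1 index the nonzeros of row i; ind[] holds the column
 * indices and val[] the values.  (Sketch for illustration only.) */
void spmv_csr(int m, const int *ptr, const int *ind, const double *val,
              const double *x, double *y)
{
    for (int i = 0; i < m; i++) {                 /* for each row i    */
        double yi = y[i];
        for (int k = ptr[i]; k < ptr[i + 1]; k++) /* nonzeros of row i */
            yi += val[k] * x[ind[k]];
        y[i] = yi;
    }
}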

16 Example: The Difficulty of Tuning n = 21200 nnz = 1.5 M kernel: SpMV Source: NASA structural analysis problem

17 Example: The Difficulty of Tuning n = 21200 nnz = 1.5 M kernel: SpMV Source: NASA structural analysis problem 8x8 dense substructure

18 Taking advantage of block structure in SpMV Bottleneck is time to get matrix from memory –Only 2 flops for each nonzero in matrix Don’t store each nonzero with index, instead store each nonzero r-by-c block with index –Storage drops by up to 2x, if rc >> 1, all 32-bit quantities –Time to fetch matrix from memory decreases Change both data structure and algorithm –Need to pick r and c –Need to change algorithm accordingly In example, is r=c=8 best choice? –Minimizes storage, so looks like a good idea…
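To make the change of data structure and algorithm concrete, here is a hedged sketch of a register-blocked (BCSR) SpMV with r = c = 2 and the block multiply fully unrolled; the array names (b_ptr, b_ind, b_val) are illustrative, not from any particular library, and the matrix dimensions are assumed padded to multiples of 2:

/* Sketch of 2x2 register-blocked (BCSR) SpMV, y += A*x.
 * Block row I covers scalar rows 2I and 2I+1.  b_ptr/b_ind index the 2x2
 * blocks; b_val stores each block's 4 entries contiguously (row-major).
 * Explicit zeros inside a block are stored and multiplied. */
void spmv_bcsr_2x2(int mb, const int *b_ptr, const int *b_ind,
                   const double *b_val, const double *x, double *y)
{
    for (int I = 0; I < mb; I++) {
        double y0 = y[2*I], y1 = y[2*I + 1];
        for (int k = b_ptr[I]; k < b_ptr[I + 1]; k++) {
            const double *b = b_val + 4*k;   /* this 2x2 block          */
            int j = 2 * b_ind[k];            /* first column of block   */
            double x0 = x[j], x1 = x[j + 1]; /* reuse x in registers    */
            y0 += b[0]*x0 + b[1]*x1;         /* unrolled 2x2 multiply   */
            y1 += b[2]*x0 + b[3]*x1;
        }
        y[2*I] = y0;
        y[2*I + 1] = y1;
    }
}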

19 Speedups on Itanium 2: The Need for Search Reference Best: 4x2 Mflop/s

20 Register Profile: Itanium 2 190 Mflop/s 1190 Mflop/s

21 SpMV Performance (Matrix #2): Generation 1 Power3 - 13%, Power4 - 14%, Itanium 2 - 31%, Itanium 1 - 7% 195 Mflop/s, 100 Mflop/s, 703 Mflop/s, 469 Mflop/s, 225 Mflop/s, 103 Mflop/s, 1.1 Gflop/s, 276 Mflop/s

22 Register Profiles: IBM and Intel IA-64 Power3 - 17%, Power4 - 16%, Itanium 2 - 33%, Itanium 1 - 8% 252 Mflop/s, 122 Mflop/s, 820 Mflop/s, 459 Mflop/s, 247 Mflop/s, 107 Mflop/s, 1.2 Gflop/s, 190 Mflop/s

23 SpMV Performance (Matrix #2): Generation 2 Ultra 2i - 9%, Ultra 3 - 5%, Pentium III-M - 15%, Pentium III - 19% 63 Mflop/s, 35 Mflop/s, 109 Mflop/s, 53 Mflop/s, 96 Mflop/s, 42 Mflop/s, 120 Mflop/s, 58 Mflop/s

24 Register Profiles: Sun and Intel x86 Ultra 2i - 11%, Ultra 3 - 5%, Pentium III-M - 15%, Pentium III - 21% 72 Mflop/s, 35 Mflop/s, 90 Mflop/s, 50 Mflop/s, 108 Mflop/s, 42 Mflop/s, 122 Mflop/s, 58 Mflop/s

25 Another example of tuning challenges More complicated non-zero structure in general N = 16614 NNZ = 1.1M

26 Zoom in to top corner More complicated non-zero structure in general N = 16614 NNZ = 1.1M

27 3x3 blocks look natural, but… More complicated non-zero structure in general Example: 3x3 blocking –Logical grid of 3x3 cells But would lead to lots of “fill-in”

28 Extra Work Can Improve Efficiency! More complicated non-zero structure in general Example: 3x3 blocking –Logical grid of 3x3 cells –Fill-in explicit zeros –Unroll 3x3 block multiplies –"Fill ratio" = 1.5 On Pentium III: 1.5x speedup! –Actual Mflop rate is 1.5² = 2.25x higher

29 Automatic Register Block Size Selection Selecting the r x c block size –Off-line benchmark Precompute Mflops(r,c) using dense A for each r x c Once per machine/architecture –Run-time “search” Sample A to estimate Fill(r,c) for each r x c –Run-time heuristic model Choose r, c to minimize time ~ Fill(r,c) / Mflops(r,c)
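A minimal sketch of the run-time heuristic described above, assuming Mflops(r,c) comes from the off-line dense benchmark and Fill(r,c) from run-time sampling; the array layout and the 8x8 search range are assumptions for illustration:

/* Choose (r,c) minimizing estimated time ~ Fill(r,c) / Mflops(r,c). */
#define R_MAX 8
#define C_MAX 8

void choose_block_size(const double mflops[R_MAX][C_MAX],
                       const double fill[R_MAX][C_MAX],
                       int *r_best, int *c_best)
{
    double best = 1e300;                     /* best (smallest) time estimate */
    for (int r = 1; r <= R_MAX; r++) {
        for (int c = 1; c <= C_MAX; c++) {
            double t = fill[r-1][c-1] / mflops[r-1][c-1];
            if (t < best) { best = t; *r_best = r; *c_best = c; }
        }
    }
}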

30 Accurate and Efficient Adaptive Fill Estimation Idea: Sample matrix –Fraction of matrix to sample: s ∈ [0,1] –Cost ~ O(s * nnz) –Control cost by controlling s Search at run-time: the constant matters! Control s automatically by computing statistical confidence intervals –Idea: Monitor variance Cost of tuning –Lower bound: cost of converting the matrix ≈ 5 to 40 unblocked SpMVs –Heuristic: 1 to 11 SpMVs
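One way the sampling step could look, as a rough sketch (the adaptive control of s via confidence intervals is omitted; names and structure are illustrative only):

#include <stdlib.h>

/* Scan a fraction of the block rows, count how many r x c blocks would be
 * created, and return the fill ratio (blocks * r * c) / (true nonzeros
 * scanned).  Uses a dense "seen this column block" scratch array for
 * clarity. */
double estimate_fill(int m, int n, const int *ptr, const int *ind,
                     int r, int c, double sample_frac)
{
    int stride = sample_frac > 0 ? (int)(1.0 / sample_frac) : 1;
    if (stride < 1) stride = 1;
    char *seen = calloc((n + c - 1) / c, 1);   /* per column-block flags   */
    long nnz_scanned = 0, blocks = 0;

    for (int I = 0; I < m / r; I += stride) {  /* sampled block rows       */
        int lo = ptr[I * r], hi = ptr[(I + 1) * r];
        for (int k = lo; k < hi; k++) {
            int jb = ind[k] / c;               /* column block index       */
            if (!seen[jb]) { seen[jb] = 1; blocks++; }
        }
        nnz_scanned += hi - lo;
        for (int k = lo; k < hi; k++)          /* reset flags for next row */
            seen[ind[k] / c] = 0;
    }
    free(seen);
    return nnz_scanned ? (double)blocks * r * c / nnz_scanned : 1.0;
}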

31 Accuracy of the Tuning Heuristics (1/4) NOTE: “Fair” flops used (ops on explicit zeros not counted as “work”) See p. 375 of Vuduc’s thesis for matrices

32 Accuracy of the Tuning Heuristics (2/4)

33 DGEMV

34 Upper Bounds on Performance for blocked SpMV P = (flops) / (time) –Flops = 2 * nnz(A) Lower bound on time: Two main assumptions –1. Count memory ops only (streaming) –2. Count only compulsory, capacity misses: ignore conflicts Account for line sizes Account for matrix size and nnz Charge minimum access "latency" α_i at the L_i cache and α_mem at memory –e.g., Saavedra-Barrera and PMaC MAPS benchmarks

35 Example: L2 Misses on Itanium 2 Misses measured using PAPI [Browne ’00]

36 Example: Bounds on Itanium 2

37

38

39 Summary of Other Sequential Performance Optimizations Optimizations for SpMV –Register blocking (RB): up to 4x over CSR –Variable block splitting: 2.1x over CSR, 1.8x over RB –Diagonals: 2x over CSR –Reordering to create dense structure + splitting: 2x over CSR –Symmetry: 2.8x over CSR, 2.6x over RB –Cache blocking: 2.8x over CSR –Multiple vectors (SpMM): 7x over CSR –And combinations… Sparse triangular solve –Hybrid sparse/dense data structure: 1.8x over CSR Higher-level kernels –A·A^T·x, A^T·A·x: 4x over CSR, 1.8x over RB –A^2·x: 2x over CSR, 1.5x over RB –[A·x, A^2·x, A^3·x, .., A^k·x]

40 Example: Sparse Triangular Factor Raefsky4 (structural problem) + SuperLU + colmmd N=19779, nnz=12.6 M Dense trailing triangle: dim=2268, 20% of total nz Can be as high as 90+%! 1.8x over CSR

41 Cache Optimizations for AA^T*x Cache-level: Interleave multiplication by A, A^T –Only fetch A from memory once Register-level: a_i^T to be r×c block row, or diag row; dot product, "axpy"
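A sketch of the interleaving idea, written here for t = A^T·(A·x) with A in CSR: each row a_i is used for one dot product and one axpy back to back, so it is fetched from memory only once (the A·A^T·x case is analogous with a_i taken as columns of A):

/* y = A^T * (A * x) = sum_i a_i * (a_i . x), one pass over A.
 * y has length n (number of columns) and must be zeroed on entry. */
void ata_x_csr(int m, const int *ptr, const int *ind, const double *val,
               const double *x, double *y)
{
    for (int i = 0; i < m; i++) {
        double z = 0.0;
        for (int k = ptr[i]; k < ptr[i + 1]; k++)   /* dot product a_i . x */
            z += val[k] * x[ind[k]];
        for (int k = ptr[i]; k < ptr[i + 1]; k++)   /* axpy: y += z * a_i  */
            y[ind[k]] += z * val[k];
    }
}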

42 Example: Combining Optimizations (1/2) Register blocking, symmetry, multiple (k) vectors –Three low-level tuning parameters: r, c, v [Figure: Y += A·X with r×c register blocks; X and Y have k (= v) vector columns]

43 Example: Combining Optimizations (2/2) Register blocking, symmetry, and multiple vectors [Ben Lee @ UCB] –Symmetric, blocked, 1 vector Up to 2.6x over nonsymmetric, blocked, 1 vector –Symmetric, blocked, k vectors Up to 2.1x over nonsymmetric, blocked, k vectors Up to 7.3x over nonsymmetric, nonblocked, 1 vector –Symmetric Storage: up to 64.7% savings

44 Why so much about SpMV? Contents of the “Sparse Motif” What is “sparse linear algebra”? Direct solvers for Ax=b, least squares –Sparse Gaussian elimination, QR for least squares –How to choose: crd.lbl.gov/~xiaoye/SuperLU/SparseDirectSurvey.pdf Iterative solvers for Ax=b, least squares, Ax=λx, SVD –Used when SpMV only affordable operation on A – Krylov Subspace Methods –How to choose For Ax=b: www.netlib.org/templates/Templates.html For Ax=λx: www.cs.ucdavis.edu/~bai/ET/contents.html What about Multigrid? –In overlap of sparse and (un)structured grid motifs – details later

45 How to choose an iterative solver - example All methods (GMRES, CGS,CG…) depend on SpMV (or variations…) See www.netlib.org/templates/Templates.html for details

46 Motif/Dwarf: Common Computational Methods (Red Hot → Blue Cool)

47 Potential Impact on Applications: Omega3P Application: accelerator cavity design [Ko] Relevant optimization techniques –Symmetric storage –Register blocking –Reordering, to create more dense blocks Reverse Cuthill-McKee ordering to reduce bandwidth –Do Breadth-First-Search, number nodes in reverse order visited Traveling Salesman Problem-based ordering to create blocks –Nodes = columns of A –Weights(u, v) = no. of nonzeros u, v have in common –Tour = ordering of columns –Choose maximum weight tour –See [Pinar & Heath ’97] 2.1x speedup on Power 4

48 Source: Accelerator Cavity Design Problem (Ko via Husbands)

49 Post-RCM Reordering

50 100x100 Submatrix Along Diagonal

51 Before: Green + Red After: Green + Blue “Microscopic” Effect of RCM Reordering

52 “Microscopic” Effect of Combined RCM+TSP Reordering Before: Green + Red After: Green + Blue

53 (Omega3P)

54 How do permutations affect algorithms? A = original matrix, A_P = A with permuted rows, columns Naïve approach: permute x, multiply y = A_P x, permute y Faster way to solve Ax=b –Write A_P = P^T A P where P is a permutation matrix –Solve A_P x_P = P^T b for x_P, using SpMV with A_P, then let x = P x_P –Only need to permute vectors twice, not twice per iteration Faster way to solve Ax=λx –A and A_P have same eigenvalues, no vectors to permute! –A_P x_P = λ x_P implies Ax = λx where x = P x_P Where else do optimizations change higher level algorithms? More later…
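A small sketch of the "permute vectors only twice" idea, assuming perm[i] gives the position of original row/column i in the reordered matrix A_P, and iterative_solve() stands for any SpMV-based solver that works entirely with A_P (both names are illustrative placeholders):

#include <stdlib.h>

static void reorder(int n, const int *perm, const double *src, double *dst)
{
    for (int i = 0; i < n; i++)
        dst[perm[i]] = src[i];          /* old index i -> new index perm[i] */
}

void solve_with_reordering(int n, const int *perm, const double *b, double *x,
                           void (*iterative_solve)(int n, const double *bP,
                                                   double *xP))
{
    double *bP = malloc(n * sizeof *bP);
    double *xP = malloc(n * sizeof *xP);
    reorder(n, perm, b, bP);            /* permute the right-hand side once */
    iterative_solve(n, bP, xP);         /* every SpMV inside uses A_P       */
    for (int i = 0; i < n; i++)
        x[i] = xP[perm[i]];             /* map the solution back once       */
    free(bP);
    free(xP);
}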

55 55 Tuning SpMV on Multicore

56 56 Multicore SMPs Used AMD Opteron 2356 (Barcelona)Intel Xeon E5345 (Clovertown) IBM QS20 Cell BladeSun T2+ T5140 (Victoria Falls) Source: Sam Williams

57 57 Multicore SMPs Used (Conventional cache-based memory hierarchy) AMD Opteron 2356 (Barcelona)Intel Xeon E5345 (Clovertown) Sun T2+ T5140 (Victoria Falls)IBM QS20 Cell Blade Source: Sam Williams

58 58 Multicore SMPs Used (Local store-based memory hierarchy) IBM QS20 Cell BladeSun T2+ T5140 (Victoria Falls) AMD Opteron 2356 (Barcelona)Intel Xeon E5345 (Clovertown) Source: Sam Williams

59 59 Multicore SMPs Used (CMT = Chip-MultiThreading) AMD Opteron 2356 (Barcelona)Intel Xeon E5345 (Clovertown) IBM QS20 Cell BladeSun T2+ T5140 (Victoria Falls) Source: Sam Williams

60 60 Multicore SMPs Used (threads) AMD Opteron 2356 (Barcelona)Intel Xeon E5345 (Clovertown) IBM QS20 Cell BladeSun T2+ T5140 (Victoria Falls) 8 threads 16 * threads 128 threads * SPEs only Source: Sam Williams

61 61 Multicore SMPs Used (Non-Uniform Memory Access - NUMA) AMD Opteron 2356 (Barcelona)Intel Xeon E5345 (Clovertown) IBM QS20 Cell BladeSun T2+ T5140 (Victoria Falls) * SPEs only Source: Sam Williams

62 62 Multicore SMPs Used (peak double precision flops) AMD Opteron 2356 (Barcelona)Intel Xeon E5345 (Clovertown) IBM QS20 Cell BladeSun T2+ T5140 (Victoria Falls) 75 GFlop/s74 Gflop/s 29* GFlop/s 19 GFlop/s * SPEs only Source: Sam Williams

63 63 Multicore SMPs Used (Total DRAM bandwidth) AMD Opteron 2356 (Barcelona)Intel Xeon E5345 (Clovertown) IBM QS20 Cell BladeSun T2+ T5140 (Victoria Falls) 21 GB/s (read) 10 GB/s (write) 21 GB/s 51 GB/s 42 GB/s (read) 21 GB/s (write) * SPEs only Source: Sam Williams

64 64 Results from “Auto-tuning Sparse Matrix-Vector Multiplication (SpMV)” Samuel Williams, Leonid Oliker, Richard Vuduc, John Shalf, Katherine Yelick, James Demmel, "Optimization of Sparse Matrix-Vector Multiplication on Emerging Multicore Platforms", Supercomputing (SC), 2007.

65 65 Test matrices Suite of 14 matrices All bigger than the caches of our SMPs We’ll also include a median performance number Dense Protein FEM / Spheres FEM / Cantilever Wind Tunnel FEM / Harbor QCD FEM / Ship EconomicsEpidemiology FEM / Accelerator Circuitwebbase LP 2K x 2K Dense matrix stored in sparse format Well Structured (sorted by nonzeros/row) Poorly Structured hodgepodge Extreme Aspect Ratio (linear programming) Source: Sam Williams

66 SpMV Parallelization How do we parallelize a matrix-vector multiplication ? 66 Source: Sam Williams

67 SpMV Parallelization How do we parallelize a matrix-vector multiplication? We could parallelize by columns (sparse matrix times dense sub-vector) and in the worst case simplify the random access challenge, but: –each thread would need to store a temporary partial sum –and we would need to perform a reduction (inter-thread data dependency) 67 thread 0, thread 1, thread 2, thread 3 Source: Sam Williams

68 SpMV Parallelization How do we parallelize a matrix-vector multiplication? We could parallelize by columns (sparse matrix times dense sub-vector) and in the worst case simplify the random access challenge, but: –each thread would need to store a temporary partial sum –and we would need to perform a reduction (inter-thread data dependency) 68 thread 0, thread 1, thread 2, thread 3 Source: Sam Williams

69 SpMV Parallelization How do we parallelize a matrix-vector multiplication? By row blocks No inter-thread data dependencies, but random access to x 69 thread 0 thread 1 thread 2 thread 3 Source: Sam Williams
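A minimal sketch of this row-block parallelization; the study used Pthreads, but OpenMP is used here only to keep the example short, and rows are split evenly rather than by nonzero count:

#include <omp.h>

/* Each thread gets a contiguous block of rows, writes its own slice of y,
 * and reads the shared vector x at random. */
void spmv_csr_parallel(int m, const int *ptr, const int *ind,
                       const double *val, const double *x, double *y)
{
    #pragma omp parallel
    {
        int nthreads = omp_get_num_threads();
        int tid      = omp_get_thread_num();
        int lo = (int)((long)m * tid / nthreads);       /* my first row  */
        int hi = (int)((long)m * (tid + 1) / nthreads); /* one past last */
        for (int i = lo; i < hi; i++) {
            double yi = 0.0;
            for (int k = ptr[i]; k < ptr[i + 1]; k++)
                yi += val[k] * x[ind[k]];
            y[i] = yi;
        }
    }
}

A more careful version would split rows so that each thread gets roughly the same number of nonzeros rather than the same number of rows.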

70 70 SpMV Performance (simple parallelization) Out-of-the-box SpMV performance on a suite of 14 matrices Simplest solution = parallelization by rows Scalability isn't great. Can we do better? Naïve Pthreads Naïve Source: Sam Williams

71 Summary of Multicore Optimizations NUMA - Non-Uniform Memory Access –pin submatrices to memories close to cores assigned to them Prefetch – values, indices, and/or vectors –use exhaustive search on prefetch distance Matrix Compression – not just register blocking (BCSR) –32 or 16-bit indices, Block Coordinate format for submatrices Cache-blocking –2D partition of matrix, so needed parts of x,y fit in cache 71

72 NUMA (Data Locality for Matrices) On NUMA architectures, all large arrays should be partitioned either –explicitly (multiple malloc()’s + affinity) –implicitly (parallelize initialization and rely on first touch) You cannot partition on granularities less than the page size –512 elements on x86 –2M elements on Niagara For SpMV, partition the matrix and perform multiple malloc()’s Pin submatrices so they are co-located with the cores tasked to process them 72 Source: Sam Williams
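A sketch of the implicit (first-touch) option above: have each thread initialize the part of val/ind it will later multiply, so those pages are mapped near that thread's core. It assumes ptr is already built and that the same row partition is reused both when filling the matrix and in SpMV:

#include <omp.h>

void first_touch_init(int m, const int *ptr, int *ind, double *val)
{
    #pragma omp parallel
    {
        int nthreads = omp_get_num_threads();
        int tid      = omp_get_thread_num();
        int lo = (int)((long)m * tid / nthreads);
        int hi = (int)((long)m * (tid + 1) / nthreads);
        for (int k = ptr[lo]; k < ptr[hi]; k++) {
            val[k] = 0.0;   /* first write decides which socket owns the page */
            ind[k] = 0;
        }
    }
}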

73 NUMA (Data Locality for Matrices) 73 Source: Sam Williams

74 Prefetch for SpMV SW prefetch injects more MLP into the memory subsystem. Supplement HW prefetchers Can try to prefetch the –values –indices –source vector –or any combination thereof In general, should only insert one prefetch per cache line (works best on unrolled code)
for(all rows){
  y0 = 0.0; y1 = 0.0; y2 = 0.0; y3 = 0.0;
  for(all tiles in this row){
    PREFETCH(V+i+PFDistance);
    y0 += V[i  ]*X[C[i]];
    y1 += V[i+1]*X[C[i]];
    y2 += V[i+2]*X[C[i]];
    y3 += V[i+3]*X[C[i]];
  }
  y[r+0] = y0; y[r+1] = y1; y[r+2] = y2; y[r+3] = y3;
}
Source: Sam Williams

75 SpMV Performance (NUMA and Software Prefetching) –NUMA-aware allocation is essential on memory-bound NUMA SMPs. –Explicit software prefetching can boost bandwidth and change cache replacement policies –Cell PPEs are likely latency-limited. –Used exhaustive search for best prefetch distance Source: Sam Williams

76 Matrix Compression Goal: minimize memory traffic Register blocking –Choose block size to minimize memory traffic –Only power-of-2 block sizes –Simplifies search, achieves most of the possible speedup Shorter indices –32-bit, or 16-bit if possible Different sparse matrix formats –BCSR – Block compressed sparse row Like CSR but with register blocks –BCOO – Block coordinate Stores row and column index of each register block Better on very sparse sub-blocks (see cache blocking later)
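As a rough illustration of the compressed formats described here, a possible thread-local BCSR-like structure with 16-bit block-column indices stored relative to the submatrix's first column, which roughly halves index traffic versus 32-bit CSR indices. The field names are assumptions, not pOSKI's or the paper's actual data layout:

#include <stdint.h>

typedef struct {
    int       r, c;        /* register block size                        */
    int       mb;          /* number of block rows in this submatrix     */
    int       first_col;   /* global column offset of this submatrix     */
    int      *b_ptr;       /* mb+1 block-row pointers                    */
    uint16_t *b_ind;       /* 16-bit block-column indices (relative)     */
    double   *b_val;       /* r*c values per block, stored contiguously  */
} bcsr16_t;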

77 ILP/DLP vs Bandwidth In the multicore era, which is the bigger issue? –a lack of ILP/DLP (a major advantage of BCSR) –insufficient memory bandwidth per core On many architectures running low-arithmetic-intensity kernels, there is so little available memory bandwidth per core that you won't notice even a complete lack of ILP Perhaps we should concentrate on minimizing memory traffic rather than maximizing ILP/DLP Rather than benchmarking every combination, just select the register blocking that minimizes the matrix footprint. 77 Source: Sam Williams

78 Matrix Compression Strategies Register blocking creates small dense tiles –better ILP/DLP –reduced overhead per nonzero Let each thread select a unique register blocking In this work, –we only considered power-of-two register blocks –select the register blocking that minimizes memory traffic 78 Source: Sam Williams

79 Matrix Compression Strategies Where possible we may encode indices with less than 32 bits We may also select different matrix formats In this work, –we considered 16-bit and 32-bit indices (relative to thread’s start) –we explored BCSR/BCOO (GCSR in book chapter) 79 Source: Sam Williams

80 SpMV Performance (Matrix Compression) –After maximizing memory bandwidth, the only hope is to minimize memory traffic. –Compression: exploit register blocking, other formats, smaller indices –Use a traffic minimization heuristic rather than search –Benefit is clearly matrix-dependent. –Register blocking enables efficient software prefetching (one per cache line) Source: Sam Williams

81 Cache blocking for SpMV (Data Locality for Vectors) Store entire submatrices contiguously The columns spanned by each cache block are selected to use same space in cache, i.e. access same number of x(i) TLB blocking is a similar concept but instead of on 8 byte granularities, it uses 4KB granularities 81 thread 0 thread 1 thread 2 thread 3 Source: Sam Williams

82 Cache blocking for SpMV (Data Locality for Vectors) Cache-blocking sparse matrices is very different than cache-blocking dense matrices. Rather than changing loop bounds, store entire submatrices contiguously. The columns spanned by each cache block are selected so that all submatrices place the same pressure on the cache i.e. touch the same number of unique source vector cache lines TLB blocking is a similar concept but instead of on 64 byte granularities, it uses 4KB granularities 82 thread 0 thread 1 thread 2 thread 3 Source: Sam Williams

83 83 Auto-tuned SpMV Performance (cache and TLB blocking) Fully auto-tuned SpMV performance across the suite of matrices Why do some optimizations work better on some architectures? matrices with naturally small working sets architectures with giant caches +Cache/LS/TLB Blocking +Matrix Compression +SW Prefetching +NUMA/Affinity Naïve Pthreads Naïve Source: Sam Williams

84 84 Auto-tuned SpMV Performance (architecture specific optimizations) Fully auto-tuned SpMV performance across the suite of matrices Included SPE/local store optimized version Why do some optimizations work better on some architectures? +Cache/LS/TLB Blocking +Matrix Compression +SW Prefetching +NUMA/Affinity Naïve Pthreads Naïve Source: Sam Williams

85 85 Auto-tuned SpMV Performance (max speedup) Fully auto-tuned SpMV performance across the suite of matrices Included SPE/local store optimized version Why do some optimizations work better on some architectures? +Cache/LS/TLB Blocking +Matrix Compression +SW Prefetching +NUMA/Affinity Naïve Pthreads Naïve 2.7x 4.0x 2.9x 35x Source: Sam Williams

86 Optimized Sparse Kernel Interface - pOSKI bebop.cs.berkeley.edu/poski Provides sparse kernels automatically tuned for user's matrix & machine –BLAS-style functionality: SpMV, Ax & A^T y –Hides complexity of run-time tuning Based on OSKI – bebop.cs.berkeley.edu/oski –Autotuner for sequential sparse matrix operations: SpMV (Ax and A^T x), A^T Ax, solve sparse triangular systems, … –So far pOSKI only does multicore optimizations of SpMV Class projects! –Up to 4.5x faster SpMV (Ax) on Intel Sandy Bridge E On-going work by the Berkeley Benchmarking and Optimization (BeBOP) group

87 Optimizations in pOSKI, so far Fully automatic heuristics for –Sparse matrix-vector multiply (Ax, A T x) Register-level blocking, Thread-level blocking SIMD, software prefetching, software pipelining, loop unrolling NUMA-aware allocations “Plug-in” extensibility –Very advanced users may write their own heuristics, create new data structures/code variants and dynamically add them to the system, using embedded scripting language Lua Other optimizations under development –Cache-level blocking, Reordering (RCM, TSP), variable block structure, index compressing, Symmetric storage, etc.

88 How the pOSKI Tunes (Overview) [Flow diagram] Library install-time (offline): 1. Build for target architecture; 2. Benchmark generated code variants on a sample dense matrix, recording benchmark data & selected code variants for each (r,c) register block size, (d) prefetching distance, and (imp) SIMD implementation. Application run-time: 1. Partition workload, using the user's matrix, user's hints, and history from program monitoring; 2. Evaluate models (empirical & heuristic search); 3. Select data structure & code for each submatrix/thread. To user: matrix handle for kernel calls

89 How the pOSKI Tunes (Overview) At library build/install-time –Generate code variants Code generator (Python) generates code variants for various implementations –Collect benchmark data Measures and records speed of possible sparse data structure and code variants on target architecture –Select best code variants & benchmark data prefetching distance, SIMD implementation –Installation process uses standard, portable GNU AutoTools At run-time –Library “tunes” using heuristic models Models analyze user’s matrix & benchmark data to choose optimized data structure and code User may re-collect benchmark data with user’s sparse matrix (under development) –Non-trivial tuning cost: up to ~40 mat-vecs Library limits the time it spends tuning based on estimated workload –provided by user or inferred by library User may reduce cost by saving tuning results for application on future runs with same or similar matrix (under development)

90 How to Call pOSKI: Basic Usage May gradually migrate existing apps –Step 1: "Wrap" existing data structures –Step 2: Make BLAS-like kernel calls int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */ double* x = …, *y = …; /* Let x and y be two dense vectors */ /* Compute y = β·y + α·A·x, 500 times */ for( i = 0; i < 500; i++ ) my_matmult( ptr, ind, val, α, x, β, y );

91 How to Call pOSKI: Basic Usage May gradually migrate existing apps –Step 1: "Wrap" existing data structures –Step 2: Make BLAS-like kernel calls int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */ double* x = …, *y = …; /* Let x and y be two dense vectors */ /* Step 1: Create a default pOSKI thread object */ poski_threadarg_t *poski_thread = poski_InitThread(); /* Step 2: Create pOSKI wrappers around this data */ poski_mat_t A_tunable = poski_CreateMatCSR(ptr, ind, val, nrows, ncols, nnz, SHARE_INPUTMAT, poski_thread, NULL, …); poski_vec_t x_view = poski_CreateVecView(x, ncols, UNIT_STRIDE, NULL); poski_vec_t y_view = poski_CreateVecView(y, nrows, UNIT_STRIDE, NULL); /* Compute y = β·y + α·A·x, 500 times */ for( i = 0; i < 500; i++ ) my_matmult( ptr, ind, val, α, x, β, y );

92 How to Call pOSKI: Basic Usage May gradually migrate existing apps –Step 1: "Wrap" existing data structures –Step 2: Make BLAS-like kernel calls int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */ double* x = …, *y = …; /* Let x and y be two dense vectors */ /* Step 1: Create a default pOSKI thread object */ poski_threadarg_t *poski_thread = poski_InitThread(); /* Step 2: Create pOSKI wrappers around this data */ poski_mat_t A_tunable = poski_CreateMatCSR(ptr, ind, val, nrows, ncols, nnz, SHARE_INPUTMAT, poski_thread, NULL, …); poski_vec_t x_view = poski_CreateVecView(x, ncols, UNIT_STRIDE, NULL); poski_vec_t y_view = poski_CreateVecView(y, nrows, UNIT_STRIDE, NULL); /* Step 3: Compute y = β·y + α·A·x, 500 times */ for( i = 0; i < 500; i++ ) poski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);

93 How to Call pOSKI: Tune with Explicit Hints User calls "tune" routine (optional) –May provide explicit tuning hints poski_mat_t A_tunable = poski_CreateMatCSR( … ); /* … */ /* Tell pOSKI we will call SpMV 500 times (workload hint) */ poski_TuneHint_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view, 500); /* Tell pOSKI we think the matrix has 8x8 blocks (structural hint) */ poski_TuneHint_Structure(A_tunable, HINT_SINGLE_BLOCKSIZE, 8, 8); /* Ask pOSKI to tune */ poski_TuneMat(A_tunable); for( i = 0; i < 500; i++ ) poski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);

94 How to Call pOSKI: Implicit Tuning Ask library to infer workload (optional) –Library profiles all kernel calls –May periodically re-tune poski_mat_t A_tunable = poski_CreateMatCSR( … ); /* … */ for( i = 0; i < 500; i++ ) { poski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view); poski_TuneMat(A_tunable); /* Ask pOSKI to tune */ }

95 How to Call pOSKI: Modify a thread object Ask library to infer thread hints (optional) –Number of threads –Threading model (PthreadPool, Pthread, OpenMP) Default: PthreadPool, #threads=#available cores on system poski_threadarg_t *poski_thread = poski_InitThread(); /* Ask pOSKI to use 8 threads with OpenMP */ poski_ThreadHints(poski_thread, NULL, OPENMP, 8); poski_mat_t A_tunable = poski_CreateMatCSR( …, poski_thread, … ); poski_MatMult( … );

96 How to Call pOSKI: Modify a partition object Ask library to infer partition hints (optional) –Number of partitions #partition = k×#threads –Partitioning model (OneD, SemiOneD, TwoD) Default: OneD, #partitions = #threads Matrix: /* Ask pOSKI to partition 16 sub-matrices using SemiOneD */ poski_partitionarg_t *pmat = poski_PartitionMatHints(SemiOneD, 16); poski_mat_t A_tunable = poski_CreateMatCSR( …, pmat, … ); Vector: /* Ask pOSKI to partition a vector for SpMV input vector based on A_tunable */ poski_partitionVec_t *pvec = poski_PartitionVecHints(A_tunable, KERNEL_MatMult, OP_NORMAL, INPUTVEC); poski_vec_t x_view = poski_CreateVec( …, pvec);

97 Performance on Intel Sandy Bridge E 4.8x 3.2x 4.5x 2.9x 4.1x 4.5x 4.7x Jaketown: i7-3960X @ 3.3 GHz #Cores: 6 (2 threads per core), L3: 15MB pOSKI SpMV (Ax) with double-precision floating point MKL Sparse BLAS Level 2: mkl_dcsrmv()

98 Is tuning SpMV all we can do? Iterative methods all depend on it But speedups are limited –Just 2 flops per nonzero –Communication costs dominate Can we beat this bottleneck? Need to look at next level in stack: –What do algorithms that use SpMV do? –Can we reorganize them to avoid communication? Only way significant speedups will be possible

99 Tuning Higher Level Algorithms than SpMV We almost always do many SpMVs, not just one –“Krylov Subspace Methods” (KSMs) for Ax=b, Ax = λx Conjugate Gradients, GMRES, Lanczos, … –Do a sequence of k SpMVs to get vectors [x 1, …, x k ] –Find best solution x as linear combination of [x 1, …, x k ] Main cost is k SpMVs Since communication usually dominates, can we do better? Goal: make communication cost independent of k –Parallel case: O(log P) messages, not O(k log P) - optimal same bandwidth as before –Sequential case: O(1) messages and bandwidth, not O(k) - optimal Achievable when matrix partitionable with low surface-to-volume ratio
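For contrast with the matrix powers kernel introduced next, this is the baseline it replaces: k back-to-back SpMVs, each of which re-reads A and (in a parallel setting) exchanges ghost values. A minimal sketch reusing the CSR kernel from earlier:

/* Baseline: V[j] = A * V[j-1] for j = 1..k, i.e. [Ax, A^2x, ..., A^kx]
 * computed as k separate SpMVs.  V holds k+1 vectors of length n,
 * with V[0] = x on entry. */
void spmv_csr(int m, const int *ptr, const int *ind, const double *val,
              const double *x, double *y);  /* accumulating CSR kernel */

void matrix_powers_naive(int n, const int *ptr, const int *ind,
                         const double *val, int k, double **V)
{
    for (int j = 1; j <= k; j++) {
        for (int i = 0; i < n; i++)
            V[j][i] = 0.0;
        /* each of these k SpMVs sweeps over A and, in parallel, triggers
         * its own round of neighbor messages -- the cost CA kernels avoid */
        spmv_csr(n, ptr, ind, val, V[j - 1], V[j]);
    }
}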

100 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Example: A tridiagonal, n=32, k=3 Works for any “well-partitioned” A

101 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Example: A tridiagonal, n=32, k=3

102 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Example: A tridiagonal, n=32, k=3

103 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Example: A tridiagonal, n=32, k=3

104 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Example: A tridiagonal, n=32, k=3

105 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Example: A tridiagonal, n=32, k=3

106 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Sequential Algorithm Example: A tridiagonal, n=32, k=3 Step 1

107 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Sequential Algorithm Example: A tridiagonal, n=32, k=3 Step 1Step 2

108 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Sequential Algorithm Example: A tridiagonal, n=32, k=3 Step 1Step 2Step 3

109 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Sequential Algorithm Example: A tridiagonal, n=32, k=3 Step 1Step 2Step 3Step 4

110 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Parallel Algorithm Example: A tridiagonal, n=32, k=3 Proc 1Proc 2Proc 3Proc 4

111 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Parallel Algorithm Example: A tridiagonal, n=32, k=3 Each processor communicates once with neighbors Proc 1Proc 2Proc 3Proc 4

112 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Parallel Algorithm Example: A tridiagonal, n=32, k=3 Proc 1

113 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Parallel Algorithm Example: A tridiagonal, n=32, k=3 Proc 2

114 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Parallel Algorithm Example: A tridiagonal, n=32, k=3 Proc 3

115 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Parallel Algorithm Example: A tridiagonal, n=32, k=3 Proc 4

116 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Parallel Algorithm Example: A tridiagonal, n=32, k=3 Each processor works on (overlapping) trapezoid Proc 1Proc 2Proc 3Proc 4

117 Same idea works for general sparse matrices Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Simple block-row partitioning  (hyper)graph partitioning Top-to-bottom processing   Traveling Salesman Problem

118 1 2 3 4 … … 32 x A·x A 2 ·x A 3 ·x Communication Avoiding Kernels: The Matrix Powers Kernel : [Ax, A 2 x, …, A k x] Replace k iterations of y = A  x with [Ax, A 2 x, …, A k x] Parallel Algorithm Example: A tridiagonal, n=32, k=3 Entries in overlapping regions (triangles) computed redundantly Proc 1Proc 2Proc 3Proc 4

119 Locally Dependent Entries for [x, Ax, …, A^8 x], A tridiagonal, 2 processors Can be computed without communication k=8 fold reuse of A [Figure: rows x, Ax, A^2x, …, A^8x for Proc 1 and Proc 2]

120 Remotely Dependent Entries for [x, Ax, …, A^8 x], A tridiagonal, 2 processors One message to get data needed to compute remotely dependent entries, not k=8 Minimizes number of messages = latency cost Price: redundant work → "surface/volume ratio" [Figure: rows x, Ax, A^2x, …, A^8x for Proc 1 and Proc 2]

121 Fewer Remotely Dependent Entries for [x, Ax, …, A^8 x], A tridiagonal, 2 processors Reduce redundant work by half [Figure: rows x, Ax, A^2x, …, A^8x for Proc 1 and Proc 2]

122 Remotely Dependent Entries for [x,Ax, A 2 x,A 3 x], 2D Laplacian

123 Remotely Dependent Entries for [x,Ax,A 2 x,A 3 x], A irregular, multiple processors

124 Speedups on Intel Clovertown (8 core)

125 Performance Results Measured Multicore (Clovertown) speedups up to 6.4x Measured/Modeled sequential OOC speedup up to 3x Modeled parallel Petascale speedup up to 6.9x Modeled parallel Grid speedup up to 22x Sequential speedup due to bandwidth, works for many problem sizes Parallel speedup due to latency, works for smaller problems on many processors Multicore results used both techniques

126 Avoiding Communication in Iterative Linear Algebra k-steps of typical iterative solver for sparse Ax=b or Ax=λx –Does k SpMVs with starting vector –Finds “best” solution among all linear combinations of these k+1 vectors –Many such “Krylov Subspace Methods” Conjugate Gradients, GMRES, Lanczos, Arnoldi, … Goal: minimize communication in Krylov Subspace Methods –Assume matrix “well-partitioned,” with modest surface-to-volume ratio –Parallel implementation Conventional: O(k log p) messages, because k calls to SpMV New: O(log p) messages - optimal –Serial implementation Conventional: O(k) moves of data from slow to fast memory New: O(1) moves of data – optimal Lots of speed up possible (modeled and measured) –Price: some redundant computation Much prior work –See Mark Hoemmen’s PhD thesis, other papers at bebop.cs.berkeley.edu

127 Minimizing Communication of GMRES to solve Ax=b GMRES: find x in span{b,Ab,…,A k b} minimizing || Ax-b || 2 Cost of k steps of standard GMRES vs new GMRES Standard GMRES for i=1 to k w = A · v(i-1) MGS(w, v(0),…,v(i-1)) update v(i), H endfor solve LSQ problem with H Sequential: #words_moved = O(k·nnz) from SpMV + O(k 2 ·n) from MGS Parallel: #messages = O(k) from SpMV + O(k 2 · log p) from MGS Communication-avoiding GMRES W = [ v, Av, A 2 v, …, A k v ] [Q,R] = TSQR(W) … “Tall Skinny QR” Build H from R, solve LSQ problem Sequential: #words_moved = O(nnz) from SpMV + O(k·n) from TSQR Parallel: #messages = O(1) from computing W + O(log p) from TSQR Oops – W from power method, precision lost!

128 “Monomial” basis [Ax,…,A k x] fails to converge A different polynomial basis does converge: [p 1 (A)x,…,p k (A)x]

129 Speed ups of GMRES on 8-core Intel Clovertown Requires co-tuning kernels [MHDY09]

130 130 CA-BiCGStab

131 “New Algorithm Improves Performance and Accuracy on Extreme-Scale Computing Systems. On modern computer architectures, communication between processors takes longer than the performance of a floating point arithmetic operation by a given processor. ASCR researchers have developed a new method, derived from commonly used linear algebra methods, to minimize communications between processors and the memory hierarchy, by reformulating the communication patterns specified within the algorithm. This method has been implemented in the TRILINOS framework, a highly-regarded suite of software, which provides functionality for researchers around the world to solve large scale, complex multi-physics problems.” FY 2010 Congressional Budget, Volume 4, FY2010 Accomplishments, Advanced Scientific Computing Research (ASCR), pages 65-67. President Obama cites Communication-Avoiding Algorithms in the FY 2012 Department of Energy Budget Request to Congress: CA-GMRES (Hoemmen, Mohiyuddin, Yelick, JD) “Tall-Skinny” QR (Grigori, Hoemmen, Langou, JD)

132 Tuning space for Krylov Methods Classifications of sparse operators for avoiding communication (nonzero entries × indices): –Explicit entries (O(nnz)), explicit indices (O(nnz)): CSR and variations –Explicit entries, implicit indices (o(nnz)): Vision, climate, AMR,… –Implicit entries (o(nnz)), explicit indices: Graph Laplacian –Implicit entries, implicit indices: Stencils Explicit indices or nonzero entries cause most communication, along with vectors Ex: With stencils (all implicit) all communication is for vectors Many different algorithms (GMRES, BiCGStab, CG, Lanczos,…), polynomials, preconditioning Operations [x, Ax, A^2x,…, A^kx] or [x, p_1(A)x, p_2(A)x, …, p_k(A)x] Number of columns in x [x, Ax, A^2x,…, A^kx] and [y, A^Ty, (A^T)^2y,…, (A^T)^ky], or [y, A^TAy, (A^TA)^2y,…, (A^TA)^ky], return all vectors or just last one Cotuning and/or interleaving W = [x, Ax, A^2x,…, A^kx] and {TSQR(W) or W^TW or …} Ditto, but throw away W

133 Possible Class Projects Come to BEBOP meetings (W 12 – 1:30, 380 Soda) Experiment with SpMV on different architectures –Which optimizations are most effective? Try to speed up particular matrices of interest –Data mining, “bottom solver” from AMR Explore tuning space of [x,Ax,…,A k x] kernel –Different matrix representations (last slide) –New Krylov subspace methods, preconditioning Experiment with new frameworks (SPF, Halide) More details available

134 Extra Slides

135 Extensions Other Krylov methods –Arnoldi, CG, Lanczos, … Preconditioning –Solve MAx=Mb where preconditioning matrix M chosen to make system “easier” M approximates A -1 somehow, but cheaply, to accelerate convergence –Cheap as long as contributions from “distant” parts of the system can be compressed Sparsity Low rank –No implementations yet (class projects!)

136 Design Space for [x,Ax,…,A k x] (1/3) Mathematical Operation –How many vectors to keep All: Krylov Subspace Methods Keep last vector A k x only (Jacobi, Gauss Seidel) –Improving conditioning of basis W = [x, p 1 (A)x, p 2 (A)x,…,p k (A)x] p i (A) = degree i polynomial chosen to reduce cond(W) –Preconditioning (Ay=b  MAy=Mb) [x,Ax,MAx,AMAx,MAMAx,…,(MA) k x]

137 Design Space for [x,Ax,…,A^kx] (2/3) Representation of sparse A –Zero pattern may be explicit or implicit –Nonzero entries may be explicit or implicit Implicit → save memory, communication –Explicit nonzeros, explicit pattern: general sparse matrix –Explicit nonzeros, implicit pattern: image segmentation –Implicit nonzeros, explicit pattern: Laplacian(graph) –Implicit nonzeros, implicit pattern: Multigrid (AMR), "stencil matrix", e.g. tridiag(-1,2,-1) Representation of dense preconditioners M –Low rank off-diagonal blocks (semiseparable)

138 Design Space for [x,Ax,…,A k x] (3/3) Parallel implementation –From simple indexing, with redundant flops  surface/volume ratio –To complicated indexing, with fewer redundant flops Sequential implementation –Depends on whether vectors fit in fast memory Reordering rows, columns of A –Important in parallel and sequential cases –Can be reduced to pair of Traveling Salesmen Problems Plus all the optimizations for one SpMV!

139 Summary Communication-Avoiding Linear Algebra (CALA) Lots of related work –Some going back to 1960’s –Reports discuss this comprehensively, not here Our contributions –Several new algorithms, improvements on old ones Preconditioning –Unifying parallel and sequential approaches to avoiding communication –Time for these algorithms has come, because of growing communication costs –Why avoid communication just for linear algebra motifs?

140 Optimized Sparse Kernel Interface - OSKI Provides sparse kernels automatically tuned for user’s matrix & machine –BLAS-style functionality: SpMV, Ax & A T y, TrSV –Hides complexity of run-time tuning –Includes new, faster locality-aware kernels: A T Ax, A k x Faster than standard implementations –Up to 4x faster matvec, 1.8x trisolve, 4x A T A*x For “advanced” users & solver library writers –Available as stand-alone library (OSKI 1.0.1h, 6/07) –Available as PETSc extension (OSKI-PETSc.1d, 3/06) –Bebop.cs.berkeley.edu/oski Under development: pOSKI for multicore

141 How the OSKI Tunes (Overview) [Flow diagram] Library install-time (offline): 1. Build for target arch.; 2. Benchmark — producing benchmark data and generated code variants. Application run-time: 1. Evaluate heuristic models against the user's matrix, workload from program monitoring, and history; 2. Select data structure & code. To user: matrix handle for kernel calls. Extensibility: Advanced users may write & dynamically add "Code variants" and "Heuristic models" to system.

142 How the OSKI Tunes (Overview) At library build/install-time –Pre-generate and compile code variants into dynamic libraries –Collect benchmark data Measures and records speed of possible sparse data structure and code variants on target architecture –Installation process uses standard, portable GNU AutoTools At run-time –Library “tunes” using heuristic models Models analyze user’s matrix & benchmark data to choose optimized data structure and code –Non-trivial tuning cost: up to ~40 mat-vecs Library limits the time it spends tuning based on estimated workload –provided by user or inferred by library User may reduce cost by saving tuning results for application on future runs with same or similar matrix

143 Optimizations in OSKI, so far Fully automatic heuristics for –Sparse matrix-vector multiply Register-level blocking Register-level blocking + symmetry + multiple vectors Cache-level blocking –Sparse triangular solve with register-level blocking and "switch-to-dense" optimization –Sparse A^T A*x with register-level blocking User may select other optimizations manually –Diagonal storage optimizations, reordering, splitting; tiled matrix powers kernel (A^k*x) –All available in dynamic libraries –Accessible via high-level embedded script language "Plug-in" extensibility –Very advanced users may write their own heuristics, create new data structures/code variants and dynamically add them to the system pOSKI under construction –Optimizations for multicore – more later

144 How to Call OSKI: Basic Usage May gradually migrate existing apps –Step 1: "Wrap" existing data structures –Step 2: Make BLAS-like kernel calls int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */ double* x = …, *y = …; /* Let x and y be two dense vectors */ /* Compute y = β·y + α·A·x, 500 times */ for( i = 0; i < 500; i++ ) my_matmult( ptr, ind, val, α, x, β, y );

145 How to Call OSKI: Basic Usage May gradually migrate existing apps –Step 1: "Wrap" existing data structures –Step 2: Make BLAS-like kernel calls int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */ double* x = …, *y = …; /* Let x and y be two dense vectors */ /* Step 1: Create OSKI wrappers around this data */ oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows, num_cols, SHARE_INPUTMAT, …); oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE); oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE); /* Compute y = β·y + α·A·x, 500 times */ for( i = 0; i < 500; i++ ) my_matmult( ptr, ind, val, α, x, β, y );

146 How to Call OSKI: Basic Usage May gradually migrate existing apps –Step 1: "Wrap" existing data structures –Step 2: Make BLAS-like kernel calls int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */ double* x = …, *y = …; /* Let x and y be two dense vectors */ /* Step 1: Create OSKI wrappers around this data */ oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows, num_cols, SHARE_INPUTMAT, …); oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE); oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE); /* Compute y = β·y + α·A·x, 500 times */ for( i = 0; i < 500; i++ ) oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view); /* Step 2 */

147 How to Call OSKI: Tune with Explicit Hints User calls "tune" routine –May provide explicit tuning hints (OPTIONAL) oski_matrix_t A_tunable = oski_CreateMatCSR( … ); /* … */ /* Tell OSKI we will call SpMV 500 times (workload hint) */ oski_SetHintMatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view, 500); /* Tell OSKI we think the matrix has 8x8 blocks (structural hint) */ oski_SetHint(A_tunable, HINT_SINGLE_BLOCKSIZE, 8, 8); oski_TuneMat(A_tunable); /* Ask OSKI to tune */ for( i = 0; i < 500; i++ ) oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);

148 How the User Calls OSKI: Implicit Tuning Ask library to infer workload –Library profiles all kernel calls –May periodically re-tune oski_matrix_t A_tunable = oski_CreateMatCSR( … ); /* … */ for( i = 0; i < 500; i++ ) { oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view); oski_TuneMat(A_tunable); /* Ask OSKI to tune */ }

149 Quick-and-dirty Parallelism: OSKI-PETSc Extend PETSc’s distributed memory SpMV (MATMPIAIJ) p0 p1 p2 p3 PETSc –Each process stores diag (all-local) and off-diag submatrices OSKI-PETSc: –Add OSKI wrappers –Each submatrix tuned independently

150 OSKI-PETSc Proof-of-Concept Results Matrix 1: Accelerator cavity design (R. Lee @ SLAC) –N ~ 1 M, ~40 M non-zeros –2x2 dense block substructure –Symmetric Matrix 2: Linear programming (Italian Railways) –Short-and-fat: 4k x 1M, ~11M non-zeros –Highly unstructured –Big speedup from cache-blocking: no native PETSc format Evaluation machine: Xeon cluster –Peak: 4.8 Gflop/s per node

151 Accelerator Cavity Matrix

152 OSKI-PETSc Performance: Accel. Cavity

153 Linear Programming Matrix …

154 OSKI-PETSc Performance: LP Matrix

155 Tuning SpMV for Multicore: Architectural Review

156 Cost of memory access in Multicore When physically partitioned, cache or memory access is non uniform (latency and bandwidth to memory/cache addresses varies) –NUMA = Non-Uniform Memory Access –NUCA = Non-Uniform Cache Access UCA & UMA architecture: 156 Core DRAM Cache Core Source: Sam Williams

157 Cost of memory access in Multicore When physically partitioned, cache or memory access is non uniform (latency and bandwidth to memory/cache addresses varies) –NUMA = Non-Uniform Memory Access –NUCA = Non-Uniform Cache Access UCA & UMA architecture: 157 Core DRAM Cache Core Source: Sam Williams

158 Cost of Memory Access in Multicore When physically partitioned, cache or memory access is non uniform (latency and bandwidth to memory/cache addresses varies) –NUMA = Non-Uniform Memory Access –NUCA = Non-Uniform Cache Access NUCA & UMA architecture: 158 Core Cache Core Cache Memory Controller Hub DRAM Source: Sam Williams

159 Cost of Memory Access in Multicore When physically partitioned, cache or memory access is non uniform (latency and bandwidth to memory/cache addresses varies) –NUMA = Non-Uniform Memory Access –NUCA = Non-Uniform Cache Access NUCA & NUMA architecture: 159 Core Cache Core Cache DRAM Source: Sam Williams

160 Cost of Memory Access in Multicore Proper cache locality to maximize performance 160 Core Cache Core Cache DRAM Source: Sam Williams

161 Cost of Memory Access in Multicore Proper DRAM locality to maximize performance 161 Core Cache Core Cache DRAM Source: Sam Williams

162 Optimizing Communication Complexity of Sparse Solvers Need to modify high level algorithms to use new kernel Example: GMRES for Ax=b where A= 2D Laplacian –x lives on n-by-n mesh –Partitioned on p ½ -by- p ½ processor grid –A has “5 point stencil” (Laplacian) (Ax)(i,j) = linear_combination(x(i,j), x(i,j±1), x(i±1,j)) –Ex: 18-by-18 mesh on 3-by-3 processor grid

163 Minimizing Communication What is the cost = (#flops, #words, #mess) of s steps of standard GMRES? GMRES, ver.1: for i=1 to s w = A * v(i-1) MGS(w, v(0),…,v(i-1)) update v(i), H endfor solve LSQ problem with H n/p ½ Cost(A * v) = s * (9n 2 /p, 4n / p ½, 4 ) Cost(MGS = Modified Gram-Schmidt) = s 2 /2 * ( 4n 2 /p, log p, log p ) Total cost ~ Cost( A * v ) + Cost (MGS) Can we reduce the latency?

164 Minimizing Communication Cost(GMRES, ver.1) = Cost(A*v) + Cost(MGS) Cost(W) = ( ~ same, ~ same, 8 ) Latency cost independent of s – optimal Cost (MGS) unchanged Can we reduce the latency more? = ( 9sn 2 /p, 4sn / p ½, 4s ) + ( 2s 2 n 2 /p, s 2 log p / 2, s 2 log p / 2 ) How much latency cost from A*v can you avoid? Almost all GMRES, ver. 2: W = [ v, Av, A 2 v, …, A s v ] [Q,R] = MGS(W) Build H from R, solve LSQ problem s = 3

165 Minimizing Communication Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS) = ( 9sn 2 /p, 4sn / p ½, 8 ) + ( 2s 2 n 2 /p, s 2 log p / 2, s 2 log p / 2 ) How much latency cost from MGS can you avoid? Almost all Cost(TSQR) = ( ~ same, ~ same, log p ) Latency cost independent of s - optimal GMRES, ver. 3: W = [ v, Av, A 2 v, …, A s v ] [Q,R] = TSQR(W) … "Tall Skinny QR" (See Lecture 11) Build H from R, solve LSQ problem [Figure: TSQR reduction tree W = [W1; W2; W3; W4] → [R1; R2; R3; R4] → [R12; R34] → R1234]

166 Minimizing Communication Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS) = ( 9sn 2 /p, 4sn / p ½, 8 ) + ( 2s 2 n 2 /p, s 2 log p / 2, s 2 log p / 2 ) How much latency cost from MGS can you avoid? Almost all Cost(TSQR) = ( ~ same, ~ same, log p ) Oops GMRES, ver. 3: W = [ v, Av, A 2 v, …, A s v ] [Q,R] = TSQR(W) … "Tall Skinny QR" (See Lecture 11) Build H from R, solve LSQ problem [Figure: TSQR reduction tree W = [W1; W2; W3; W4] → [R1; R2; R3; R4] → [R12; R34] → R1234]

167 Minimizing Communication Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS) = ( 9sn 2 /p, 4sn / p ½, 8 ) + ( 2s 2 n 2 /p, s 2 log p / 2, s 2 log p / 2 ) How much latency cost from MGS can you avoid? Almost all Cost(TSQR) = ( ~ same, ~ same, log p ) Oops – W from power method, precision lost! GMRES, ver. 3: W = [ v, Av, A 2 v, …, A s v ] [Q,R] = TSQR(W) … "Tall Skinny QR" (See Lecture 11) Build H from R, solve LSQ problem [Figure: TSQR reduction tree W = [W1; W2; W3; W4] → [R1; R2; R3; R4] → [R12; R34] → R1234]

168

169 Minimizing Communication Cost(GMRES, ver. 3) = Cost(W) + Cost(TSQR) = = ( 9sn 2 /p, 4sn / p ½, 8 ) + ( 2s 2 n 2 /p, s 2 log p / 2, log p ) Latency cost independent of s, just log p – optimal Oops – W from power method, so precision lost – What to do? Use a different polynomial basis Not Monomial basis W = [v, Av, A 2 v, …], instead … Newton Basis W N = [v, (A – θ 1 I)v, (A – θ 2 I)(A – θ 1 I)v, …] or Chebyshev Basis W C = [v, T 1 (v), T 2 (v), …]

170

171 Tuning Higher Level Algorithms So far we have tuned a single sparse matrix kernel y = A^T*A*x motivated by higher level algorithm (SVD) What can we do by extending tuning to a higher level? Consider Krylov subspace methods for Ax=b, Ax = λx Conjugate Gradients (CG), GMRES, Lanczos, … Inner loop does y=A*x, dot products, saxpys, scalar ops Inner loop costs at least O(1) messages k iterations cost at least O(k) messages Our goal: show how to do k iterations with O(1) messages Possible payoff – make Krylov subspace methods much faster on machines with slow networks Memory bandwidth improvements too (not discussed) Obstacles: numerical stability, preconditioning, …

172 Krylov Subspace Methods for Solving Ax=b Compute a basis for a subspace V by doing y = A*x k times Find “best” solution in that Krylov subspace V Given starting vector x 1, V spanned by x 2 = A*x 1, x 3 = A*x 2, …, x k = A*x k-1 GMRES finds an orthogonal basis of V by Gram-Schmidt, so it actually does a different set of SpMVs than in last bullet

173 Example: Standard GMRES r = b - A*x 1, β = length(r), v 1 = r / β … length(r) = sqrt( Σ r i 2 ) for m= 1 to k do w = A*v m … at least O(1) messages for i = 1 to m do … Gram-Schmidt h im = dotproduct(v i, w ) … at least O(1) messages, or log(p) w = w – h im * v i end for h m+1,m = length(w) … at least O(1) messages, or log(p) v m+1 = w / h m+1,m end for find y minimizing length( H k * y – β e 1 ) … small, local problem new x = x 1 + V k * y … V k = [v 1, v 2, …, v k ] O(k 2 ), or O(k 2 log p ), messages altogether

174 Example: Computing [Ax,A 2 x,A 3 x,…,A k x] for A tridiagonal Different basis for same Krylov subspace What can Proc 1 compute without communication? x(1:30): (Ax)(1:30): (A 2 x)(1:30): (A 8 x)(1:30): Proc 1 Proc 2...

175 Example: Computing [Ax,A 2 x,A 3 x,…,A k x] for A tridiagonal x(1:30): (Ax)(1:30): (A 2 x)(1:30):... (A 8 x)(1:30): Proc 1 Proc 2 Computing missing entries with 1 communication, redundant work

176 x(1:30): (Ax)(1:30): (A 2 x)(1:30):... (A 8 x)(1:30): Proc 1 Proc 2 Saving half the redundant work Example: Computing [Ax,A 2 x,A 3 x,…,A k x] for A tridiagonal

177 Example: Computing [Ax,A 2 x,A 3 x,…,A k x] for Laplacian A = 5pt Laplacian in 2D, Communicated point for k=3 shown

178 Latency-Avoiding GMRES (1) r = b - A*x 1, β = length(r), v 1 = r / β … O(log p) messages W k+1 = [v 1, A * v 1, A 2 * v 1, …, A k * v 1 ] … O(1) messages [Q, R] = qr(W k+1 ) … QR decomposition, O(log p) messages H k = R(:, 2:k+1) * (R(1:k,1:k)) -1 … small, local problem find y minimizing length( H k * y – β e 1 ) … small, local problem new x = x 1 + Q k * y … local problem O(log p) messages altogether Independent of k

179 Latency-Avoiding GMRES (2) [Q, R] = qr(W k+1 ) … QR decomposition, O(log p) messages Easy, but not so stable way to do it: X(myproc) = W k+1^T(myproc) * W k+1(myproc) … local computation Y = sum_reduction(X(myproc)) … O(log p) messages … Y = W k+1^T * W k+1 R = (cholesky(Y))^T … small, local computation Q(myproc) = W k+1(myproc) * R^-1 … local computation
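A hedged C sketch of this "easy, but not so stable" CholeskyQR, assuming each process stores W as n_loc rows of s = k+1 doubles (row-major); the small Cholesky factorization and triangular solve are written out by hand instead of calling LAPACK:

#include <math.h>
#include <mpi.h>
#include <stdlib.h>

/* Form Y = W^T W with one local product plus one all-reduce, factor the
 * small s x s matrix Y = R^T R, then compute local rows of Q = W R^{-1}.
 * One O(log p) reduction total, independent of s. */
void cholesky_qr(int n_loc, int s, const double *W, double *Q, double *R)
{
    double *Y = calloc((size_t)s * s, sizeof *Y);
    for (int i = 0; i < n_loc; i++)            /* local Gram matrix W^T W */
        for (int a = 0; a < s; a++)
            for (int b = 0; b < s; b++)
                Y[a*s + b] += W[i*s + a] * W[i*s + b];
    MPI_Allreduce(MPI_IN_PLACE, Y, s * s, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);             /* the only communication  */

    /* Cholesky Y = R^T R, storing upper-triangular R (s x s, row-major) */
    for (int a = 0; a < s * s; a++) R[a] = 0.0;
    for (int j = 0; j < s; j++) {
        double d = Y[j*s + j];
        for (int k = 0; k < j; k++) d -= R[k*s + j] * R[k*s + j];
        R[j*s + j] = sqrt(d);
        for (int i = j + 1; i < s; i++) {
            double v = Y[j*s + i];
            for (int k = 0; k < j; k++) v -= R[k*s + j] * R[k*s + i];
            R[j*s + i] = v / R[j*s + j];
        }
    }

    /* Local rows of Q = W R^{-1}: forward-substitute each row against R */
    for (int i = 0; i < n_loc; i++) {
        for (int j = 0; j < s; j++) {
            double v = W[i*s + j];
            for (int k = 0; k < j; k++) v -= Q[i*s + k] * R[k*s + j];
            Q[i*s + j] = v / R[j*s + j];
        }
    }
    free(Y);
}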

180 Numerical example (1) Diagonal matrix with n=1000, A ii from 1 down to 10 -5 Instability as k grows, after many iterations

181 Numerical Example (2) Partial remedy: restarting periodically (every 120 iterations) Other remedies: high precision, different basis than [v, A * v, …, A k * v ]

182 Operation Counts for [Ax, A^2x, A^3x, …, A^kx] on p procs (per-proc cost, Standard vs. Optimized) 1D mesh (tridiagonal): #messages 2k vs. 2; #words sent 2k vs. 2k; #flops 5kn/p vs. 5kn/p + 5k 2; memory (k+4)n/p vs. (k+4)n/p + 8k. 3D mesh (27 pt stencil): #messages 26k vs. 26; #words sent 6kn 2 p -2/3 + 12knp -1/3 + O(k) vs. 6kn 2 p -2/3 + 12k 2 np -1/3 + O(k 3 ); #flops 53kn 3 /p vs. 53kn 3 /p + O(k 2 n 2 p -2/3 ); memory (k+28)n 3 /p + 6n 2 p -2/3 + O(np -1/3 ) vs. (k+28)n 3 /p + 168kn 2 p -2/3 + O(k 2 np -1/3 )

183 Summary and Future Work Dense –LAPACK –ScaLAPACK Communication primitives Sparse –Kernels, Stencils –Higher level algorithms All of the above on new architectures –Vector, SMPs, multicore, Cell, … High level support for tuning –Specification language –Integration into compilers

184 Extra Slides

185 A Sparse Matrix You Encounter Every Day Who am I? I am a Big Repository Of useful And useless Facts alike. Who am I? (Hint: Not your e-mail inbox.)

186 What about the Google Matrix? Google approach – Approx. once a month: rank all pages using the connectivity structure (find the dominant eigenvector of a matrix) – At query-time: return the list of pages ordered by rank. Matrix: A = αG + (1-α)(1/n)uu^T – Markov model: surfer follows a link with probability α, jumps to a random page with probability 1-α – G is the n x n connectivity matrix [n ≈ billions]: g_ij is non-zero if page i links to page j; normalized so each column sums to 1; very sparse, about 7–8 non-zeros per row (power-law dist.) – u is a vector of all 1 values – Steady-state probability x_i of landing on page i is the solution to x = Ax; approximate x by the power method: x = A^k x_0 – In practice, k ≈ 25
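
A small sketch of the power method described above; the rank-1 term is applied implicitly, so each iteration costs one SpMV with the sparse G plus a few vector operations. The function name and default values are illustrative.

import numpy as np

def pagerank_power(G, alpha=0.85, k=25):
    # G: sparse, column-stochastic link matrix (e.g., a scipy.sparse CSR matrix);
    # x = A*x with A = alpha*G + (1-alpha)*(1/n)*u*u^T, u = all-ones vector.
    n = G.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(k):
        x = alpha * (G @ x) + (1.0 - alpha) * x.sum() / n   # rank-1 term is a scalar shift
        x /= x.sum()                                        # guard against round-off drift
    return x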

187 Current Work Public software release Impact on library designs: Sparse BLAS, Trilinos, PETSc, … Integration in large-scale applications –DOE: Accelerator design; plasma physics –Geophysical simulation based on Block Lanczos ( A T A*X ; LBL) Systematic heuristics for data structure selection? Evaluation of emerging architectures –Revisiting vector micros Other sparse kernels –Matrix triple products, A k *x Parallelism Sparse benchmarks (with UTK) [Gahvari & Hoemmen] Automatic tuning of MPI collective ops [Nishtala, et al.]

188 Summary of High-Level Themes “Kernel-centric” optimization –Vs. basic block, trace, path optimization, for instance –Aggressive use of domain-specific knowledge Performance bounds modeling –Evaluating software quality –Architectural characterizations and consequences Empirical search –Hybrid on-line/run-time models Statistical performance models –Exploit information from sampling, measuring

189 Related Work My bibliography: 337 entries so far Sample area 1: Code generation –Generative & generic programming –Sparse compilers –Domain-specific generators Sample area 2: Empirical search-based tuning –Kernel-centric linear algebra, signal processing, sorting, MPI, … –Compiler-centric profiling + FDO, iterative compilation, superoptimizers, self- tuning compilers, continuous program optimization

190 Future Directions (A Bag of Flaky Ideas) Composable code generators and search spaces New application domains –PageRank: multilevel block algorithms for topic-sensitive search? New kernels: cryptokernels –rich mathematical structure germane to performance; lots of hardware New tuning environments –Parallel, Grid, “whole systems” Statistical models of application performance –Statistical learning of concise parametric models from traces for architectural evaluation –Compiler/automatic derivation of parametric models

191 Possible Future Work. Different architectures – New FP instruction sets (SSE2) – SMP / multicore platforms – Vector architectures. Different kernels. Higher-level algorithms – Parallel versions of kernels, with optimized communication – Block algorithms (e.g., Lanczos) – XBLAS (extra precision). Produce benchmarks – Augment the HPCC Benchmark. Make it possible to combine optimizations easily for any kernel. Related tuning activities (LAPACK & ScaLAPACK)

192 Review of Tuning by Illustration (Extra Slides)

193 Splitting for Variable Blocks and Diagonals Decompose A = A 1 + A 2 + … A t –Detect “canonical” structures (sampling) –Split –Tune each A i –Improve performance and save storage New data structures –Unaligned block CSR Relax alignment in rows & columns –Row-segmented diagonals

194 Example: Variable Block Row (Matrix #12) 2.1x over CSR 1.8x over RB

195 Example: Row-Segmented Diagonals 2x over CSR

196 Mixed Diagonal and Block Structure

197 Summary Automated block size selection –Empirical modeling and search –Register blocking for SpMV, triangular solve, A T A*x Not fully automated –Given a matrix, select splittings and transformations Lots of combinatorial problems –TSP reordering to create dense blocks (Pinar ’97; Moon, et al. ’04)

198 Extra Slides

199 A Sparse Matrix You Encounter Every Day Who am I? I am a Big Repository Of useful And useless Facts alike. Who am I? (Hint: Not your e-mail inbox.)

200 Problem Context Sparse kernels abound –Models of buildings, cars, bridges, economies, … –Google PageRank algorithm Historical trends –Sparse matrix-vector multiply (SpMV): 10% of peak –2x faster with “hand-tuning” –Tuning becoming more difficult over time –Promise of automatic tuning: PHiPAC/ATLAS, FFTW, … Challenges to high-performance –Not dense linear algebra! Complex data structures: indirect, irregular memory access Performance depends strongly on run-time inputs

201 Key Questions, Ideas, Conclusions How to tune basic sparse kernels automatically? –Empirical modeling and search Up to 4x speedups for SpMV 1.8x for triangular solve 4x for A T A*x; 2x for A 2 *x 7x for multiple vectors What are the fundamental limits on performance? –Kernel-, machine-, and matrix-specific upper bounds Achieve 75% or more for SpMV, limiting low-level tuning Consequences for architecture? General techniques for empirical search-based tuning? –Statistical models of performance

202 Road Map Sparse matrix-vector multiply (SpMV) in a nutshell Historical trends and the need for search Automatic tuning techniques Upper bounds on performance Statistical models of performance

203 Compressed Sparse Row (CSR) Storage. Matrix-vector multiply kernel: y(i) ← y(i) + A(i,j)*x(j)
for each row i
  for k = ptr[i] to ptr[i+1] do
    y[i] = y[i] + val[k]*x[ind[k]]
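
The same kernel written out as a small runnable (unblocked, untuned) reference implementation, with a tiny example matrix; variable names follow the slide.

import numpy as np

def spmv_csr(ptr, ind, val, x, y):
    # y(i) <- y(i) + A(i,j)*x(j) over the stored non-zeros of each row
    for i in range(len(ptr) - 1):
        for k in range(ptr[i], ptr[i + 1]):
            y[i] += val[k] * x[ind[k]]
    return y

# Example: A = [[1, 2], [0, 3]] in CSR form
ptr = np.array([0, 2, 3]); ind = np.array([0, 1, 1]); val = np.array([1.0, 2.0, 3.0])
print(spmv_csr(ptr, ind, val, x=np.ones(2), y=np.zeros(2)))   # -> [3. 3.]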

204 Road Map Sparse matrix-vector multiply (SpMV) in a nutshell Historical trends and the need for search Automatic tuning techniques Upper bounds on performance Statistical models of performance

205 Historical Trends in SpMV Performance The Data –Uniprocessor SpMV performance since 1987 –“Untuned” and “Tuned” implementations –Cache-based superscalar micros; some vectors –LINPACK

206 SpMV Historical Trends: Mflop/s

207 Example: The Difficulty of Tuning n = 21216 nnz = 1.5 M kernel: SpMV Source: NASA structural analysis problem

208 Still More Surprises More complicated non-zero structure in general

209 Still More Surprises More complicated non-zero structure in general Example: 3x3 blocking –Logical grid of 3x3 cells

210 Historical Trends: Mixed News Observations –Good news: Moore’s law like behavior –Bad news: “Untuned” is 10% peak or less, worsening –Good news: “Tuned” roughly 2x better today, and improving –Bad news: Tuning is complex –(Not really news: SpMV is not LINPACK) Questions –Application: Automatic tuning? –Architect: What machines are good for SpMV?

211 Road Map Sparse matrix-vector multiply (SpMV) in a nutshell Historical trends and the need for search Automatic tuning techniques –SpMV [SC’02; IJHPCA ’04b] –Sparse triangular solve (SpTS) [ICS/POHLL ’02] –A T A*x [ICCS/WoPLA ’03] Upper bounds on performance Statistical models of performance

212 SPARSITY: Framework for Tuning SpMV SPARSITY: Automatic tuning for SpMV [Im & Yelick ’99] –General approach Identify and generate implementation space Search space using empirical models & experiments –Prototype library and heuristic for choosing register block size Also: cache-level blocking, multiple vectors What’s new? –New block size selection heuristic Within 10% of optimal — replaces previous version –Expanded implementation space Variable block splitting, diagonals, combinations –New kernels: sparse triangular solve, A T A*x, A  *x

213 Automatic Register Block Size Selection Selecting the r x c block size –Off-line benchmark: characterize the machine Precompute Mflops(r,c) using dense matrix for each r x c Once per machine/architecture –Run-time “search”: characterize the matrix Sample A to estimate Fill(r,c) for each r x c –Run-time heuristic model Choose r, c to maximize Mflops(r,c) / Fill(r,c) Run-time costs –Up to ~40 SpMVs (empirical worst case)
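
A sketch of the run-time heuristic above: combine the off-line dense benchmark Mflops(r,c) with the sampled estimate Fill(r,c) and pick the block size maximizing their ratio. The dictionary inputs and bounds are illustrative assumptions, not the library's interface.

def choose_register_block(mflops_dense, fill_est, r_max=8, c_max=8):
    # mflops_dense[(r, c)]: off-line dense benchmark, once per machine
    # fill_est[(r, c)]:     run-time estimate from sampling the matrix
    best, best_score = (1, 1), 0.0
    for r in range(1, r_max + 1):
        for c in range(1, c_max + 1):
            score = mflops_dense[(r, c)] / fill_est[(r, c)]
            if score > best_score:
                best, best_score = (r, c), score
    return best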

214 Accuracy of the Tuning Heuristics (1/4) NOTE: “Fair” flops used (ops on explicit zeros not counted as “work”) DGEMV

215 Accuracy of the Tuning Heuristics (2/4) DGEMV

216 Accuracy of the Tuning Heuristics (3/4) DGEMV

217 Accuracy of the Tuning Heuristics (4/4) DGEMV

218 Road Map Sparse matrix-vector multiply (SpMV) in a nutshell Historical trends and the need for search Automatic tuning techniques Upper bounds on performance –SC’02 Statistical models of performance

219 Motivation for Upper Bounds Model Questions –Speedups are good, but what is the speed limit? Independent of instruction scheduling, selection –What machines are “good” for SpMV?

220 Upper Bounds on Performance: Blocked SpMV. P = (flops) / (time) – Flops = 2 * nnz(A). Lower bound on time: two main assumptions – 1. Count memory ops only (streaming) – 2. Count only compulsory and capacity misses: ignore conflicts. Account for line sizes. Account for matrix size and nnz. Charge minimum access “latency” α_i at the L_i cache and α_mem at memory – e.g., from the Saavedra-Barrera and PMaC MAPS benchmarks
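
A rough sketch of evaluating such a lower bound for an r x c blocked-CSR SpMV under the two assumptions above (memory operations only; one miss per cache line, streaming). The byte counts and the two-level hierarchy are deliberate simplifications for illustration, not the exact model.

def spmv_time_lower_bound(nnz, nrows, fill, r, c, line_bytes, alpha_cache, alpha_mem):
    # bytes touched: block values, one column index per block, row pointers, vectors
    blocks = nnz * fill / (r * c)
    bytes_moved = (8.0 * nnz * fill      # double-precision block values
                   + 4.0 * blocks        # block column indices
                   + 4.0 * (nrows / r)   # block row pointers
                   + 8.0 * 2 * nrows)    # source and destination vectors, read once
    loads = bytes_moved / 8.0            # ~ number of load operations
    misses = bytes_moved / line_bytes    # streaming: each line missed once
    return (loads - misses) * alpha_cache + misses * alpha_mem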

221 Example: Bounds on Itanium 2

222

223

224 Fraction of Upper Bound Across Platforms

225 Achieved Performance and Machine Balance. Machine balance [Callahan ’88; McCalpin ’95] – Balance = Peak Flop Rate / Bandwidth (flops / double). Ideal balance for mat-vec: ≤ 2 flops / double – For SpMV, even less. SpMV ~ streaming – 1 / (avg load time to stream 1 array) ~ (bandwidth) – “Sustained” balance = peak flops / model bandwidth

226

227 Where Does the Time Go? Most time assigned to memory Caches “disappear” when line sizes are equal –Strictly increasing line sizes

228 Execution Time Breakdown: Matrix 40

229 Speedups with Increasing Line Size

230 Summary: Performance Upper Bounds What is the best we can do for SpMV? –Limits to low-level tuning of blocked implementations –Refinements? What machines are good for SpMV? –Partial answer: balance characterization Architectural consequences? –Example: Strictly increasing line sizes

231 Road Map Sparse matrix-vector multiply (SpMV) in a nutshell Historical trends and the need for search Automatic tuning techniques Upper bounds on performance Tuning other sparse kernels Statistical models of performance –FDO ’00; IJHPCA ’04a

232 Statistical Models for Automatic Tuning. Idea 1: a statistical criterion for stopping a search – A general search model: generate an implementation, measure its performance, repeat – Stop when the probability of being within ε of optimal falls below a threshold; the distribution can be estimated on-line. Idea 2: statistical performance models – Problem: choose 1 among m implementations at run-time – Sample performance off-line, build a statistical model

233 Example: Select a Matmul Implementation

234 Example: Support Vector Classification

235 Road Map Sparse matrix-vector multiply (SpMV) in a nutshell Historical trends and the need for search Automatic tuning techniques Upper bounds on performance Tuning other sparse kernels Statistical models of performance Summary and Future Work

236 Summary of High-Level Themes “Kernel-centric” optimization –Vs. basic block, trace, path optimization, for instance –Aggressive use of domain-specific knowledge Performance bounds modeling –Evaluating software quality –Architectural characterizations and consequences Empirical search –Hybrid on-line/run-time models Statistical performance models –Exploit information from sampling, measuring

237 Related Work My bibliography: 337 entries so far Sample area 1: Code generation –Generative & generic programming –Sparse compilers –Domain-specific generators Sample area 2: Empirical search-based tuning –Kernel-centric linear algebra, signal processing, sorting, MPI, … –Compiler-centric profiling + FDO, iterative compilation, superoptimizers, self- tuning compilers, continuous program optimization

238 Future Directions (A Bag of Flaky Ideas) Composable code generators and search spaces New application domains –PageRank: multilevel block algorithms for topic-sensitive search? New kernels: cryptokernels –rich mathematical structure germane to performance; lots of hardware New tuning environments –Parallel, Grid, “whole systems” Statistical models of application performance –Statistical learning of concise parametric models from traces for architectural evaluation –Compiler/automatic derivation of parametric models

239 Acknowledgements Super-advisors: Jim and Kathy Undergraduate R.A.s: Attila, Ben, Jen, Jin, Michael, Rajesh, Shoaib, Sriram, Tuyet-Linh See pages xvi—xvii of dissertation.

240 TSP-based Reordering: Before (Pinar ’97; Moon, et al ‘04)

241 TSP-based Reordering: After (Pinar ’97; Moon, et al ‘04) Up to 2x speedups over CSR

242 Example: L2 Misses on Itanium 2 Misses measured using PAPI [Browne ’00]

243 Example: Distribution of Blocked Non-Zeros

244 Register Profile: Itanium 2. [register-blocking profile heat map; performance ranges from 190 to 1190 Mflop/s]

245 Register Profiles: Sun and Intel x86. Ultra 2i – 11%, Ultra 3 – 5%, Pentium III-M – 15%, Pentium III – 21%. [register-profile heat maps; Mflop/s scales roughly 35–72 (Ultra 2i), 50–90 (Ultra 3), 58–122 (Pentium III-M), 42–108 (Pentium III)]

246 Register Profiles: IBM and Intel IA-64. Power3 – 17%, Power4 – 16%, Itanium 2 – 33%, Itanium 1 – 8%. [register-profile heat maps; Mflop/s scales roughly 122–252 (Power3), 459–820 (Power4), 190–1200 (Itanium 2), 107–247 (Itanium 1)]

247 Accurate and Efficient Adaptive Fill Estimation. Idea: sample the matrix – Fraction of matrix to sample: s ∈ [0,1] – Cost ~ O(s * nnz) – Control cost by controlling s. Search at run-time: the constant matters! Control s automatically by computing statistical confidence intervals – Idea: monitor the variance. Cost of tuning – Lower bound: convert the matrix in 5 to 40 unblocked SpMVs – Heuristic: 1 to 11 SpMVs
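
A sketch of fill estimation by sampling block rows; this version uses a fixed sampling fraction s instead of the confidence-interval control described above, and the function signature is illustrative.

import numpy as np

def estimate_fill(ptr, ind, nrows, r, c, s=0.02, seed=0):
    # Fill(r,c) = (entries stored by r x c blocking) / (true non-zeros),
    # estimated from a random sample of block rows; cost ~ O(s * nnz).
    rng = np.random.default_rng(seed)
    n_brows = (nrows + r - 1) // r
    sample = rng.choice(n_brows, size=max(1, int(s * n_brows)), replace=False)
    nnz_seen, blocks_seen = 0, 0
    for bi in sample:
        block_cols = set()
        for i in range(bi * r, min((bi + 1) * r, nrows)):
            for k in range(ptr[i], ptr[i + 1]):
                block_cols.add(ind[k] // c)
                nnz_seen += 1
        blocks_seen += len(block_cols)
    return (blocks_seen * r * c) / max(nnz_seen, 1)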

248 Sparse/Dense Partitioning for SpTS. Partition L into sparse (L_1, L_2) and a dense trailing triangle L_D, i.e. L = [L_1 0; L_2 L_D]. Perform SpTS in three steps: (1) sparse solve L_1*x_1 = b_1; (2) sparse update b_2' = b_2 – L_2*x_1; (3) dense solve L_D*x_2 = b_2'. Sparsity optimizations for (1)–(2); DTRSV for (3). Tuning parameters: block size, size of the dense triangle
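
A sketch of the three-step solve using SciPy routines as stand-ins for the tuned Sparsity kernels and DTRSV; the partition sizes and names are illustrative.

import numpy as np
from scipy.sparse.linalg import spsolve_triangular
from scipy.linalg import solve_triangular

def spts_split(L1, L2, LD, b):
    # L = [[L1, 0], [L2, LD]]: L1 sparse lower triangular (CSR), L2 sparse,
    # LD the dense trailing triangle; b is the dense right-hand side.
    n1 = L1.shape[0]
    x1 = spsolve_triangular(L1, b[:n1], lower=True)    # (1) sparse triangular solve
    b2 = b[n1:] - L2 @ x1                               # (2) sparse matrix-vector update
    x2 = solve_triangular(LD, b2, lower=True)           # (3) dense DTRSV
    return np.concatenate([x1, x2])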

249 SpTS Performance: Power3

250

251 Summary of SpTS and AA T *x Results SpTS — Similar to SpMV –1.8x speedups; limited benefit from low-level tuning AA T x, A T Ax –Cache interleaving only: up to 1.6x speedups –Reg + cache: up to 4x speedups 1.8x speedup over register only –Similar heuristic; same accuracy (~ 10% optimal) –Further from upper bounds: 60—80% Opportunity for better low-level tuning a la PHiPAC/ATLAS Matrix triple products? A k *x ? –Preliminary work

252 Register Blocking: Speedup

253 Register Blocking: Performance

254 Register Blocking: Fraction of Peak

255 Example: Confidence Interval Estimation

256 Costs of Tuning

257 Splitting + UBCSR: Pentium III

258 Splitting + UBCSR: Power4

259 Splitting+UBCSR Storage: Power4

260

261 Example: Variable Block Row (Matrix #13)

262 Dense Tuning is Hard, Too Even dense matrix multiply can be notoriously difficult to tune

263 Dense matrix multiply: surprising performance as register tile size varies.

264

265 Preliminary Results (Matrix Set 2): Itanium 2 Web/IR DenseFEMFEM (var)BioLPEconStat

266 Multiple Vector Performance

267

268 What about the Google Matrix? Google approach – Approx. once a month: rank all pages using the connectivity structure (find the dominant eigenvector of a matrix) – At query-time: return the list of pages ordered by rank. Matrix: A = αG + (1-α)(1/n)uu^T – Markov model: surfer follows a link with probability α, jumps to a random page with probability 1-α – G is the n x n connectivity matrix [n ≈ 3 billion]: g_ij is non-zero if page i links to page j; normalized so each column sums to 1; very sparse, about 7–8 non-zeros per row (power-law dist.) – u is a vector of all 1 values – Steady-state probability x_i of landing on page i is the solution to x = Ax; approximate x by the power method: x = A^k x_0 – In practice, k ≈ 25

269

270 MAPS Benchmark Example: Power4

271 MAPS Benchmark Example: Itanium 2

272 Saavedra-Barrera Example: Ultra 2i

273

274 Summary of Results: Pentium III

275 Summary of Results: Pentium III (3/3)

276 Execution Time Breakdown (PAPI): Matrix 40

277 Preliminary Results (Matrix Set 1): Itanium 2 LPFEMFEM (var)AssortedDense

278 Tuning Sparse Triangular Solve (SpTS) Compute x=L -1 *b where L sparse lower triangular, x & b dense L from sparse LU has rich dense substructure –Dense trailing triangle can account for 20—90% of matrix non-zeros SpTS optimizations –Split into sparse trapezoid and dense trailing triangle –Use tuned dense BLAS (DTRSV) on dense triangle –Use Sparsity register blocking on sparse part Tuning parameters –Size of dense trailing triangle –Register block size

279 Sparse Kernels and Optimizations Kernels –Sparse matrix-vector multiply (SpMV): y=A*x –Sparse triangular solve (SpTS): x=T -1 *b –y=AA T *x, y=A T A*x –Powers (y=A k *x), sparse triple-product (R*A*R T ), … Optimization techniques (implementation space) –Register blocking –Cache blocking –Multiple dense vectors (x) –A has special structure (e.g., symmetric, banded, …) –Hybrid data structures (e.g., splitting, switch-to-dense, …) –Matrix reordering How and when do we search? –Off-line: Benchmark implementations –Run-time: Estimate matrix properties, evaluate performance models based on benchmark data

280 Cache Blocked SpMV on LSI Matrix: Ultra 2i A 10k x 255k 3.7M non-zeros Baseline: 16 Mflop/s Best block size & performance: 16k x 64k 28 Mflop/s

281 Cache Blocking on LSI Matrix: Pentium 4 A 10k x 255k 3.7M non-zeros Baseline: 44 Mflop/s Best block size & performance: 16k x 16k 210 Mflop/s

282 Cache Blocked SpMV on LSI Matrix: Itanium A 10k x 255k 3.7M non-zeros Baseline: 25 Mflop/s Best block size & performance: 16k x 32k 72 Mflop/s

283 Cache Blocked SpMV on LSI Matrix: Itanium 2 A 10k x 255k 3.7M non-zeros Baseline: 170 Mflop/s Best block size & performance: 16k x 65k 275 Mflop/s

284 Inter-Iteration Sparse Tiling (1/3) [Strout, et al., ‘01]. Let A be 6x6 tridiagonal. Consider y = A^2 x: t = Ax, y = At. Nodes: vector elements; edges: matrix elements a_ij. [dependence-graph figure over the x_i, t_i, y_i]

285 Inter-Iteration Sparse Tiling (2/3) [Strout, et al., ‘01]. Let A be 6x6 tridiagonal. Consider y = A^2 x: t = Ax, y = At. Nodes: vector elements; edges: matrix elements a_ij. Orange = everything needed to compute y_1 – reuse a_11, a_12. [dependence-graph figure]

286 Inter-Iteration Sparse Tiling (3/3) [Strout, et al., ‘01]. Let A be 6x6 tridiagonal. Consider y = A^2 x: t = Ax, y = At. Nodes: vector elements; edges: matrix elements a_ij. Orange = everything needed to compute y_1 – reuse a_11, a_12. Grey = y_2, y_3 – reuse a_23, a_33, a_43. [dependence-graph figure]

287 Inter-Iteration Sparse Tiling: Issues. Tile sizes (colored regions) grow with the number of iterations and increasing out-degree – G is likely to have a few nodes with high out-degree (e.g., Yahoo). Mathematical tricks to limit tile size? – Judicious dropping of edges [Ng ’01]

288 Summary and Questions Need to understand matrix structure and machine –BeBOP: suite of techniques to deal with different sparse structures and architectures Google matrix problem –Established techniques within an iteration –Ideas for inter-iteration optimizations –Mathematical structure of problem may help Questions –Structure of G? –What are the computational bottlenecks? –Enabling future computations? E.g., topic-sensitive PageRank  multiple vector version [Haveliwala ’02] –See www.cs.berkeley.edu/~richie/bebop/intel/google for more info, including more complete Itanium 2 results.

289 Exploiting Matrix Structure Symmetry (numerical or structural) –Reuse matrix entries –Can combine with register blocking, multiple vectors, … Matrix splitting –Split the matrix, e.g., into r x c and 1 x 1 –No fill overhead Large matrices with random structure –E.g., Latent Semantic Indexing (LSI) matrices –Technique: cache blocking Store matrix as 2 i x 2 j sparse submatrices Effective when x vector is large Currently, search to find fastest size

290 Symmetric SpMV Performance: Pentium 4

291 SpMV with Split Matrices: Ultra 2i

292 Cache Blocking on Random Matrices: Itanium Speedup on four banded random matrices.

293 Sparse Kernels and Optimizations Kernels –Sparse matrix-vector multiply (SpMV): y=A*x –Sparse triangular solve (SpTS): x=T -1 *b –y=AA T *x, y=A T A*x –Powers (y=A k *x), sparse triple-product (R*A*R T ), … Optimization techniques (implementation space) –Register blocking –Cache blocking –Multiple dense vectors (x) –A has special structure (e.g., symmetric, banded, …) –Hybrid data structures (e.g., splitting, switch-to-dense, …) –Matrix reordering How and when do we search? –Off-line: Benchmark implementations –Run-time: Estimate matrix properties, evaluate performance models based on benchmark data

294 Register Blocked SpMV: Pentium III

295 Register Blocked SpMV: Ultra 2i

296 Register Blocked SpMV: Power3

297 Register Blocked SpMV: Itanium

298 Possible Optimization Techniques. Within an iteration, i.e., computing (G+uu^T)*x once – Cache block G*x: on linear programming matrices and matrices with random structure (e.g., LSI), 1.5–4x speedups; best block size is matrix and machine dependent – Reordering and/or splitting of G to separate dense structure (rows, columns, blocks). Between iterations, e.g., (G+uu^T)^2 x – (G+uu^T)^2 x = G^2 x + (Gu)(u^T x) + u(u^T G x) + u(u^T u)(u^T x): compute Gu, u^T G, u^T u once for all iterations; G^2 x: inter-iteration tiling to read G only once
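
A sketch of the between-iterations idea: precompute Gu, u^T G and u^T u once, then each application of (G+uu^T)^2 needs only SpMVs with G plus dot products and axpys. The inter-iteration tiling that reads G only once for G^2 x is not shown here.

import numpy as np

def precompute_rank1_terms(G, u):
    return G @ u, G.T @ u, float(u @ u)          # Gu, (u^T G)^T, u^T u

def apply_G_plus_uuT_squared(G, u, x, Gu, GTu, uTu):
    # (G + u u^T)^2 x = G^2 x + (Gu)(u^T x) + u (u^T G x) + u (u^T u)(u^T x)
    uTx = float(u @ x)
    uTGx = float(GTu @ x)
    return G @ (G @ x) + Gu * uTx + u * uTGx + u * (uTu * uTx)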

299 Multiple Vector Performance: Itanium

300 Sparse Kernels and Optimizations Kernels –Sparse matrix-vector multiply (SpMV): y=A*x –Sparse triangular solve (SpTS): x=T -1 *b –y=AA T *x, y=A T A*x –Powers (y=A k *x), sparse triple-product (R*A*R T ), … Optimization techniques (implementation space) –Register blocking –Cache blocking –Multiple dense vectors (x) –A has special structure (e.g., symmetric, banded, …) –Hybrid data structures (e.g., splitting, switch-to-dense, …) –Matrix reordering How and when do we search? –Off-line: Benchmark implementations –Run-time: Estimate matrix properties, evaluate performance models based on benchmark data

301 SpTS Performance: Itanium (See POHLL ’02 workshop paper, at ICS ’02.)

302 Sparse Kernels and Optimizations Kernels –Sparse matrix-vector multiply (SpMV): y=A*x –Sparse triangular solve (SpTS): x=T -1 *b –y=AA T *x, y=A T A*x –Powers (y=A k *x), sparse triple-product (R*A*R T ), … Optimization techniques (implementation space) –Register blocking –Cache blocking –Multiple dense vectors (x) –A has special structure (e.g., symmetric, banded, …) –Hybrid data structures (e.g., splitting, switch-to-dense, …) –Matrix reordering How and when do we search? –Off-line: Benchmark implementations –Run-time: Estimate matrix properties, evaluate performance models based on benchmark data

303 Optimizing AA T *x Kernel: y=AA T *x, where A is sparse, x & y dense –Arises in linear programming, computation of SVD –Conventional implementation: compute z=A T *x, y=A*z. Elements of A can be reused: y = Σ_k a_k (a_k^T x), where a_k is the k-th column of A. When the a_k represent blocks of columns, register blocking can be applied.
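
A sketch of the column-wise reuse: with A stored by columns (CSC), each column a_k is read once and used for both the dot product a_k^T x and the update y += (a_k^T x) a_k, instead of forming z = A^T x and then y = A z. Register blocking across blocks of columns is not shown.

import numpy as np

def aat_times_x(A_csc, x):
    # y = A*A^T*x = sum_k a_k * (a_k^T x), one pass over the columns of A
    ptr, ind, val = A_csc.indptr, A_csc.indices, A_csc.data
    y = np.zeros(A_csc.shape[0])
    for k in range(A_csc.shape[1]):
        rows = ind[ptr[k]:ptr[k + 1]]
        vals = val[ptr[k]:ptr[k + 1]]
        t = vals @ x[rows]          # a_k^T x
        y[rows] += t * vals         # y += t * a_k
    return y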

304 Optimized AA T *x Performance: Pentium III

305 Current Directions Applying new optimizations –Other split data structures (variable block, diagonal, …) –Matrix reordering to create block structure –Structural symmetry New kernels (triple product RAR T, powers A k, …) Tuning parameter selection Building an automatically tuned sparse matrix library –Extending the Sparse BLAS –Leverage existing sparse compilers as code generation infrastructure –More thoughts on this topic tomorrow

306 Related Work Automatic performance tuning systems –PHiPAC [Bilmes, et al., ’97], ATLAS [Whaley & Dongarra ’98] –FFTW [Frigo & Johnson ’98], SPIRAL [Pueschel, et al., ’00], UHFFT [Mirkovic and Johnsson ’00] –MPI collective operations [Vadhiyar & Dongarra ’01] Code generation –FLAME [Gunnels & van de Geijn, ’01] –Sparse compilers: [Bik ’99], Bernoulli [Pingali, et al., ’97] –Generic programming: Blitz++ [Veldhuizen ’98], MTL [Siek & Lumsdaine ’98], GMCL [Czarnecki, et al. ’98], … Sparse performance modeling –[Temam & Jalby ’92], [White & Saddayappan ’97], [Navarro, et al., ’96], [Heras, et al., ’99], [Fraguela, et al., ’99], …

307 More Related Work Compiler analysis, models –CROPS [Carter, Ferrante, et al.]; Serial sparse tiling [Strout ’01] –TUNE [Chatterjee, et al.] –Iterative compilation [O’Boyle, et al., ’98] –Broadway compiler [Guyer & Lin, ’99] –[Brewer ’95], ADAPT [Voss ’00] Sparse BLAS interfaces –BLAST Forum (Chapter 3) –NIST Sparse BLAS [Remington & Pozo ’94]; SparseLib++ –SPARSKIT [Saad ’94] –Parallel Sparse BLAS [Fillipone, et al. ’96]

308 Context: Creating High-Performance Libraries Application performance dominated by a few computational kernels Today: Kernels hand-tuned by vendor or user Performance tuning challenges –Performance is a complicated function of kernel, architecture, compiler, and workload –Tedious and time-consuming Successful automated approaches –Dense linear algebra: ATLAS/PHiPAC –Signal processing: FFTW/SPIRAL/UHFFT

309 Cache Blocked SpMV on LSI Matrix: Itanium

310 Sustainable Memory Bandwidth

311 Multiple Vector Performance: Pentium 4

312 Multiple Vector Performance: Itanium

313 Multiple Vector Performance: Pentium 4

314 Optimized AA T *x Performance: Ultra 2i

315 Optimized AA T *x Performance: Pentium 4

316 Tuning Pays Off—PHiPAC

317 Tuning pays off – ATLAS Extends applicability of PHIPAC; Incorporated in Matlab (with rest of LAPACK)

318 Register Tile Sizes (Dense Matrix Multiply) 333 MHz Sun Ultra 2i 2-D slice of 3-D space; implementations color- coded by performance in Mflop/s 16 registers, but 2-by-3 tile size fastest

319 High Precision GEMV (XBLAS)

320 High Precision Algorithms (XBLAS). Double-double (high-precision word represented as a pair of doubles) – Many variations on these algorithms; we currently use Bailey’s. Exploiting extra-wide registers – Suppose s(1), …, s(n) have f-bit fractions and SUM has an F-bit fraction with F > f – Consider the following algorithm for S = Σ_{i=1..n} s(i): sort so that |s(1)| ≥ |s(2)| ≥ … ≥ |s(n)|; SUM = 0; for i = 1 to n, SUM = SUM + s(i); end for; sum = SUM – Theorem (D., Hida): suppose F < 2f (less than double precision). If n ≤ 2^(F-f) + 1, then error ≤ 1.5 ulps. If n = 2^(F-f) + 2, then error ≤ 2^(2f-F) ulps (can be ≥ 1). If n ≥ 2^(F-f) + 3, then the error can be arbitrary (S ≠ 0 but sum = 0) – Examples: s(i) double (f=53), SUM double extended (F=64) – accurate if n ≤ 2^11 + 1 = 2049. Dot product of single precision x(i) and y(i): s(i) = x(i)*y(i) (f=2*24=48), SUM double extended (F=64) – accurate if n ≤ 2^16 + 1 = 65537
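
A small sketch of the error-free "two-sum" building block behind double-double arithmetic, plus a simplified double-double add; this is for illustration only and is not Bailey's full algorithm as used in the XBLAS.

def two_sum(a, b):
    # Knuth's error-free transformation: returns (s, e) with a + b = s + e exactly
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e

def dd_add(x, y):
    # Add two double-double values (hi, lo); a simplified ("sloppy") variant
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    hi, lo = two_sum(s, e)
    return (hi, lo)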

321 More Extra Slides

322 Opteron (1.4 GHz, 2.8 GFlop peak). Beat ATLAS DGEMV’s 365 Mflops. Nonsymmetric peak = 504 Mflops; symmetric peak = 612 Mflops

323 Awards Best Paper, Intern. Conf. Parallel Processing, 2004 –“Performance models for evaluation and automatic performance tuning of symmetric sparse matrix-vector multiply” Best Student Paper, Intern. Conf. Supercomputing, Workshop on Performance Optimization via High-Level Languages and Libraries, 2003 –Best Student Presentation too, to Richard Vuduc –“Automatic performance tuning and analysis of sparse triangular solve” Finalist, Best Student Paper, Supercomputing 2002 –To Richard Vuduc –“Performance Optimization and Bounds for Sparse Matrix-vector Multiply” Best Presentation Prize, MICRO-33: 3 rd ACM Workshop on Feedback-Directed Dynamic Optimization, 2000 –To Richard Vuduc –“Statistical Modeling of Feedback Data in an Automatic Tuning System”

324 Accuracy of the Tuning Heuristics (4/4)

325 Can Match DGEMV Performance DGEMV

326 Fraction of Upper Bound Across Platforms

327 Achieved Performance and Machine Balance. Machine balance [Callahan ’88; McCalpin ’95] – Balance = Peak Flop Rate / Bandwidth (flops / double) – Lower is better (i.e., can hope to get a higher fraction of peak flop rate). Ideal balance for mat-vec: ≤ 2 flops / double – For SpMV, even less

328

329 Where Does the Time Go? Most time assigned to memory Caches “disappear” when line sizes are equal –Strictly increasing line sizes

330 Execution Time Breakdown: Matrix 40

331 Execution Time Breakdown (PAPI): Matrix 40

332 Speedups with Increasing Line Size

333 Summary: Performance Upper Bounds What is the best we can do for SpMV? –Limits to low-level tuning of blocked implementations –Refinements? What machines are good for SpMV? –Partial answer: balance characterization Architectural consequences? –Help to have strictly increasing line sizes

334 Evaluating algorithms and machines for SpMV Metrics –Speedups –Mflop/s (“fair” flops) –Fraction of peak Questions –Speedups are good, but what is “the best?” Independent of instruction scheduling, selection Can SpMV be further improved or not? –What machines are “good” for SpMV?

335 Tuning Dense BLAS —PHiPAC

336 Tuning Dense BLAS– ATLAS Extends applicability of PHIPAC; Incorporated in Matlab (with rest of LAPACK)

337

338 Statistical Models for Automatic Tuning. Idea 1: a statistical criterion for stopping a search – A general search model: generate an implementation, measure its performance, repeat – Stop when the probability of being within ε of optimal falls below a threshold; the distribution can be estimated on-line. Idea 2: statistical performance models – Problem: choose 1 among m implementations at run-time – Sample performance off-line, build a statistical model

339 SpMV Historical Trends: Fraction of Peak

340 Motivation for Automatic Performance Tuning of SpMV Historical trends –Sparse matrix-vector multiply (SpMV): 10% of peak or less Performance depends on machine, kernel, matrix –Matrix known at run-time –Best data structure + implementation can be surprising Our approach: empirical performance modeling and algorithm search

341 Accuracy of the Tuning Heuristics (3/4)

342 DGEMV

343 Sparse CALA Summary Optimizing one SpMV too limiting Take k steps of Krylov subspace method –GMRES, CG, Lanczos, Arnoldi –Assume matrix “well-partitioned,” with modest surface-to-volume ratio –Parallel implementation Conventional: O(k log p) messages CALA: O(log p) messages - optimal –Serial implementation Conventional: O(k) moves of data from slow to fast memory CALA: O(1) moves of data – optimal –Can incorporate some preconditioners Need to be able to “compress” interactions between distant i, j Hierarchical, semiseparable matrices … –Lots of speed up possible (measured and modeled) Price: some redundant computation

344 Potential Impact on Applications: T3P Application: accelerator design [Ko] 80% of time spent in SpMV Relevant optimization techniques –Symmetric storage –Register blocking On Single Processor Itanium 2 –1.68x speedup 532 Mflops, or 15% of 3.6 GFlop peak –4.4x speedup with multiple (8) vectors 1380 Mflops, or 38% of peak

345 Locally Dependent Entries for [x, Ax], A tridiagonal, 2 processors. [figure: rows x, Ax, A^2 x, …, A^8 x; entries that can be computed without communication by Proc 1 and Proc 2 are highlighted]

346 Locally Dependent Entries for [x, Ax, A^2 x], A tridiagonal, 2 processors. [figure: same layout; entries computable without communication]

347 Locally Dependent Entries for [x, Ax, …, A^3 x], A tridiagonal, 2 processors. [figure: same layout; entries computable without communication]

348 Locally Dependent Entries for [x, Ax, …, A^4 x], A tridiagonal, 2 processors. [figure: same layout; entries computable without communication]

349 Sequential [x,Ax,…,A 4 x], with memory hierarchy One read of matrix from slow memory, not k=4 Minimizes words moved = bandwidth cost No redundant work

