Automatic Performance Tuning
Jeremy Johnson
Dept. of Computer Science, Drexel University

Outline
- Scientific Computation Kernels
  – Matrix Multiplication
  – Fast Fourier Transform (FFT)
- Automated Performance Tuning (Proc. IEEE, Vol. 93, No. 2, Feb. 2005)
  – ATLAS
  – FFTW
  – SPIRAL

Matrix Multiplication and the FFT

Basic Linear Algebra Subprograms (BLAS)
- Level 1 – vector-vector: O(n) data, O(n) operations
- Level 2 – matrix-vector: O(n^2) data, O(n^2) operations
- Level 3 – matrix-matrix: O(n^2) data, O(n^3) operations = data reuse = locality!
LAPACK is built on top of the BLAS (Level 3)
– Blocking (for the memory hierarchy) is the single most important optimization for linear algebra algorithms
GEMM – General Matrix Multiplication
– SUBROUTINE DGEMM ( TRANSA, TRANSB, M, N, K, ALPHA, A, LDA, B, LDB, BETA, C, LDC )
– C := alpha*op( A )*op( B ) + beta*C, where op( X ) = X or X^T (transpose)
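For concreteness, here is what the same Level 3 call looks like from C through the standard CBLAS interface; a minimal sketch (the Fortran argument list on the slide maps one-to-one onto these arguments):

    #include <cblas.h>

    /* C := alpha*op(A)*op(B) + beta*C; here op = identity and all
       matrices are row-major: A is MxK, B is KxN, C is MxN. */
    void gemm_example(int M, int N, int K,
                      const double *A, const double *B, double *C) {
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    M, N, K,
                    1.0, A, K,    /* alpha = 1.0, lda = K */
                    B, N,         /* ldb = N */
                    1.0, C, N);   /* beta = 1.0, ldc = N */
    }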

DGEMM (reference Fortran 77 source, excerpt)

    …
    *        Form  C := alpha*A*B + beta*C.
    *
             DO 90, J = 1, N
                IF( BETA.EQ.ZERO )THEN
                   DO 50, I = 1, M
                      C( I, J ) = ZERO
       50          CONTINUE
                ELSE IF( BETA.NE.ONE )THEN
                   DO 60, I = 1, M
                      C( I, J ) = BETA*C( I, J )
       60          CONTINUE
                END IF
                DO 80, L = 1, K
                   IF( B( L, J ).NE.ZERO )THEN
                      TEMP = ALPHA*B( L, J )
                      DO 70, I = 1, M
                         C( I, J ) = C( I, J ) + TEMP*A( I, L )
       70             CONTINUE
                   END IF
       80       CONTINUE
       90    CONTINUE
    …

Matrix Multiplication Performance

Numerical Recipes
Numerical Recipes in C – The Art of Scientific Computing, 2nd Ed.
– William H. Press, Saul A. Teukolsky, William T. Vetterling, Brian P. Flannery; Cambridge University Press
"This book is unique, we think, in offering, for each topic considered, a certain amount of general discussion, a certain amount of analytical mathematics, a certain amount of discussion of algorithmics, and (most important) actual implementations of these ideas in the form of working computer routines."
1. Preliminaries
2. Solution of Linear Algebraic Equations
…
12. Fast Fourier Transform
19. Partial Differential Equations
20. Less-Numerical Algorithms

four1 (FFT routine from Numerical Recipes in C; code listing)

four1 (cont)
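The four1 source itself does not appear in this transcript. As a stand-in, here is a minimal iterative radix-2 decimation-in-time FFT in C with the same two-phase structure as four1 (bit-reversal permutation followed by log2(n) butterfly passes); it uses C99 complex arithmetic rather than four1's packed real-array convention:

    #include <complex.h>
    #include <math.h>

    /* In-place radix-2 DIT FFT; n must be a power of two. */
    void fft(double complex *x, int n) {
        const double PI = 3.14159265358979323846;
        /* Phase 1: permute the input into bit-reversed order. */
        for (int i = 1, j = 0; i < n; i++) {
            int bit = n >> 1;
            for (; j & bit; bit >>= 1) j ^= bit;
            j ^= bit;
            if (i < j) { double complex t = x[i]; x[i] = x[j]; x[j] = t; }
        }
        /* Phase 2: log2(n) passes of butterflies with twiddle factors. */
        for (int len = 2; len <= n; len <<= 1) {
            double complex wlen = cexp(-2.0 * PI * I / len);
            for (int i = 0; i < n; i += len) {
                double complex w = 1.0;
                for (int k = 0; k < len / 2; k++) {
                    double complex u = x[i + k];
                    double complex v = w * x[i + k + len/2];
                    x[i + k]         = u + v;   /* butterfly */
                    x[i + k + len/2] = u - v;
                    w *= wlen;
                }
            }
        }
    }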

FFT Performance

ATLAS Architecture and Search Parameters
- NB – L1 data cache tile size
- NCNB – L1 data cache tile size for the non-copying version
- MU, NU – register tile size
- KU – unroll factor for the k' loop
- LS – latency for computation scheduling
- FMA – 1 if a fused multiply-add is available, 0 otherwise
- FF, IF, NF – scheduling of loads
Yotov et al., "Is Search Really Necessary to Generate High-Performance BLAS?", Proc. IEEE, Vol. 93, No. 2, Feb. 2005

ATLAS Code Generation
Optimization for locality
– Cache tiling, register tiling (a cache-tiling sketch follows)
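A minimal sketch of cache tiling in plain C; loop order and the use of a single tile size NB are illustrative choices, and ATLAS's generated code additionally copies tiles into contiguous buffers, which is omitted here:

    /* C += A*B for N x N row-major matrices, tiled for the L1 cache.
       NB is the cache tile size (an ATLAS search parameter). */
    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    void mmm_tiled(int N, int NB, const double *A, const double *B, double *C) {
        for (int ii = 0; ii < N; ii += NB)
            for (int jj = 0; jj < N; jj += NB)
                for (int kk = 0; kk < N; kk += NB)
                    /* multiply one NB x NB tile of A by one tile of B */
                    for (int i = ii; i < MIN(ii + NB, N); i++)
                        for (int j = jj; j < MIN(jj + NB, N); j++) {
                            double cij = C[i*N + j];   /* scalar replacement */
                            for (int k = kk; k < MIN(kk + NB, N); k++)
                                cij += A[i*N + k] * B[k*N + j];
                            C[i*N + j] = cij;
                        }
    }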

ATLAS Code Generation
Register tiling: the innermost computation updates an MU×NU tile of C from an MU×K panel of A and a K×NU panel of B (within an NB×NB cache block), C(i'',j'') = C(i'',j'') + A(i'',k'')*B(k'',j'')
– Register constraint: MU + NU + MU×NU ≤ NR
– Loop unrolling
– Scalar replacement
– Add/mul interleaving, scheduled as mul 1, mul 2, …, mul LS, add 1, mul LS+1, add 2, …, mul MU×NU, add MU×NU−LS+2, …, add MU×NU
– Loop skewing
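A hedged sketch of what such a register-tiled micro-kernel looks like for MU = NU = 2 (so MU + NU + MU×NU = 8 registers); the packed panel layout is an assumption for illustration, and the mul/add interleaving is left to the compiler here:

    /* A is packed so each step reads MU contiguous values,
       B so each step reads NU contiguous values. */
    void micro_kernel(int K, const double *A, const double *B,
                      double *C, int ldc) {
        double c00 = 0.0, c01 = 0.0, c10 = 0.0, c11 = 0.0;  /* scalar replacement */
        for (int k = 0; k < K; k++) {
            double a0 = A[2*k], a1 = A[2*k + 1];  /* column k of the A panel */
            double b0 = B[2*k], b1 = B[2*k + 1];  /* row k of the B panel */
            c00 += a0 * b0;  c01 += a0 * b1;      /* fully unrolled MU x NU update */
            c10 += a1 * b0;  c11 += a1 * b1;
        }
        C[0]       += c00;  C[1]       += c01;    /* write back the 2 x 2 tile */
        C[ldc]     += c10;  C[ldc + 1] += c11;    /* (row-major C, leading dim ldc) */
    }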

ATLAS Search
- Estimate machine parameters (C1, NR, FMA, LS)
  – Used to bound the search
- Orthogonal line search: fix all parameters except one and search for the optimal value of that parameter
  – Search order: NB; MU, NU; KU; LS; FF, IF, NF; NCNB; cleanup codes
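A minimal sketch of a single orthogonal-line-search step in C; time_mmm is a hypothetical stand-in for "generate, compile, and time the kernel with these parameters", not an actual ATLAS function, and the candidate range is illustrative:

    extern double time_mmm(int NB, int MU, int NU, int KU, int LS);

    /* Vary NB alone; all other parameters stay fixed at their current values. */
    int search_NB(int max_NB, int MU, int NU, int KU, int LS) {
        int best_NB = 16;
        double best_t = time_mmm(best_NB, MU, NU, KU, LS);
        for (int NB = 32; NB <= max_NB; NB += 16) {   /* candidate tile sizes */
            double t = time_mmm(NB, MU, NU, KU, LS);
            if (t < best_t) { best_t = t; best_NB = NB; }
        }
        return best_NB;
    }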

Using FFTW

FFTW Infrastructure
- Codelets: optimized code sequences for small FFTs
- Plan: encodes the divide-and-conquer strategy and stores the "twiddle factors"
- Executor: computes the FFT of the given data using the algorithm described by the plan
- Code sequences are combined following the divide-and-conquer structure of the FFT; dynamic programming is used to find an efficient combination
(diagram: a right-recursive plan tree)
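The plan/executor split shows up directly in the API. A minimal usage sketch with the FFTW3 interface (plan once, then execute as many times as needed):

    #include <fftw3.h>

    int main(void) {
        int n = 1024;
        fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * n);
        fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * n);

        /* Planning: FFTW searches over strategies and may time candidates. */
        fftw_plan p = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_MEASURE);

        /* Fill in[] after planning (FFTW_MEASURE overwrites the arrays). */
        for (int i = 0; i < n; i++) { in[i][0] = 1.0; in[i][1] = 0.0; }

        fftw_execute(p);   /* the executor runs the algorithm the plan encodes */

        fftw_destroy_plan(p);
        fftw_free(in); fftw_free(out);
        return 0;
    }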

SPIRAL System
The user specifies a DSP transform and goes for a coffee (or an espresso for small transforms); a platform-adapted implementation comes back.
- Formula Generator – controls algorithm generation (the Mathematician's level); emits a fast algorithm as an SPL formula
- SPL Compiler – controls implementation options (the Expert Programmer's level); emits C/Fortran/SIMD code
- Search Engine – measures runtime on the given platform and feeds it back to steer both the Formula Generator and the SPL Compiler

DSP Algorithms: Example – 4-point DFT
Cooley/Tukey FFT (size 4):

    DFT_4 = (DFT_2 ⊗ I_2) · T^4_2 · (I_2 ⊗ DFT_2) · L^4_2

where DFT_2 is the 2-point Fourier transform, I_2 the identity, T^4_2 the diagonal matrix of twiddle factors, L^4_2 a stride permutation, and ⊗ the Kronecker product.
- Such algorithms reduce the arithmetic cost from O(n^2) to O(n·log(n))
- The DFT factors into a product of structured sparse matrices
- The mathematical notation exhibits the structure
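To make the factorization concrete, here is a direct C transcription of the four factors, applied right to left (a hand-written sketch for illustration, not SPIRAL output):

    #include <complex.h>
    #include <stdio.h>

    /* DFT_4 = (DFT_2 (x) I_2) T^4_2 (I_2 (x) DFT_2) L^4_2 */
    void dft4(const double complex x[4], double complex y[4]) {
        /* L^4_2: stride permutation (even-indexed, then odd-indexed inputs) */
        double complex t[4] = { x[0], x[2], x[1], x[3] };
        /* I_2 (x) DFT_2: a 2-point DFT on each half */
        double complex u[4] = { t[0] + t[1], t[0] - t[1],
                                t[2] + t[3], t[2] - t[3] };
        /* T^4_2 = diag(1, 1, 1, w), w = exp(-2*pi*i/4) = -i (twiddle factors) */
        u[3] *= -I;
        /* DFT_2 (x) I_2: 2-point DFTs at stride 2 */
        y[0] = u[0] + u[2];  y[1] = u[1] + u[3];
        y[2] = u[0] - u[2];  y[3] = u[1] - u[3];
    }

    int main(void) {
        double complex x[4] = { 0, 1, 0, 0 }, y[4];  /* impulse at index 1 */
        dft4(x, y);                                  /* expect 1, -i, -1, i */
        for (int k = 0; k < 4; k++)
            printf("%.1f%+.1fi\n", creal(y[k]), cimag(y[k]));
        return 0;
    }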

Algorithms = Ruletrees = Formulas
(diagram: example ruletrees with breakdown rules R1, R3, R4, R6 applied at their nodes)

Generated DFT Vector Code: Pentium 4, SSE
(plot: performance in (pseudo) Gflop/s vs. n for DFT of size 2^n, single precision, Pentium 4, 2.53 GHz, Intel C compiler 6.0; hand-tuned vendor assembly code shown for comparison)
Speedups over scalar C code of up to a factor of 3.1.

Best DFT Trees, size 2^10 = 1024
(diagram: the best found ruletrees for scalar C and vectorized SIMD code on Pentium 4 float, Pentium 4 double, Pentium III float, and AthlonXP float)
The best trees are platform- and datatype-dependent.

Crosstiming of best trees on Pentium 4
(plot: slowdown factor w.r.t. the best tree vs. n, for DFT of size 2^n in single precision, when running the best trees found for other platforms)
Conclusion: software adaptation is necessary.