P A R A L L E L C O M P U T I N G L A B O R A T O R Y EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 1 Auto-tuning Performance on Multicore Computers  Samuel Williams  Ph.D. Dissertation Talk  Committee: David Patterson (advisor and chair), Kathy Yelick, Sara McMains

P A R A L L E L C O M P U T I N G L A B O R A T O R Y EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 2 Multicore Processors

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 3 Superscalar Era  Single thread performance scaled at 50% per year  Bandwidth increases much more slowly, but we could add additional bits or channels.  Lack of diversity in architecture = lack of individual tuning  Power wall has capped single thread performance [Chart: log(performance) over time; curves for Processor (~50%/year), Memory Bandwidth, and Memory Latency (~25%, ~12%, ~7%, and 0% per year)] Multicore is the agreed solution

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 4 Superscalar Era  Single thread performance scaled at 50% per year  Bandwidth increases much more slowly, but we could add additional bits or channels.  Lack of diversity in architecture = lack of individual tuning  Power wall has capped single thread performance [Chart: log(performance) over time; Processor ~50%/year, Memory ~25%, ~12%, and 0% per year] But, no agreement on what it should look like

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 5 Multicore Era  Take existing power-limited superscalar processors, and add cores.  Serial code shows no improvement  Parallel performance might improve at 25% per year (the improvement in power efficiency from process technology)  DRAM bandwidth is currently only improving by ~12% per year.  Intel / AMD model. [Chart: log(performance); +0%/year (serial), +25%/year (parallel), ~12%/year (DRAM bandwidth)]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 6 Multicore Era  Radically different approach  Give up superscalar OOO cores in favor of small, efficient, in-order cores  Huge, abrupt, shocking drop in single thread performance  May eliminate the power wall, allowing many cores, and performance to scale at up to 40% per year  Troublingly, the number of DRAM channels may need to double every three years  Niagara, Cell, GPUs [Chart: log(performance); serial, +40%/year (parallel), +12%/year (DRAM bandwidth)]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 7 Multicore Era  Another option would be to reduce per core performance (and power) every year  Graceful degradation in serial performance  Bandwidth is still an issue [Chart: log(performance); -12%/year (serial), parallel scaling, +12%/year (DRAM bandwidth)]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 8 Multicore Era  There are still many other architectural knobs  Superscalar? Dual issue? VLIW ?  Pipeline depth  SIMD width ?  Multithreading (vertical, simultaneous)?  Shared caches vs. Private Caches ?  FMA vs. MUL+ADD vs. MUL or ADD  Clustering of cores  Crossbars vs. Rings vs. Meshes  Currently, no consensus on optimal configuration  As a result, there is a plethora of multicore architectures

P A R A L L E L C O M P U T I N G L A B O R A T O R Y EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 9 Computational Motifs

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 10 Computational Motifs  Evolved from Phil Colella's Seven Dwarfs of Parallel Computing  Numerical methods common throughout scientific computing  Dense and Sparse Linear Algebra  Computations on Structured and Unstructured Grids  Spectral Methods  N-Body Particle Methods  Monte Carlo  Within each dwarf, there are a number of computational kernels  The Berkeley View, and subsequently the Par Lab, expanded these to many other domains in computing (embedded, SPEC, DB, games)  Graph Algorithms  Combinational Logic  Finite State Machines  etc.  rechristened Computational Motifs  Each could be black-boxed into libraries or frameworks by domain experts.  But how do we get good performance given the diversity of architectures?

P A R A L L E L C O M P U T I N G L A B O R A T O R Y EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 11 Auto-tuning

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 12 Auto-tuning (motivation)  Given a huge diversity of processor architectures, code hand-optimized for one architecture will likely deliver poor performance on another. Moreover, code optimized for one input data set may deliver poor performance on another.  We want a single code base that delivers performance portability across the breadth of architectures today and into the future.  Auto-tuners are composed of two principal components:  a code generator based on high-level functionality rather than parsing C  and the auto-tuner proper that searches for the optimal parameters for each optimization.  Auto-tuners don't invent or discover optimizations; they search through the parameter space of a variety of known optimizations.  Proven value in Dense Linear Algebra (ATLAS), Spectral Methods (FFTW, SPIRAL), and Sparse Methods (OSKI)

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 13 Auto-tuning (code generation)  The code generator produces many code variants for some numerical kernel using known optimizations and transformations.  For example,  cache blocking adds several parameterized loop nests.  Prefetching adds parameterized intrinsics to the code  Loop unrolling and reordering explicitly unrolls the code. Each unrolling is a unique code variant.  Kernels can have dozens of different optimizations, some of which can produce hundreds of code variants.  The code generators used in this work are kernel-specific, and were written in Perl.
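To make the code-generation step concrete, the hypothetical C fragment below shows the kind of variant such a generator might emit for a simple streaming kernel: a cache-blocked loop nest unrolled by four, where the block sizes (BX, BY) and the unroll factor are the tunable parameters. It is a minimal sketch under those assumptions, not the Perl-generated code from this work.

```c
/* One auto-generated variant of a blocked, unrolled kernel (hypothetical example).
 * The tuning parameters BX, BY, and the unroll factor (here 4) differ from
 * variant to variant; the auto-tuner times each variant and keeps the best. */
#define BX 64            /* cache block size in x (tunable) */
#define BY 8             /* cache block size in y (tunable) */

void copy_scale_variant_bx64_by8_u4(double *a, const double *b, double s, int nx, int ny)
{
    for (int jj = 0; jj < ny; jj += BY)               /* loop over cache blocks */
        for (int ii = 0; ii < nx; ii += BX)
            for (int j = jj; j < jj + BY && j < ny; j++) {
                int iend = (ii + BX < nx) ? ii + BX : nx;
                int i;
                for (i = ii; i + 3 < iend; i += 4) {  /* unrolled by 4 */
                    a[j*nx + i+0] = s * b[j*nx + i+0];
                    a[j*nx + i+1] = s * b[j*nx + i+1];
                    a[j*nx + i+2] = s * b[j*nx + i+2];
                    a[j*nx + i+3] = s * b[j*nx + i+3];
                }
                for (; i < iend; i++)                 /* remainder loop */
                    a[j*nx + i] = s * b[j*nx + i];
            }
}
```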

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 14 Auto-tuning (search)  In this work, we use two search techniques:  Exhaustive - examine every combination of parameters for every optimization (often intractable)  Heuristics - use knowledge of the architecture or algorithm to restrict the search space [Figure: parameter spaces for optimizations A and B, contrasting exhaustive search with heuristically pruned search]
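A minimal sketch of what the exhaustive strategy amounts to in practice: loop over every combination of two hypothetical tuning parameters, time a candidate variant at each point, and keep the best. The parameter names, ranges, and the run_and_time stub are illustrative and not taken from the dissertation's tuners.

```c
#include <stdio.h>
#include <float.h>

/* Stand-in for compiling/launching one code variant and timing it; a real
 * auto-tuner would run the actual kernel here. */
static double run_and_time(int block, int unroll)
{
    return 1.0 / (double)(block + 10 * unroll);      /* placeholder "measurement" */
}

int main(void)
{
    const int blocks[]  = {16, 32, 64, 128, 256};    /* candidate cache-block sizes */
    const int unrolls[] = {1, 2, 4, 8};              /* candidate unrolling factors */
    double best_t = DBL_MAX;
    int best_b = 0, best_u = 0;

    /* Exhaustive search: time every combination of every parameter. */
    for (int b = 0; b < 5; b++)
        for (int u = 0; u < 4; u++) {
            double t = run_and_time(blocks[b], unrolls[u]);
            if (t < best_t) { best_t = t; best_b = blocks[b]; best_u = unrolls[u]; }
        }

    printf("best variant: block=%d unroll=%d (%.4f s)\n", best_b, best_u, best_t);
    return 0;
}
```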

P A R A L L E L C O M P U T I N G L A B O R A T O R Y EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 15 Overview Multicore SMPs The Roofline Model Auto-tuning LBMHD Auto-tuning SpMV Summary Future Work Auto-tuning Performance on Multicore Computers

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 16 Thesis Contributions  Introduced the Roofline Model  Extended auto-tuning to the structured grid motif (specifically LBMHD)  Extended auto-tuning to multicore  Fundamentally different from running auto-tuned serial code on multicore SMPs.  Apply the concept to LBMHD and SpMV.  Analyzed the breadth of multicore architectures in the context of auto-tuned SpMV and LBMHD  Discussed future directions in auto-tuning

P A R A L L E L C O M P U T I N G L A B O R A T O R Y EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 17 Multicore SMPs Chapter 3 Multicore SMPs Overview Multicore SMPs The Roofline Model Auto-tuning LBMHD Auto-tuning SpMV Summary Future Work

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 18 Multicore SMP Systems  Intel Xeon E5345 (Clovertown), AMD Opteron 2356 (Barcelona), Sun UltraSPARC T2+ T5140 (Victoria Falls), IBM QS20 (Cell Blade) [Block diagrams of the four dual-socket SMPs: cores, caches/local stores, on-chip interconnect, memory controllers, and DRAM channels with their bandwidths]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 19 Multicore SMP Systems (Conventional cache-based memory hierarchy)  Intel Xeon E5345 (Clovertown), AMD Opteron 2356 (Barcelona), Sun UltraSPARC T2+ T5140 (Victoria Falls), IBM QS20 (Cell Blade) [Same block diagrams, highlighting the parts of each system with a conventional cache-based memory hierarchy]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 20 Multicore SMP Systems (local store-based memory hierarchy)  Intel Xeon E5345 (Clovertown), AMD Opteron 2356 (Barcelona), Sun UltraSPARC T2+ T5140 (Victoria Falls), IBM QS20 (Cell Blade) [Same block diagrams, highlighting the Cell SPEs' local store-based memory hierarchy]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 21 Multicore SMP Systems (CMT = Chip Multithreading)  Intel Xeon E5345 (Clovertown), AMD Opteron 2356 (Barcelona), Sun UltraSPARC T2+ T5140 (Victoria Falls), IBM QS20 (Cell Blade) [Same block diagrams, highlighting the chip-multithreaded processors]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 22 Multicore SMP Systems (peak double-precision FLOP rates)  Intel Xeon E5345 (Clovertown), AMD Opteron 2356 (Barcelona), Sun UltraSPARC T2+ T5140 (Victoria Falls), IBM QS20 (Cell Blade) [Same block diagrams, annotated with each system's peak double-precision GFLOP/s]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 23 Multicore SMP Systems (DRAM pin bandwidth)  Intel Xeon E5345 (Clovertown), AMD Opteron 2356 (Barcelona), Sun UltraSPARC T2+ T5140 (Victoria Falls), IBM QS20 (Cell Blade) [Same block diagrams, annotated with DRAM pin bandwidth: 21 GB/s read + 10 GB/s write (Clovertown), 21 GB/s (Barcelona), 42 GB/s read + 21 GB/s write (Victoria Falls), 51 GB/s (Cell Blade)]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 24 Multicore SMP Systems (Non-Uniform Memory Access)  Intel Xeon E5345 (Clovertown), AMD Opteron 2356 (Barcelona), Sun UltraSPARC T2+ T5140 (Victoria Falls), IBM QS20 (Cell Blade) [Same block diagrams, highlighting the NUMA systems: the Opteron, Victoria Falls, and Cell Blade have DRAM attached to each socket, while the Xeon's DRAM hangs off a single shared memory controller hub]

P A R A L L E L C O M P U T I N G L A B O R A T O R Y EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 25 Roofline Model Chapter 4 Roofline Model Overview Multicore SMPs The Roofline Model Auto-tuning LBMHD Auto-tuning SpMV Summary Future Work

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 26 Memory Traffic  Total bytes to/from DRAM  Can categorize into:  Compulsory misses  Capacity misses  Conflict misses  Write allocations  …  Oblivious to the lack of sub-cache-line spatial locality

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 27 Arithmetic Intensity  For purposes of this talk, we'll deal with floating-point kernels  Arithmetic Intensity ~ Total FLOPs / Total DRAM Bytes  Includes cache effects  Many interesting problems have constant AI (w.r.t. problem size)  Bad given slowly increasing DRAM bandwidth  Bandwidth and Traffic are key optimizations [Figure: arithmetic intensity spectrum; O(1): SpMV, BLAS1/2, stencils (PDEs), lattice methods; O(log N): FFTs; O(N): dense linear algebra (BLAS3), particle methods]
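As a worked example of the definition above, using the CSR SpMV figure quoted later in the talk and assuming 8-byte values with 4-byte column indices (and ignoring vector and row-pointer traffic), the arithmetic intensity works out to roughly the 0.166 flops/byte cited on the SpMV roofline slide:

```latex
\[
\text{AI} \;=\; \frac{\text{total FLOPs}}{\text{total DRAM bytes}}
\qquad\Longrightarrow\qquad
\text{AI}_{\mathrm{SpMV,\,CSR}} \;\approx\;
\frac{2\ \text{flops per nonzero}}{8\ \text{B value} + 4\ \text{B index}}
\;=\; \frac{2}{12} \;\approx\; 0.166\ \text{flops/byte}
\]
```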

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 28 Basic Idea  Synthesize communication, computation, and locality into a single visually-intuitive performance figure using bound and bottleneck analysis.  Given a kernel's arithmetic intensity (based on DRAM traffic after being filtered by the cache), programmers can inspect the figure, and bound performance.  Moreover, it provides insight as to which optimizations will potentially be beneficial. Attainable Performance(i,j) = min( FLOP/s with Optimizations 1..i , AI × Bandwidth with Optimizations 1..j )
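The bound is simple enough to evaluate directly; the sketch below computes it for two arithmetic intensities from this talk, using placeholder peak-FLOP and bandwidth numbers rather than the measured values for any of the four machines.

```c
#include <stdio.h>

/* Roofline bound: attainable GFLOP/s = min(peak GFLOP/s, AI * attained GB/s).
 * The machine numbers below are placeholders for illustration only. */
static double roofline(double ai, double peak_gflops, double stream_gbs)
{
    double bw_bound = ai * stream_gbs;
    return bw_bound < peak_gflops ? bw_bound : peak_gflops;
}

int main(void)
{
    const double peak_gflops = 70.0;   /* hypothetical peak double-precision rate */
    const double stream_gbs  = 15.0;   /* hypothetical attained Stream bandwidth  */

    double ai_spmv  = 0.166;           /* SpMV (CSR) arithmetic intensity         */
    double ai_lbmhd = 0.70;            /* LBMHD with write-allocate traffic       */

    printf("SpMV bound:  %.1f GFLOP/s\n", roofline(ai_spmv,  peak_gflops, stream_gbs));
    printf("LBMHD bound: %.1f GFLOP/s\n", roofline(ai_lbmhd, peak_gflops, stream_gbs));
    return 0;
}
```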

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 29 Constructing a Roofline Model (computational ceilings)  Plot on log-log scale  Given AI, we can easily bound performance  But architectures are much more complicated  We will bound performance as we eliminate specific forms of in-core parallelism [Roofline: Opteron 2356 (Barcelona); attainable GFLOP/s vs. actual FLOP:byte ratio; ceilings: peak DP, Stream Bandwidth]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 30 Constructing a Roofline Model (computational ceilings)  Opterons have dedicated multipliers and adders.  If the code is dominated by adds, then attainable performance is half of peak.  We call these Ceilings  They act like constraints on performance [Roofline: Opteron 2356 (Barcelona); ceilings: peak DP, mul/add imbalance, Stream Bandwidth]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 31 Constructing a Roofline Model (computational ceilings)  Opterons have 128-bit datapaths.  If instructions aren't SIMDized, attainable performance will be halved [Roofline: Opteron 2356 (Barcelona); ceilings: peak DP, mul/add imbalance, w/out SIMD, Stream Bandwidth]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 32 Constructing a Roofline Model (computational ceilings)  On Opterons, floating-point instructions have a 4-cycle latency.  If we don't express 4-way ILP, performance will drop by as much as 4x [Roofline: Opteron 2356 (Barcelona); ceilings: peak DP, mul/add imbalance, w/out SIMD, w/out ILP, Stream Bandwidth]
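To make the ILP ceiling concrete, the sketch below contrasts a naive reduction (one accumulator, fully exposed to the roughly 4-cycle FP latency mentioned on this slide) with a 4-way unrolled version that keeps four independent accumulators in flight. It is an illustrative pattern, not code from the dissertation.

```c
/* Single accumulator: each add depends on the previous one, so the
 * floating-point latency is fully exposed. */
double sum_noilp(const double *x, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* Four independent accumulators expose 4-way ILP; assumes n is a multiple of 4
 * for brevity (a real variant would add a remainder loop). */
double sum_ilp4(const double *x, int n)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    for (int i = 0; i < n; i += 4) {
        s0 += x[i + 0];
        s1 += x[i + 1];
        s2 += x[i + 2];
        s3 += x[i + 3];
    }
    return (s0 + s1) + (s2 + s3);
}
```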

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 33 Constructing a Roofline Model (communication ceilings)  We can perform a similar exercise taking away parallelism from the memory subsystem [Roofline: Opteron 2356 (Barcelona); ceilings: peak DP, Stream Bandwidth]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 34 Constructing a Roofline Model (communication ceilings)  Explicit software prefetch instructions are required to achieve peak bandwidth [Roofline: Opteron 2356 (Barcelona); ceilings: peak DP, Stream Bandwidth, w/out SW prefetch]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 35 Constructing a Roofline Model (communication ceilings)  Opterons are NUMA  As such, memory traffic must be correctly balanced among the two sockets to achieve good Stream bandwidth.  We could continue this by examining strided or random memory access patterns [Roofline: Opteron 2356 (Barcelona); ceilings: peak DP, Stream Bandwidth, w/out SW prefetch, w/out NUMA]
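One common way to get that balance on a first-touch operating system is to have each thread initialize, and therefore map, the pages it will later work on. The pthreads sketch below illustrates the generic pattern; it is not the dissertation's exact initialization code.

```c
#include <pthread.h>
#include <stdlib.h>

#define NTHREADS 8
static double *data;
static long    n_total;

/* First-touch initialization: the thread that first writes a page causes the OS
 * to place that page in its socket's local DRAM, so later accesses by the same
 * thread stay local and both sockets' memory controllers are exercised. */
static void *first_touch(void *arg)
{
    long tid   = (long)arg;
    long chunk = n_total / NTHREADS;
    long lo    = tid * chunk;
    long hi    = (tid == NTHREADS - 1) ? n_total : lo + chunk;
    for (long i = lo; i < hi; i++)
        data[i] = 0.0;
    return NULL;
}

void numa_aware_alloc(long n)
{
    pthread_t th[NTHREADS];
    n_total = n;
    data = malloc(n * sizeof(double));        /* pages typically not touched yet */
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&th[t], NULL, first_touch, (void *)t);
    for (long t = 0; t < NTHREADS; t++)
        pthread_join(th[t], NULL);
}
```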

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 36 Constructing a Roofline Model (computation + communication)  We may bound performance based on the combination of expressed in-core parallelism and attained bandwidth. [Roofline: Opteron 2356 (Barcelona); all ceilings: peak DP, mul/add imbalance, w/out SIMD, w/out ILP, Stream Bandwidth, w/out SW prefetch, w/out NUMA]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 37 Constructing a Roofline Model (locality walls)  Remember, memory traffic includes more than just compulsory misses.  As such, actual arithmetic intensity may be substantially lower.  Walls are unique to the architecture-kernel combination  AI = FLOPs / Compulsory Misses [Roofline: Opteron 2356 (Barcelona), with only compulsory miss traffic]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 38 Constructing a Roofline Model (locality walls)  Remember, memory traffic includes more than just compulsory misses.  As such, actual arithmetic intensity may be substantially lower.  Walls are unique to the architecture-kernel combination  AI = FLOPs / (Write Allocations + Compulsory Misses) [Roofline: Opteron 2356 (Barcelona), with compulsory miss + write allocation traffic]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 39 Constructing a Roofline Model (locality walls)  Remember, memory traffic includes more than just compulsory misses.  As such, actual arithmetic intensity may be substantially lower.  Walls are unique to the architecture-kernel combination  AI = FLOPs / (Capacity Misses + Write Allocations + Compulsory Misses) [Roofline: Opteron 2356 (Barcelona), with compulsory miss + write allocation + capacity miss traffic]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 40 Constructing a Roofline Model (locality walls)  Remember, memory traffic includes more than just compulsory misses.  As such, actual arithmetic intensity may be substantially lower.  Walls are unique to the architecture-kernel combination  AI = FLOPs / (Conflict Misses + Capacity Misses + Write Allocations + Compulsory Misses) [Roofline: Opteron 2356 (Barcelona), with compulsory miss + write allocation + capacity miss + conflict miss traffic]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 41 Roofline Models for SMPs  Note, the multithreaded Niagara is limited by the instruction mix rather than a lack of expressed in-core parallelism  Clearly some architectures are more dependent on bandwidth optimizations while others on in-core optimizations [Roofline models for the Xeon E5345 (Clovertown), Opteron 2356 (Barcelona), UltraSPARC T2+ T5140 (Victoria Falls), and QS20 Cell Blade (PPEs and SPEs), each with its bandwidth ceilings and in-core ceilings; the Victoria Falls ceilings are fractions of FP instructions in the mix (25%, 12%, 6%)]

P A R A L L E L C O M P U T I N G L A B O R A T O R Y EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 42 Auto-tuning Lattice-Boltzmann Magnetohydrodynamics (LBMHD) Chapter 6 Auto-tuning Lattice-Boltzmann Magnetohydrodynamics (LBMHD) Overview Multicore SMPs The Roofline Model Auto-tuning LBMHD Auto-tuning SpMV Summary Future Work

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 43 Introduction to Lattice Methods  Structured grid code, with a series of time steps  Popular in CFD  Allows for complex boundary conditions  No temporal locality between points in space within one time step  Higher dimensional phase space  Simplified kinetic model that maintains the macroscopic quantities  Distribution functions (e.g. velocities per point in space) are used to reconstruct macroscopic quantities  Significant memory capacity requirements [Figure: 3D lattice with +X, +Y, +Z axes]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 44 LBMHD (general characteristics)  Plasma turbulence simulation  Two distributions:  momentum distribution (27 scalar components)  magnetic distribution (15 vector components)  Three macroscopic quantities:  Density  Momentum (vector)  Magnetic Field (vector)  Must read 73 doubles, and update 79 doubles per point in space  Requires about 1300 floating point operations per point in space  Just over 1.0 FLOPs/byte (ideal) [Figure: lattice stencils for the momentum distribution, the magnetic distribution, and the macroscopic variables]
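The "just over 1.0 FLOPs/byte" figure follows directly from the numbers on this slide, assuming 8-byte doubles and counting only the compulsory traffic of reading 73 and writing 79 values per lattice point:

```latex
\[
\text{AI}_{\mathrm{ideal}}
\;=\; \frac{\approx 1300\ \text{flops}}{(73 + 79)\ \text{doubles} \times 8\ \text{bytes/double}}
\;=\; \frac{1300}{1216}
\;\approx\; 1.07\ \text{flops/byte}
\]
```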

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 45 LBMHD (implementation details)  Data structure choices:  Array of Structures: no spatial locality, strided access  Structure of Arrays: huge number of memory streams per thread, but guarantees spatial locality, unit-stride, and vectorizes well  Parallelization  The Fortran version used MPI to communicate between nodes (a bad match for multicore)  The version in this work uses pthreads for multicore (this thesis is not about innovation in the threading model or programming language)  MPI is not used when auto-tuning  Two problem sizes:  64^3 (~330MB)  128^3 (~2.5GB)
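The data-structure trade-off above can be illustrated with a tiny, hypothetical D2Q9-style example (9 components per point rather than LBMHD's 73/79 streams); in the SoA layout each component becomes a separate unit-stride stream, which is what the next slide depicts.

```c
#define NPOINTS (1 << 20)
#define NCOMP   9                    /* components per lattice point (D2Q9 example) */

/* Array of Structures: the 9 components of one point are contiguous, so sweeping
 * over a single component visits memory with a stride of NCOMP doubles. */
struct point_aos { double f[NCOMP]; };
struct point_aos grid_aos[NPOINTS];

/* Structure of Arrays: each component is its own array, giving NCOMP separate
 * unit-stride streams that prefetch and SIMDize well. */
struct grid_soa { double f[NCOMP][NPOINTS]; };
struct grid_soa grid_soa_data;

double sum_component_aos(int c)      /* strided access */
{
    double s = 0.0;
    for (long i = 0; i < NPOINTS; i++)
        s += grid_aos[i].f[c];
    return s;
}

double sum_component_soa(int c)      /* unit-stride access */
{
    double s = 0.0;
    for (long i = 0; i < NPOINTS; i++)
        s += grid_soa_data.f[c][i];
    return s;
}
```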

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 46 SOA Memory Access Pattern  Consider a simple D2Q9 lattice method using SOA  There are 9 read arrays, and 9 write arrays, but all accesses are unit stride  LBMHD has 73 read and 79 write streams per thread [Figure: read_array[ ][ ] and write_array[ ][ ] streams along the x dimension]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 47 Roofline Models for SMPs  LBMHD has an AI of 0.7 on write-allocate architectures, and 1.0 on those with cache bypass or no write allocate.  MUL / ADD imbalance  Some architectures will be bandwidth-bound, while others compute-bound [Roofline models for the five platforms with the LBMHD arithmetic intensities marked]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 48 LBMHD Performance (reference implementation)  Standard cache-based implementation can be easily parallelized with pthreads.  NUMA is implicitly exploited  Although scalability looks good, is performance ? [Chart: performance vs. concurrency and problem size]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 49 LBMHD Performance (reference implementation)  Standard cache-based implementation can be easily parallelized with pthreads.  NUMA is implicitly exploited  Although scalability looks good, is performance ?

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 50 LBMHD Performance (reference implementation)  Superscalar performance is surprisingly good given the complexity of the memory access pattern.  Cell PPE performance is abysmal [Roofline models for the five platforms with the reference implementation's performance marked]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 51 LBMHD Performance (array padding)  LBMHD touches >150 arrays.  Most caches have limited associativity  Conflict misses are likely  Apply heuristic to pad arrays
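A hedged sketch of the padding idea: when many large arrays would otherwise start at addresses that map to the same cache sets, offsetting each array's start by a small, array-dependent pad spreads them across sets. The pad formula below is purely illustrative, not the heuristic used in this work.

```c
#include <stdlib.h>

#define NARRAYS     152          /* e.g., LBMHD's read + write streams          */
#define NELEM       (1 << 18)    /* elements per array                          */
#define PAD_QUANTUM 8            /* pad in units of 8 doubles (one 64 B line)   */

static double *streams[NARRAYS];

/* Allocate each array with a per-array pad so that array k starts a few cache
 * lines past a common alignment, reducing the chance that all NARRAYS streams
 * collide in the same cache sets.  (A real allocator would also record the
 * unpadded base pointer so it can be passed to free() later.) */
void alloc_padded_streams(void)
{
    for (int k = 0; k < NARRAYS; k++) {
        size_t pad  = (size_t)(k % 16) * PAD_QUANTUM;   /* illustrative heuristic */
        double *base = malloc((NELEM + pad) * sizeof(double));
        streams[k] = base + pad;                        /* hand out padded pointer */
    }
}
```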

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 52 LBMHD Performance (vectorization)  LBMHD touches > 150 arrays.  Most TLBs have << 128 entries.  Vectorization technique creates a vector of points that are being updated.  Loops are interchanged and strip mined  Exhaustively search for the optimal “vector length” that balances page locality with L1 cache misses.
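The loop structure behind that "vector length" parameter looks roughly like the sketch below: the sweep over lattice points is strip-mined into chunks of VLEN points, and the many per-component loops each run over one chunk at a time, so the chunk's working set stays near the L1 while the number of simultaneously touched pages stays bounded. The two-component body is a stand-in for LBMHD's 73 read and 79 write streams, and the code is an interpretation of the technique rather than the generated kernel itself.

```c
#define VLEN 128   /* tunable "vector length": points updated per strip (searched exhaustively) */

void lattice_sweep(double *dst0, double *dst1,
                   const double *src0, const double *src1, long npoints)
{
    double buf0[VLEN], buf1[VLEN];                 /* per-strip scratch ("vector" of points) */

    for (long base = 0; base < npoints; base += VLEN) {
        long len = (npoints - base < VLEN) ? npoints - base : VLEN;

        /* Gather phase: one unit-stride pass per component over this strip. */
        for (long i = 0; i < len; i++) buf0[i] = src0[base + i];
        for (long i = 0; i < len; i++) buf1[i] = src1[base + i];

        /* Compute + scatter phase, again one stream at a time. */
        for (long i = 0; i < len; i++) dst0[base + i] = 0.5 * (buf0[i] + buf1[i]);
        for (long i = 0; i < len; i++) dst1[base + i] = 0.5 * (buf0[i] - buf1[i]);
    }
}
```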

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 53 LBMHD Performance (other optimizations)  Heuristic-based software prefetching  Exhaustive search for low-level optimizations  Loop unrolling/reordering  SIMDization  Cache Bypass increases arithmetic intensity by 50%  Small TLB pages on VF
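Two of those optimizations have direct expressions in x86 intrinsics: _mm_prefetch for software prefetching and _mm_stream_pd for non-temporal (cache bypass) stores, the latter being what removes write-allocate traffic and lifts LBMHD's AI from roughly 0.7 toward 1.0. The fragment below is a generic illustration with a hypothetical prefetch distance, not the tuned code from the dissertation.

```c
#include <emmintrin.h>   /* SSE2: _mm_prefetch, _mm_loadu_pd, _mm_stream_pd */

#define PF_DIST 256      /* prefetch distance in doubles (a tunable; value here is hypothetical) */

/* Copy-and-scale with software prefetch on the read stream and non-temporal
 * stores on the write stream, so written lines are not first read into cache
 * (no write-allocate traffic).  Assumes dst is 16-byte aligned and n is even. */
void scale_stream(double *dst, const double *src, double s, long n)
{
    __m128d vs = _mm_set1_pd(s);
    for (long i = 0; i < n; i += 2) {
        _mm_prefetch((const char *)&src[i + PF_DIST], _MM_HINT_T0);
        __m128d v = _mm_mul_pd(_mm_loadu_pd(&src[i]), vs);
        _mm_stream_pd(&dst[i], v);   /* cache-bypass (non-temporal) store */
    }
    _mm_sfence();                    /* make streaming stores globally visible */
}
```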

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 54 LBMHD Performance (Cell SPE implementation)  We can write a local store implementation and run it on the Cell SPEs.  Ultimately, Cell’s weak DP hampers performance

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 55 LBMHD Performance (speedup for largest problem)  We can write a local store implementation and run it on the Cell SPEs.  Ultimately, Cell's weak DP hampers performance [Chart: speedups over the reference implementation for the largest problem: 1.6x, 4x, 3x, and 130x across the four platforms]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 56 LBMHD Performance (vs. Roofline)  Most architectures reach their roofline-bound performance  Clovertown's snoop filter is ineffective.  Niagara suffers from instruction mix issues  Cell PPEs are latency limited  Cell SPEs are compute-bound [Roofline models for the five platforms with auto-tuned LBMHD performance marked]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 57 LBMHD Performance (Summary)  Reference code is clearly insufficient  Portable C code is insufficient on Barcelona and Cell  Cell gets all its performance from the SPEs  despite only 2x the area, and 2x the peak DP FLOPs

P A R A L L E L C O M P U T I N G L A B O R A T O R Y EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 58 Auto-tuning Sparse Matrix-Vector Multiplication Chapter 8 Auto-tuning Sparse Matrix-Vector Multiplication Overview Multicore SMPs The Roofline Model Auto-tuning LBMHD Auto-tuning SpMV Summary Future Work

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 59 Sparse Matrix-Vector Multiplication  Sparse Matrix  Most entries are 0.0  Performance advantage in only storing/operating on the nonzeros  Requires significant meta data  Evaluate: y = Ax  A is a sparse matrix  x & y are dense vectors  Challenges  Difficult to exploit ILP (bad for superscalar)  Difficult to exploit DLP (bad for SIMD)  Irregular memory access to the source vector  Difficult to load balance [Figure: y = Ax]
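For reference, the compressed sparse row (CSR) kernel that the reference implementation parallelizes by rows is the standard loop below, written here from the textbook definition rather than copied from the dissertation's code.

```c
/* y = A*x with A stored in CSR:
 *   rowptr[r] .. rowptr[r+1]-1 index the nonzeros of row r,
 *   col[] holds their column indices, val[] their values.
 * Two flops per nonzero, which is what gives the ~0.166 flops/byte quoted earlier. */
void spmv_csr(int nrows, const int *rowptr, const int *col, const double *val,
              const double *x, double *y)
{
    for (int r = 0; r < nrows; r++) {
        double sum = 0.0;
        for (int k = rowptr[r]; k < rowptr[r + 1]; k++)
            sum += val[k] * x[col[k]];
        y[r] = sum;
    }
}
```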

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 60 Dataset (Matrices)  Pruned original SPARSITY suite down to 14  none should fit in cache  Subdivided them into 4 categories  Rank ranges from 2K to 1M [Matrix suite: Dense (a 2K x 2K dense matrix stored in sparse format); well-structured matrices, sorted by nonzeros/row (Protein, FEM / Spheres, FEM / Cantilever, Wind Tunnel, FEM / Harbor, QCD, FEM / Ship); a poorly structured hodgepodge (Economics, Epidemiology, FEM / Accelerator, Circuit, webbase); extreme aspect ratio (LP, linear programming)]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 61 Roofline Models for SMPs  Reference SpMV implementation has an AI of 0.166, but can readily exploit FMA.  The best we can hope for is an AI of 0.25 for non-symmetric matrices.  All architectures are memory-bound, but some may need in-core optimizations [Roofline models for the five platforms with the SpMV arithmetic intensities marked]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 62 SpMV Performance (Reference Implementation)  Reference implementation is CSR.  Simple parallelization by rows balancing nonzeros.  No implicit NUMA exploitation  Despite superscalar's use of 8 cores, they see little speedup.  Niagara and PPEs show near linear speedups [Chart: performance across the matrix suite]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 63 SpMV Performance (Reference Implementation)  Reference implementation is CSR.  Simple parallelization by rows balancing nonzeros.  No implicit NUMA exploitation  Despite superscalar’s use of 8 cores, they see little speedup.  Niagara and PPEs show near linear speedups

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 64 SpMV Performance (Reference Implementation)  Roofline for dense matrix in sparse format.  Superscalars achieve bandwidth-limited performance  Niagara comes very close to the bandwidth limit.  Clearly, NUMA and prefetching will be essential [Roofline models for the five platforms with reference SpMV performance marked]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 65 SpMV Performance (NUMA and Software Prefetching)  NUMA-aware allocation is essential on memory-bound NUMA SMPs.  Explicit software prefetching can boost bandwidth and change cache replacement policies  Cell PPEs are likely latency-limited.  used exhaustive search

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 66 SpMV Performance (Matrix Compression)  After maximizing memory bandwidth, the only hope is to minimize memory traffic.  exploit:  register blocking  other formats  smaller indices  Use a traffic minimization heuristic rather than search  Benefit is clearly matrix-dependent.  Register blocking enables efficient software prefetching (one per cache line)
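Register blocking stores small dense r x c blocks so that one column index covers r*c values, cutting index traffic and letting the generated code reuse source-vector entries from registers. The sketch below is a textbook-style 2x2 block CSR (BCSR) kernel; the auto-tuner's generated variants cover many block sizes and formats.

```c
/* y = A*x with A stored in 2x2 BCSR: each stored block is 4 contiguous doubles
 * (row-major), and col[] gives the block-column index.  One index per 4 values
 * reduces metadata traffic; the 2x2 body reuses x0/x1 from registers. */
void spmv_bcsr_2x2(int nblockrows, const int *browptr, const int *col,
                   const double *val, const double *x, double *y)
{
    for (int br = 0; br < nblockrows; br++) {
        double y0 = 0.0, y1 = 0.0;
        for (int k = browptr[br]; k < browptr[br + 1]; k++) {
            const double *b = &val[4 * k];
            double x0 = x[2 * col[k] + 0];
            double x1 = x[2 * col[k] + 1];
            y0 += b[0] * x0 + b[1] * x1;
            y1 += b[2] * x0 + b[3] * x1;
        }
        y[2 * br + 0] = y0;
        y[2 * br + 1] = y1;
    }
}
```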

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 67 SpMV Performance (Cache and TLB Blocking)  Based on limited architectural knowledge, create heuristic to choose a good cache and TLB block size  Hierarchically store the resultant blocked matrix.  Benefit can be significant on the most challenging matrix

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 68 SpMV Performance (Cell SPE implementation)  Cache blocking can be easily transformed into local store blocking.  With a few small tweaks for DMA, we can run a simplified version of the auto-tuner on Cell  BCOO only  2x1 and larger  always blocks

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 69 SpMV Performance (median speedup)  Cache blocking can be easily transformed into local store blocking.  With a few small tweaks for DMA, we can run a simplified version of the auto-tuner on Cell  BCOO only  2x1 and larger  always blocks [Chart: median speedups over the reference implementation: 2.8x, 2.6x, 1.4x, and 15x across the four platforms]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 70 SpMV Performance (vs. Roofline)  Roofline for dense matrix in sparse format.  Compression improves AI  Auto-tuning can allow us to slightly exceed Stream bandwidth (but not pin bandwidth)  Cell PPEs perennially deliver poor performance [Roofline models for the five platforms with auto-tuned SpMV performance marked]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 71 SpMV Performance (summary)  Median SpMV performance  As an aside, unlike LBMHD, SSE was unnecessary to achieve this performance  Cell still requires a non-portable, ISA-specific implementation to achieve good performance.  Novel SpMV implementations may require ISA-specific (SSE) code to achieve better performance.

P A R A L L E L C O M P U T I N G L A B O R A T O R Y EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 72 Summary Overview Multicore SMPs The Roofline Model Auto-tuning LBMHD Auto-tuning SpMV Summary Future Work

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 73 Summary  Introduced the Roofline Model  Apply bound and bottleneck analysis  Performance and requisite optimizations are inferred visually  Extended auto-tuning to multicore  Fundamentally different from running auto-tuned serial code on multicore SMPs.  Apply the concept to LBMHD and SpMV.  Auto-tuning LBMHD and SpMV  Multicore has had a transformative effect on auto-tuning (a move from latency-limited to bandwidth-limited)  Maximizing memory bandwidth and minimizing memory traffic is key.  Compilers are reasonably effective at in-core optimizations, but totally ineffective at cache and memory issues.  A library or framework is a necessity in managing these issues.  Comments on architecture  Ultimately machines are bandwidth-limited without new algorithms  Architectures with caches required significantly more tuning than the local store-based Cell

P A R A L L E L C O M P U T I N G L A B O R A T O R Y EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 74 Future Directions in Auto-tuning Chapter 9 Future Directions in Auto-tuning Overview Multicore SMPs The Roofline Model Auto-tuning LBMHD Auto-tuning SpMV Summary Future Work

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 75 Future Work (Roofline)  Automatic generation of Roofline figures  Kernel oblivious  Select computational metric of interest  Select communication channel of interest  Designate common “optimizations”  Requires a benchmark  Using performance counters to generate runtime Roofline figures  Given a real kernel, we wish to understand the bottlenecks to performance.  Much friendlier visualization of performance counter data

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 76 Future Work (Making search tractable)  Given the explosion in optimizations, exhaustive search is clearly not tractable.  Moreover, heuristics require extensive architectural knowledge.  In our SC08 work, we tried a greedy approach (one optimization at a time)  We could make it iterative, or we could make it look like steepest descent (with some local search) [Figure: parameter spaces for optimizations A and B under greedy search]

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 77 Future Work (Auto-tuning Motifs)  We could certainly auto-tune other individual kernels in any motif, but this requires building a kernel-specific auto-tuner  However, we should strive for motif-wide auto-tuning.  Moreover, we want to decouple data type (e.g. double precision) from the parallelization structure. 1. A motif description or pattern language for each motif.  e.g. a taxonomy of structured grids + a code snippet for the stencil  the auto-tuner parses these, and produces optimized code. 2. A series of DAG rewrite rules for each motif. Rules allow:  Insertion of additional nodes  Duplication of nodes  Reordering

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 78 Rewrite Rules  Consider SpMV  In FP, each node in the DAG is a MAC  The DAG makes locality explicit (e.g. local store blocking)  BCSR adds zeros to the DAG  We can cut edges and reconnect them to enable parallelization.  We can reorder operations  Any other data type/node type conforming to these rules can reuse all our auto-tuning efforts [Figure: dataflow DAG for y = Ax with nodes labeled x, A, and y]

P A R A L L E L C O M P U T I N G L A B O R A T O R Y EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 79 Acknowledgments

EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 80 Acknowledgments  Berkeley ParLab  Thesis Committee: David Patterson, Kathy Yelick, Sara McMains  BeBOP group: Jim Demmel, Kaushik Datta, Shoaib Kamil, Rich Vuduc, Rajesh Nishtala, etc…  Rest of ParLab  Lawrence Berkeley National Laboratory  FTG group: Lenny Oliker, John Shalf, Jonathan Carter, …  Hardware Donations and Remote Access  Sun Microsystems  IBM  AMD  FZ Julich  Intel This research was supported by the ASCR Office in the DOE Office of Science under contract number DE-AC02-05CH11231, by Microsoft and Intel funding through award # , and by matching funding by U.C. Discovery through award #DIG

P A R A L L E L C O M P U T I N G L A B O R A T O R Y EECS Electrical Engineering and Computer Sciences B ERKELEY P AR L AB 81 Questions?