Presentation transcript:

V. Milutinović, G. Rakocevic, S. Stojanović, and Z. Sustran, University of Belgrade
Oskar Mencer, Imperial College, London
Oliver Pell, Maxeler Technologies, London and Palo Alto
Michael Flynn, Stanford University, Palo Alto
1/55

For Big Data algorithms, and for the same hardware price as before, achieving: a) speed-up, 20 to 200 times; b) monthly electricity bills, reduced 20 times; c) size, 20 times smaller. 2/55

Absolutely all results achieved with: a) all hardware produced in Europe, specifically the UK; b) all software generated by programmers of the EU and the Western Balkans (WB). 3/55

ControlFlow (MultiFlow and ManyFlow): Top500 ranks using Linpack (Japanese K, …)
DataFlow: Coarse Grain (HEP) vs. Fine Grain (Maxeler)
4/55

Compiling below the machine-code level brings speedups, as well as lower power, size, and cost. The price to pay: the machine is more difficult to program. Consequently: ideal for WORM (Write Once, Run Many times) applications :) Examples using Maxeler: GeoPhysics (20-40x), Banking (…, with JP Morgan 20%), M&C (New York City), Datamining (Google), … 5/55

[Slides 6-10/55: figures only]

Assumptions:
1. Software includes enough parallelism to keep all cores busy.
2. The only limiting factor is the number of cores.

$t_{GPU} = N \cdot N_{OPS} \cdot C_{GPU} \cdot T_{clk,GPU} / N_{cores,GPU}$
$t_{CPU} = N \cdot N_{OPS} \cdot C_{CPU} \cdot T_{clk,CPU} / N_{cores,CPU}$
$t_{DF} = N_{OPS} \cdot C_{DF} \cdot T_{clk,DF} + (N - 1) \cdot T_{clk,DF} / N_{DF}$
11/55
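To make the three models concrete, here is a minimal sketch that plugs assumed, purely illustrative values into them; every constant below (N, N_OPS, the cycles-per-operation, clock periods, core and pipeline counts) is a placeholder, not measured hardware data, and which paradigm wins depends entirely on those constants.

```java
// Minimal sketch: evaluating the three latency models above with
// assumed, illustrative parameter values (not measured hardware data).
public class LatencyModels {
    public static void main(String[] args) {
        double n = 1e9;      // data items streamed through the computation
        double nOps = 100;   // operations per data item
        // Assumed cycles-per-operation, clock periods, and parallelism:
        double cCpu = 1, tClkCpu = 1 / 3e9,   coresCpu = 16;
        double cGpu = 4, tClkGpu = 1 / 1e9,   coresGpu = 2048;
        double cDf  = 1, tClkDf  = 1 / 150e6, pipesDf  = 48;

        double tCpu = n * nOps * cCpu * tClkCpu / coresCpu;
        double tGpu = n * nOps * cGpu * tClkGpu / coresGpu;
        // Dataflow: fill the deep pipeline once, then one result per tick
        // from each of the parallel pipelines.
        double tDf = nOps * cDf * tClkDf + (n - 1) * tClkDf / pipesDf;

        System.out.printf("tCPU = %.2f s, tGPU = %.2f s, tDF = %.2f s%n",
                          tCpu, tGpu, tDf);
    }
}
```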

DualCore? Which way are the horses going? 12/55

Is it possible to use 2000 chickens instead of two horses? What is better, real or anecdotal? 13/55

2 x 1000 chickens (CUDA and rCUDA) 14/55

How about ants? 15/55

[Figure: marmalade as Big Data, input to results] 16/55

Factor of 20 to 200: MultiCore/ManyCore (Machine Level Code) vs. DataFlow (Gate Transfer Level). 17/55

Factor of 20: MultiCore/ManyCore vs. DataFlow. 18/55

Factor of 20: Data Processing plus Process Control, MultiCore/ManyCore vs. DataFlow. 19/55

MultiCore: Explain what to do, to the driver. Caches, instruction buffers, and predictors needed.
ManyCore: Explain what to do, to many sub-drivers. Reduced caches and instruction buffers needed.
DataFlow: Make a field of processing gates: 1C+2nJava+3Java. No caches, etc. (300 students/year: BGD, BCN, LjU, ICL, …)
20/55
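As an illustration of "making a field of processing gates" in Java, the sketch below follows Maxeler's well-known moving-average tutorial kernel. It is written in the MaxJ style (a Java dialect compiled by MaxCompiler, with operator overloading on stream types); exact package paths and type names may differ between MaxCompiler versions, so treat it as a sketch rather than drop-in code.

```java
import com.maxeler.maxcompiler.v2.kernelcompiler.Kernel;
import com.maxeler.maxcompiler.v2.kernelcompiler.KernelParameters;
import com.maxeler.maxcompiler.v2.kernelcompiler.types.base.DFEVar;

// Three-point moving average over a stream: each line instantiates
// hardware (adders, a divider, stream offsets), not instructions.
class MovingAverageKernel extends Kernel {
    MovingAverageKernel(KernelParameters parameters) {
        super(parameters);
        DFEVar x = io.input("x", dfeFloat(8, 24));
        DFEVar prev = stream.offset(x, -1);   // previous stream element
        DFEVar next = stream.offset(x, 1);    // next stream element
        DFEVar result = (prev + x + next) / 3;
        io.output("y", result, dfeFloat(8, 24));
    }
}
```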

MultiCore: Business as usual.
ManyCore: More difficult.
DataFlow: Much more difficult (debugging both application and configuration code).
21/55

MultiCore/ManyCore: Several minutes.
DataFlow: Several hours for the real hardware.
Good news: only several minutes for the simulator, which supports both the large JPMorgan machine and the smallest “University Support” machine.
22/55

[Figure-only slide] 23/55

MultiCore: Horse stable.
ManyCore: Chicken house.
DataFlow: Ant hole.
24/55

MultiCore: Haystack.
ManyCore: Corn bits.
DataFlow: Crumbs.
25/55

Small Data: Toy Benchmarks (e.g., Linpack) 26/55

Medium Data: benchmarks favoring NVidia over Intel, … 27/55

Big Data 28/55

Revisiting the Top 500 SuperComputer Benchmarks (our paper in Communications of the ACM): revisiting all major Big Data DM algorithms.
Massive static parallelism at low clock frequencies.
Concurrency and communication: concurrency between millions of tiny cores is difficult; “jitter” between cores will harm performance at synchronization points.
Reliability and fault tolerance: …x fewer nodes, failures much less often.
Memory bandwidth and FLOP/byte ratio: optimize data choreography, data movement, and the algorithmic computation.
29/55
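To make the FLOP/byte point concrete, a back-of-the-envelope worked example with assumed numbers (the 3-FLOP kernel and the 38 GB/s bandwidth are illustrative, not from the slide): a streaming kernel that performs 3 floating-point operations per 4-byte input and writes a 4-byte output has arithmetic intensity

\[
I = \frac{\text{FLOPs}}{\text{bytes moved}} = \frac{3}{4 + 4} = 0.375\ \text{FLOP/byte},
\]

so at an assumed 38 GB/s of memory bandwidth it can sustain at most $0.375 \times 38 \approx 14$ GFLOP/s, regardless of peak compute. This is why optimizing data choreography and data movement can matter more than adding cores.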

Maxeler Hardware:
CPUs plus DFEs: Intel Xeon CPU cores and up to 4 DFEs with 192 GB of RAM.
DFEs shared over Infiniband: up to 8 DFEs with 384 GB of RAM and dynamic allocation of DFEs to CPU servers.
Low-latency connectivity: Intel Xeon CPUs and 1-2 DFEs with up to six 10Gbit Ethernet connections.
MaxWorkstation: desktop development system.
MaxCloud: on-demand scalable accelerated compute resource, hosted in London.
30/55

Major Classes of Algorithms, from the Computational Perspective:
1. Coarse-grained, stateful: Business. CPU requires the DFE for minutes or hours.
2. Fine-grained, transactional with shared database: DM. CPU utilizes the DFE for ms to s; many short computations, accessing common database data.
3. Fine-grained, stateless transactional: Science (FF). CPU requires the DFE for ms to s; many short computations.
31/55

Coarse-Grained: Modeling
Long runtime, but:
Memory requirements change dramatically based on modelled frequency.
The number of DFEs allocated to a CPU process can be easily varied to increase available memory.
Streaming compression.
Boundary data exchanged over the chassis MaxRing.
32/55

Fine-Grained, Shared Data: Monitoring
DFE DRAM contains the database to be searched.
CPUs issue transactions find(x, db).
Complex search function: text search against documents; shortest distance to a coordinate (multi-dimensional); Smith-Waterman sequence alignment for genomes.
Any CPU runs on any DFE that has been loaded with the database: MaxelerOS may add or remove DFEs from the processing group to balance system demands; new DFEs must be loaded with the search DB before use.
33/55
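A hypothetical CPU-side view of this transaction pattern, as a plain-Java stand-in (this is not the MaxelerOS API; the types and the in-memory "database" below are invented for illustration):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of the find(x, db) pattern: the database is loaded
// once (into DFE DRAM, in the real system); each call is a short, stateless
// transaction that any loaded engine in the group could serve.
public class FindTransactionSketch {
    record Doc(int id, String text) {}

    static Optional<Doc> find(String x, List<Doc> db) {
        // Stand-in for the complex search function running on the DFE.
        return db.stream()
                 .filter(d -> d.text().contains(x))
                 .min(Comparator.comparingInt(Doc::id));
    }

    public static void main(String[] args) {
        List<Doc> db = List.of(new Doc(1, "genome sequence alignment"),
                               new Doc(2, "shortest distance search"));
        System.out.println(find("distance", db).orElseThrow());
    }
}
```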

Fine-Grained, Stateless: The BSOP Control
Analyse > 1,000,000 scenarios.
Many CPU processes run on many DFEs; each transaction executes on any DFE in the assigned group, atomically.
~50x MPC-X vs. a multi-core x86 node.
[Figure: the CPU streams market and instruments data to each DFE; each DFE loops over instruments, runs random number generation and sampling of underliers, and prices instruments using Black-Scholes; instrument values return to the CPU for tail analysis.]
34/55
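For concreteness, a plain-Java sketch of the per-scenario work the figure assigns to a DFE: sample the underlier, price by Monte Carlo under the Black-Scholes model, and leave tail analysis to the CPU. All market parameters below are assumed for illustration.

```java
import java.util.Random;

// Illustrative Monte Carlo pricing of a European call under geometric
// Brownian motion (the sampling-plus-Black-Scholes step from the figure);
// in the real system this inner loop runs as a dataflow pipeline.
public class BsopSketch {
    public static void main(String[] args) {
        Random rng = new Random(42);
        double s0 = 100, strike = 105, r = 0.02, sigma = 0.2, t = 1.0;
        int paths = 1_000_000;

        double payoffSum = 0;
        for (int i = 0; i < paths; i++) {
            double z = rng.nextGaussian();              // sample the underlier
            double sT = s0 * Math.exp((r - 0.5 * sigma * sigma) * t
                                      + sigma * Math.sqrt(t) * z);
            payoffSum += Math.max(sT - strike, 0);      // call payoff
        }
        double price = Math.exp(-r * t) * payoffSum / paths;
        System.out.printf("Monte Carlo call price ~ %.4f%n", price);
        // Tail analysis over the payoff distribution stays on the CPU.
    }
}
```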

Selected Examples 35/55

[Figure-only slide] 36/55

The CRS Results
Performance of one MAX2 card vs. 1 CPU core:
Land case (8 params): speedup of 230x.
Marine case (6 params): speedup of 190x.
[Figure: CPU coherency vs. MAX2 coherency.]
37/55

Seismic Imaging
Running on MaxNode servers:
- 8 parallel compute pipelines per chip
- 150 MHz => low power consumption!
- 30x faster than microprocessors
An Implementation of the Acoustic Wave Equation on FPGAs, T. Nemeth†, J. Stefani†, W. Liu†, R. Dimond‡, O. Pell‡, R. Ergas§ (†Chevron, ‡Maxeler, §formerly Chevron), SEG 2008.
38/55
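The pipelined computation behind such results is a finite-difference stencil. Below is a plain-Java sketch of a 1-D second-order acoustic wave-equation update; grid size, step count, and the Courant factor are illustrative, and the production FPGA version is 3-D and far more elaborate.

```java
// Illustrative 1-D acoustic wave equation, second-order leapfrog stencil:
// next[i] = 2*cur[i] - prev[i] + c * (cur[i-1] - 2*cur[i] + cur[i+1]).
public class WaveStencil {
    public static void main(String[] args) {
        int n = 1024, steps = 500;
        double c = 0.25;                     // (v*dt/dx)^2, assumed stable
        double[] prev = new double[n], cur = new double[n], next = new double[n];
        cur[n / 2] = 1.0;                    // impulsive point source

        for (int t = 0; t < steps; t++) {
            for (int i = 1; i < n - 1; i++) {
                next[i] = 2 * cur[i] - prev[i]
                        + c * (cur[i - 1] - 2 * cur[i] + cur[i + 1]);
            }
            double[] tmp = prev; prev = cur; cur = next; next = tmp;
        }
        System.out.println("centre amplitude after " + steps
                           + " steps: " + cur[n / 2]);
    }
}
```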

[Figure-only slide] 39/55

DM for Monitoring and Control in Seismic Processing
Velocity-independent / data-driven method to obtain a stack of traces, based on 8 parameters: a search for every sample of each output trace.
Trace Stacking: speedup of 217x (P. Marchetti et al., 2010).
The 8 parameters: emergence angle and azimuth; normal wavefront parameters (K_N,11, K_N,12, K_N,22); NIP wavefront parameters (K_NIP,11, K_NIP,12, K_NIP,22).
41/55

[Slides 42-44/55: figures only]

Conclusion: Nota Bene
This is about algorithmic changes, to maximize the algorithm-to-architecture match: data choreography, process modifications, and decision precision.
The winning paradigm of Big Data ExaScale?
45/55

The TriPeak: Siena + BSC + Imperial College + Maxeler + Belgrade 46/55

The TriPeak
MontBlanc = a ManyCore (NVidia) + a MultiCore (ARM).
Maxeler = a fine-grain DataFlow (FPGA).
How about a happy marriage? MontBlanc (OmpSs) and Maxeler (an accelerator).
In each happy marriage, it is known who does what :)
The Big Data DM algorithms: what part goes to MontBlanc and what to Maxeler?
47/55

Core of the Symbiotic Success
An intelligent DM algorithmic scheduler, partially implemented for compile time, and partially for run time.
At compile time: checking what part of the code fits where (MontBlanc or Maxeler): LoC 1M vs. 2K vs. 20K.
At run time: rechecking the compile-time decision, based on the current data values.
48/55
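A minimal sketch of such a two-stage decision, with assumed heuristics (the LoC and data-volume cutoffs below are invented for illustration, loosely echoing the slide's 2K figure; the real scheduler is only partially specified here):

```java
// Illustrative sketch of the two-stage scheduling decision: a compile-time
// fit check, then a run-time recheck against the current data values.
public class HybridScheduler {
    enum Target { MONTBLANC, MAXELER }

    // Compile time: does this code region fit the dataflow part?
    static Target compileTimeChoice(int linesOfCode, boolean streamable) {
        // Assumed heuristic: small, streamable hot loops go to the FPGA.
        return (streamable && linesOfCode < 2_000) ? Target.MAXELER
                                                   : Target.MONTBLANC;
    }

    // Run time: recheck the compile-time decision for the actual data volume.
    static Target runTimeChoice(Target planned, long items) {
        // Pipeline fill only pays off for big streams (threshold assumed).
        if (planned == Target.MAXELER && items < 1_000_000) {
            return Target.MONTBLANC;
        }
        return planned;
    }

    public static void main(String[] args) {
        Target planned = compileTimeChoice(1_500, true);
        System.out.println(runTimeChoice(planned, 50_000_000L)); // MAXELER
    }
}
```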

[Figure-only slide] 49/55

Maxeler: Teaching (Google: prof vm)
TEACHING, VLSI, PowerPoints, Maxeler:
Maxeler Veljko Explanations, August 2012
Maxeler Veljko Anegdotic, Maxeler Oskar Talk, August 2012
Maxeler Forbes Article
Flyer by JP Morgan
Flyer by Maxeler
HPC Tutorial Slides by Sasha and Veljko: Practice (Current Update)
Paper, unconditionally accepted for Advances in Computers by Elsevier
Paper, unconditionally accepted for Communications of the ACM
Tutorial Slides by Oskar: Theory (7 parts)
Slides by Jacob, New York
Slides by Jacob, Alabama
Slides by Sasha: Practice (Current Update)
Maxeler in Meteorology
Maxeler in Mathematics
Examples generated in Belgrade and Worldwide
THE COURSE ALSO INCLUDES DARPA METHODOLOGY FOR MICROPROCESSOR DESIGN, with an example
50/55

[Figure-only slide] (© H. Maurer) 51/55

Maxeler: Research (Google: good method)
Structure of a Typical Research Paper, Scenario #1 [Comparison of Platforms for One Algorithm]:
Curve A: MultiCore of approximately the same PurchasePrice.
Curve B: ManyCore of approximately the same PurchasePrice.
Curve C: Maxeler after a direct algorithm migration.
Curve D: Maxeler after algorithmic improvements.
Curve E: Maxeler after data choreography.
Curve F: Maxeler after precision modifications.
Structure of a Typical Research Paper, Scenario #2 [Ranking of Algorithms for One Application]:
CurveSet A: Comparison of algorithms on a MultiCore.
CurveSet B: Comparison of algorithms on a ManyCore.
CurveSet C: Comparison on Maxeler, after a direct algorithm migration.
CurveSet D: Comparison on Maxeler, after algorithmic improvements.
CurveSet E: Comparison on Maxeler, after data choreography.
CurveSet F: Comparison on Maxeler, after precision modifications.
52/55

[Figure-only slide] (© H. Maurer) 53/55

Maxeler: Topics (Google: HiPeac Berlin)
SRB (TR):
KG: Blood Flow
NS: Combinatorial Math
BG1: MiSANU Math
BG2: Meteos Meteorology
BG3: Physics (Gross-Pitaevskii 3D, real)
BG4: Physics (Gross-Pitaevskii 3D, imaginary)
(reusability with MPI/OpenMP vs. effort to accelerate)
FP7 (Call 11): University of Siena, Italy; ICL, UK; BSC, Spain; QPLAN, Greece; ETF, Serbia; IJS, Slovenia; …
54/55

Q&A (© H. Maurer) 55/55