Simulation of exascale nodes through runtime hardware monitoring

Presentation transcript:

Simulation of Exascale Nodes through Runtime Hardware Monitoring
Joseph L. Greathouse, Alexander Lyashevsky, Mitesh Meswani, Nuwan Jayasena, Michael Ignatowski
9/18/2013

Challenges of Exascale Node Research
Exascale: a huge design space to explore, with many design decisions:
- Heterogeneous cores: composition? size? speed?
- Stacked memories: useful? compute/BW ratio? latency? capacity? non-volatile?
- Thermal constraints: power sharing? heat dissipation? sprinting?
- Software co-design: new algorithms? data placement? programming models?
Heterogeneous chip picture from Sebastien Nussbaum's "AMD 'Trinity' APU," Hot Chips 2012. Stacked memory picture from Gabe Loh's "3D-Stacked Memory Architectures for Multi-Core Processors," ISCA 2008. Thermal picture from Indrani Paul's "Cooperative Boosting: Needy versus Greedy Power Management," ISCA 2013. Code snippet from the OpenCL version of LULESH.

Challenges of Exascale Node Research
Interesting questions require long runtimes:
- Power and thermals on a real heterogeneous processor: ~2.5 trillion CPU instructions, ~60 trillion GPU operations.
Exascale proxy applications are large:
- Large initialization phases, many long iterations.
- Not microbenchmarks.
- Inputs and computation are already reduced from the real HPC applications.
Exascale: huge execution times.
Chart from Indrani Paul's "Cooperative Boosting: Needy versus Greedy Power Management," ISCA 2013.
Instruction counts: 6 minutes * 60 s/min * 4 cores * ~1.75 GHz (A8-4555M); 6 minutes * 60 s/min * 6 CUs * 64 floating-point units/CU * ~425 MHz (Radeon HD 7600G).
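As a back-of-the-envelope check of those counts (a sketch only: it assumes every CPU core retires roughly one instruction per cycle and every GPU lane issues roughly one operation per cycle, which is the simplification the slide's arithmetic implies):

```python
# Rough check of the instruction/operation counts quoted above.
runtime_s = 6 * 60                        # ~6 minute run

cpu_ops = runtime_s * 4 * 1.75e9          # 4 cores at ~1.75 GHz (A8-4555M)
gpu_ops = runtime_s * 6 * 64 * 425e6      # 6 CUs x 64 FP units at ~425 MHz (HD 7600G)

print(f"CPU instructions: ~{cpu_ops / 1e12:.1f} trillion")   # ~2.5 trillion
print(f"GPU operations:   ~{gpu_ops / 1e12:.1f} trillion")   # ~58.8 trillion
```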

Existing Simulators
Microarchitecture simulators (e.g. gem5, Multi2Sim, MARSSx86, SESC, GPGPU-Sim):
- Excellent for low-level details. We need these!
- Too slow for design space exploration: ~60 trillion operations = ~1 year of simulation time.
Functional simulators (e.g. SimNow, Simics, QEMU):
- Faster than microarchitectural simulators; good for things like access patterns.
- No relation to hardware performance.
High-level simulators (e.g. Sniper, Graphite, CPR):
- Break operations down into timing models, e.g. core interactions, pipeline stalls.
- Faster and easier to parallelize, but runtimes and complexity are still constrained by the desire to achieve accuracy.
Notes: the 1 year of simulation time assumes ~2 MIPS sustained for an entire year. SST: Structural Simulation Toolkit. CPR: Composable Performance Regression.
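The one-year figure follows directly from the footnote's throughput assumption; a minimal sketch of that conversion (the ~2 MIPS rate is the slide's assumption, not a measurement):

```python
# ~60 trillion operations at ~2 MIPS (2e6 simulated operations per wall-clock
# second) works out to roughly a year of simulation time.
total_ops = 60e12
sim_rate  = 2e6          # assumed sustained microarchitectural-simulator speed
sim_days  = total_ops / sim_rate / 86400
print(f"~{sim_days:.0f} days, ~{sim_days / 365:.1f} years")   # ~347 days, ~1.0 years
```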

Trade Off Individual Test Accuracy
Doctors do not start with: a CAT scan, a chest X-ray, or a microscope.
Photo attributions: CAT scan: Flickr user "Muffet" -- http://www.flickr.com/photos/calliope/361953316/, used under CC BY. Chest X-ray: Flickr user "Aidan Jones" -- http://www.flickr.com/photos/aidan_jones/1438403889/, used under CC BY-SA. Microscope: Flickr user "Elsie esq." -- http://www.flickr.com/photos/elsie/7080667207/, used under CC BY.

Fast Simulation Using Hardware Monitoring

High-Level Simulation Methodology
Multi-stage performance estimation process:
- Phase 1: run the user application and its software tasks through the simulator software stack on the test system (OS, APU/CPU/GPU), collecting traces of HW and SW events related to performance and power.
- Phase 2: feed those traces and a machine description into the trace post-processor, which applies performance, power, and thermal models to produce performance and power estimates.
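The deck does not specify a trace format; the sketch below is a hypothetical record layout, only to make the split between "what Phase 1 records on the test system" and "what the Phase-2 post-processor consumes" concrete. All field names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SegmentTrace:
    """One software task (CPU region or GPU kernel) captured during Phase 1."""
    device: str                    # e.g. "CPU0", "GPU"
    start_s: float                 # wall-clock start on the test system
    end_s: float                   # wall-clock end on the test system
    counters: Dict[str, float] = field(default_factory=dict)   # HW perf counters
    sw_events: List[str] = field(default_factory=list)         # e.g. "pthread_create"

@dataclass
class MachineDescription:
    """Phase-2 input describing a hypothetical target node."""
    cpu_freq_ghz: float
    gpu_cus: int
    mem_bw_gbs: float
    mem_latency_ns: float
```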

Benefits of the Multi-Step Process
Many design space changes only need the second step: the Phase-1 traces of HW and SW events are collected once from the user application on the test system, and the Phase-2 trace post-processor then replays them against multiple machine descriptions, applying the performance, power, and thermal models to produce estimates for each design point.
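A minimal sketch of that reuse, building on the hypothetical SegmentTrace and MachineDescription types above. The estimate_runtime function is a deliberately naive stand-in for the Phase-2 models, not the actual post-processor:

```python
def estimate_runtime(trace, machine, baseline_freq_ghz=1.75):
    """Placeholder Phase-2 model: naively scale every segment's measured time by
    the CPU frequency ratio. The real post-processor applies per-device
    performance, power, and thermal models instead."""
    return sum((seg.end_s - seg.start_s) * baseline_freq_ghz / machine.cpu_freq_ghz
               for seg in trace)

# One Phase-1 trace, recorded once on the test system...
phase1_trace = [SegmentTrace("CPU0", 0.0, 2.0), SegmentTrace("GPU", 2.0, 5.0)]

# ...evaluated against many Phase-2 machine descriptions.
for m in [MachineDescription(2.0, 8, 200, 80),
          MachineDescription(3.0, 16, 400, 60)]:
    print(f"{m.cpu_freq_ghz} GHz design point: "
          f"{estimate_runtime(phase1_trace, m):.2f} s")
```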

Performance Scaling a Single-Threaded App
[Timeline figure: the application's runtime on the current processor, from start time to end time, next to the new runtime predicted for the simulated processor.]
Gather statistics and performance counters about:
- Instructions committed, stall cycles
- Memory operations, cache misses, etc.
- Power usage
An analytical performance scaling model converts these measurements into the new runtime on the simulated processor.
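A minimal sketch of what "gather statistics and performance counters" can look like once the raw counts are in hand. The counter names and values are invented for illustration and do not represent a real measurement interface:

```python
# Hypothetical counter values for one single-threaded segment on the test system.
counters = {
    "cycles":           4.2e9,
    "instructions":     3.1e9,
    "stall_cycles_mem": 1.6e9,   # cycles stalled waiting on memory
    "cache_misses_llc": 2.4e7,
    "package_energy_j": 38.0,    # from an energy counter such as RAPL
}

freq_hz   = 1.75e9
runtime_s = counters["cycles"] / freq_hz
ipc       = counters["instructions"] / counters["cycles"]
mem_frac  = counters["stall_cycles_mem"] / counters["cycles"]
avg_power = counters["package_energy_j"] / runtime_s

print(f"runtime {runtime_s:.2f} s, IPC {ipc:.2f}, "
      f"{mem_frac:.0%} of cycles stalled on memory, {avg_power:.1f} W")
```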

Analytic Performance Scaling
CPU performance scaling:
- Find stall events using HW performance counters; scale based on the new machine's parameters.
- There is a large amount of work in the literature on DVFS performance scaling and interval models.
- Some events can be scaled by fiat: "If IPC when not stalled on memory doubled, what would happen to performance?"
GPU performance scaling:
- Watch HW performance counters that indicate work to do, memory usage, and GPU efficiency.
- Scale values based on estimates of the parameters to test: "Based on what this application did, what will it do on our future machine?"
GPU example: Jaewoong Sim, Aniruddha Dasgupta, Hyesoon Kim, Richard W. Vuduc, "A Performance Analysis Framework for Identifying Potential Benefits in GPGPU Applications," PPoPP 2012.
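One simple form of such a model splits the measured time into a memory-stall part and a compute part and rescales each independently, in the spirit of leading-loads and interval-style DVFS models. The split below is a first-order sketch, not the specific model used in this work:

```python
def scale_cpu_runtime(runtime_s, mem_stall_frac,
                      old_freq_ghz, new_freq_ghz, mem_speedup=1.0):
    """First-order scaling: compute time shrinks with core frequency,
    memory-stall time shrinks with memory speedup; overlap is ignored."""
    t_mem     = runtime_s * mem_stall_frac
    t_compute = runtime_s - t_mem
    return t_compute * (old_freq_ghz / new_freq_ghz) + t_mem / mem_speedup

# "If the core ran at 3.5 GHz instead of 1.75 GHz, what would happen?"
# Uses the runtime and memory-stall fraction from the previous sketch.
print(f"{scale_cpu_runtime(2.4, mem_stall_frac=0.38, old_freq_ghz=1.75, new_freq_ghz=3.5):.2f} s")
```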

Other Analytic Models
Cache/memory access model:
- Observe memory access traces using binary instrumentation or hardware.
- Send the traces through a high-level cache simulation system.
- Find knees in the curve and feed this back to the CPU/GPU performance scaling model.
Power and thermal models:
- Power can be correlated to hardware events or directly measured, then scaled to future technology points.
- Any number of thermal models will work at this point.
- Thermal and power models can feed into control algorithms that change system performance. This is another HW/SW co-design point, so fast operation is essential.
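A compact sketch of the kind of high-level cache pass described above: replay an address trace through simple caches of several sizes and look for the knee in the miss-rate curve. This is illustrative only (fully associative LRU, a synthetic trace); the deck does not describe the actual cache model.

```python
from collections import OrderedDict

def miss_rate(addresses, capacity_lines, line_bytes=64):
    """Fully-associative LRU cache of `capacity_lines` lines (a simplification)."""
    cache, misses = OrderedDict(), 0
    for addr in addresses:
        line = addr // line_bytes
        if line in cache:
            cache.move_to_end(line)          # refresh LRU position on a hit
        else:
            misses += 1
            cache[line] = None
            if len(cache) > capacity_lines:
                cache.popitem(last=False)    # evict the least-recently-used line
    return misses / len(addresses)

# Synthetic trace: a 256 KB working set streamed repeatedly.
trace = [i * 64 for i in range(4096)] * 8

for size_kb in (64, 128, 256, 512):
    mr = miss_rate(trace, capacity_lines=size_kb * 1024 // 64)
    print(f"{size_kb:4d} KB cache: miss rate {mr:.2%}")   # knee appears at 256 KB
```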

Scaling Parallel Segments Requires More Information
[Timeline figure: CPU0, CPU1, and GPU segments over time. In the example, everything except CPU1 gets faster, and the per-segment performance differences change the overall schedule.]

Must Reconstruct Critical Paths
- Gather the program-level relationships between individually scaled segments.
- Use these happens-before relationships to build a legal execution order.
- Gather ordering from library calls like pthread_create() and clWaitForEvents().
- Segments can also be split based on program phases.
[Figure: numbered segments ①-⑦ interleaved across CPU0, CPU1, and the GPU, ordered by their happens-before edges.]
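A minimal sketch of reconstructing a legal ordering from happens-before edges and re-deriving the end-to-end time after per-segment scaling. The segment names, durations, and edges are invented; the real tool derives the edges from calls such as pthread_create() and clWaitForEvents().

```python
# Each segment: (scaled duration in seconds, segments that must finish first).
segments = {
    "cpu0_setup":  (0.8, []),
    "cpu1_work":   (1.5, ["cpu0_setup"]),                  # pthread_create() ordering
    "gpu_kernel":  (2.0, ["cpu0_setup"]),                  # kernel launch ordering
    "cpu0_finish": (0.4, ["cpu1_work", "gpu_kernel"]),     # clWaitForEvents()/join
}

finish = {}
def finish_time(name):
    """Earliest completion time respecting happens-before edges (memoized DFS)."""
    if name not in finish:
        dur, deps = segments[name]
        finish[name] = dur + max((finish_time(d) for d in deps), default=0.0)
    return finish[name]

total = max(finish_time(s) for s in segments)
print(f"critical-path runtime: {total:.1f} s")   # 0.8 + 2.0 + 0.4 = 3.2 s
```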

Conclusion
- The exascale node design space is huge.
- Trade off some accuracy for faster simulation.
- Use analytic models based on information from existing hardware.

Research Related Questions

MODSIM Related Questions
Major contributions:
- A fast, high-level performance, power, and thermal analysis infrastructure.
- Enables large design space exploration and HW/SW co-design with good feedback.
Limitations:
- Trace-based simulation has known limitations with respect to multiple paths of execution, wrong-path operations, etc.
- It can be difficult and slow to model something if your hardware can't measure values that are correlated to it.
Bigger picture:
- Node-level performance model for datacenter/cluster performance modeling.
- First-pass model for APU power-sharing algorithms.
- Exascale application co-design.
- Complementary work to broad projects like SST.
Prompt questions: What is the major contribution of your research? What are the gaps you identify in the research coverage in your area? What is the bigger picture for your research area (i.e., identify synergistic projects, complementary projects in a technical sense, etc.)? How do you see cross-pollination across projects funded by different funding agencies? What is the one thing that would make it easier/possible to leverage/use the results of other projects to further your own research? What would you like to most see solved/addressed other than what they are working on?

MODSIM Related Questions
What is the one thing that would make it easier to leverage the results of other projects to further your own research?
- Theoretical bounds and analytic reasoning behind performance numbers. Even "good enough" guesses may help, versus only giving the output of a simulator.
What are the important things to address in future work?
- Better analytic scaling models. There are a lot in the literature, but many rely on simulation to propose new hardware that would gather correct statistics.
- It would be great if open source performance monitoring software were better funded, had more people, etc.

Disclaimer & Attribution The information presented in this document is for informational purposes only and may contain technical inaccuracies, omissions and typographical errors. The information contained herein is subject to change and may be rendered inaccurate for many reasons, including but not limited to product and roadmap changes, component and motherboard version changes, new model and/or product releases, product differences between differing manufacturers, software changes, BIOS flashes, firmware upgrades, or the like. AMD assumes no obligation to update or otherwise correct or revise this information. However, AMD reserves the right to revise this information and to make changes from time to time to the content hereof without obligation of AMD to notify any person of such revisions or changes. AMD MAKES NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE CONTENTS HEREOF AND ASSUMES NO RESPONSIBILITY FOR ANY INACCURACIES, ERRORS OR OMISSIONS THAT MAY APPEAR IN THIS INFORMATION. AMD SPECIFICALLY DISCLAIMS ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE. IN NO EVENT WILL AMD BE LIABLE TO ANY PERSON FOR ANY DIRECT, INDIRECT, SPECIAL OR OTHER CONSEQUENTIAL DAMAGES ARISING FROM THE USE OF ANY INFORMATION CONTAINED HEREIN, EVEN IF AMD IS EXPRESSLY ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. ATTRIBUTION © 2013 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo and combinations thereof are trademarks of Advanced Micro Devices, Inc. in the United States and/or other jurisdictions. Other names are for informational purposes only and may be trademarks of their respective owners.

Backup

An Exascale Node Example Question
Is 3D-stacked memory beneficial for Application X?
- Inputs: baseline performance, bandwidth difference, latency difference, a thermal model, and core changes due to heat.
- Simulation run(s) produce performance results based on the design point(s).
- Redesign: modify the software to better utilize the new hardware.
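To make the example concrete, here is a hedged sketch of how the bandwidth and latency differences might be applied to a baseline measurement. The split of time into bandwidth-bound and latency-bound fractions, and the example numbers, are assumptions for illustration; the deck does not give the actual model.

```python
def stacked_memory_speedup(baseline_s, bw_bound_frac, lat_bound_frac,
                           bw_ratio, lat_ratio):
    """First-order estimate: bandwidth-bound time scales with 1/bw_ratio,
    latency-bound time scales with lat_ratio, and the rest is unchanged."""
    other = 1.0 - bw_bound_frac - lat_bound_frac
    new_s = baseline_s * (bw_bound_frac / bw_ratio +
                          lat_bound_frac * lat_ratio +
                          other)
    return baseline_s / new_s

# Hypothetical: 40% of time bandwidth-bound, 10% latency-bound, and stacked
# DRAM providing 4x the bandwidth at 0.8x the latency of the baseline.
print(f"{stacked_memory_speedup(10.0, 0.40, 0.10, 4.0, 0.8):.2f}x speedup")
```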