TAUg: Runtime Global Performance Data Access using MPI
Kevin A. Huck, Allen D. Malony, Sameer Shende, and Alan Morris


TAUg: Runtime Global Performance Data Access using MPI
Kevin A. Huck, Allen D. Malony, Sameer Shende, and Alan Morris
Performance Research Laboratory
Department of Computer and Information Science
University of Oregon
EuroPVM/MPI

Motivation
- Growing interest in adaptive computations
  - Decisions at runtime based on dynamic state
  - Application adapts to current execution conditions: problem state and performance
- Scalable parallel performance measurements
  - Good solutions exist for parallel profiling and tracing
    - Profiling: Dynaprof, mpiP, HPMToolkit, TAU
    - Tracing: KOJAK, Paraver, TAU
- Use of performance tools in adaptive applications
  - Integrate with the performance measurement infrastructure
  - Requires a means to access performance data online

Need For A Runtime Global Performance View
- Performance data is collected locally and concurrently
  - Scalable efficiency dictates "local" measurements
    - Save data in processes or threads ("local context")
  - Done without synchronization or central control
- Parallel performance state is globally distributed
  - Logically part of the application's global data space
  - Offline tools aggregate data after execution
  - Online use requires access to the global performance state
- How does an application access performance data?
  - Abstract interface (view) to the global performance state
  - Extend the performance measurement system to collect data

Related Work
- Performance monitoring
  - OCM / OMIS [Wismuller1998]
  - Peridot [Gerndt2002]
  - Distributed Performance Consultant [Miller1995, Roth2006]
- Computational steering
  - Falcon [Gu1995]
  - Echo / PBIO / DataExchange [Eisenhauer1998]
- Online performance adaptation
  - Autopilot [Ribler2001]
  - Active Harmony [Tapus2002]
  - Computational Quality of Service [Norris2004, Ray2004]

Solution Approaches
- Any solution needs to take into consideration:
  - Granularity of the global performance data to be accessed
  - Cost of observation
- Approach 1: Build on a global file system
  - Tool dumps performance data to the file system
  - Application processes read from the file system
- Approach 2: Build on a collection network
  - Additional threads, processes, or daemons (external)
  - Gather and communicate performance data
  - MRNet (Paradyn) [Arnold2005] and Supermon [Sottile2005]
- Our approach: Build on MPI and TAU (TAUg)

Tarzan and Taug
"The ape-man's eyes fell upon Taug, the playmate of his childhood, … Tarzan knew, Taug was courageous, and he was young and agile and wonderfully muscled."
(Jungle Tales of Tarzan, Edgar Rice Burroughs)

TAUg (TAU global)
- TAU global performance space
  - TAU aggregates local parallel performance measurements
  - Uses the application's communication infrastructure (MPI)
- Application-level access to global performance data
  - TAU API lets the application create performance views
  - Application abstractions as the basis for performance views
  - TAU aggregates performance data for a view (internal)
- TAUg infrastructure focuses on:
  - Portability
  - Scalability
  - Application-level support

TAUg – System Design
(figure: TAUg system design diagram)

Global Performance Space: Views and Communicators
- Global performance communicator: a subset of MPI processes
- Global performance view: a subset of the global profile data

TAUg Programming Interface
- TAUg uses both TAU and MPI
- TAUg method interfaces
  - Designed in MPI style
  - Callable from C, C++, and Fortran
- View and communicator definition
  - void static TAU_REGISTER_VIEW()
  - void static TAU_REGISTER_COMMUNICATOR()
  - Called after MPI_Init()
- Get global performance view data
  - void static TAU_GET_VIEW()

View and Communicator Registration

void static TAU_REGISTER_VIEW(
    char* tau_event_name,
    int*  new_view_ID );

void static TAU_REGISTER_COMMUNICATOR(
    int   member_processes[],
    int   number_of_processes,
    int*  new_communicator_id );

Registration – Sample Code

int viewID = 0, commID = 0, numprocs = 0;

/* register a view for the calc method */
TAU_REGISTER_VIEW("calc()", &viewID);

MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
int members[numprocs];
for (int i = 0; i < numprocs; i++)
    members[i] = i;

/* register a communicator for all processes */
TAU_REGISTER_COMMUNICATOR(members, numprocs, &commID);

Registration – Implementation Details
- Registering a communicator
  - Creates a new MPI communicator
  - Uses process ranks from MPI_COMM_WORLD
  - Used by TAUg only
- TAUg maintains a map
  - MPI_COMM_WORLD rank → new communicator rank
- Returns a TAUg communicator ID
  - Not an MPI_Comm object reference
  - Ensures TAUg can support other communication libraries
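The slides state what registration does but not how. Below is a minimal C sketch, under the assumption that TAUg builds the communicator with standard MPI group calls and hides it behind an integer handle; the names taug_comm_table and taug_register_communicator are invented for this sketch and are not the actual TAUg internals.

#include <mpi.h>

#define TAUG_MAX_COMMS 64

/* Hypothetical internal table mapping TAUg communicator IDs to MPI communicators. */
static MPI_Comm taug_comm_table[TAUG_MAX_COMMS];
static int      taug_comm_count = 0;

/* Sketch: build an MPI communicator from a list of MPI_COMM_WORLD ranks and
   hand back only an integer ID, so the caller never touches an MPI_Comm. */
static int taug_register_communicator(const int *member_ranks, int nmembers,
                                      int *new_communicator_id)
{
    MPI_Group world_group, view_group;
    MPI_Comm  new_comm;

    /* Derive a group containing only the requested world ranks. */
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    MPI_Group_incl(world_group, nmembers, (int *)member_ranks, &view_group);

    /* Collective over MPI_COMM_WORLD; ranks outside the group get MPI_COMM_NULL. */
    MPI_Comm_create(MPI_COMM_WORLD, view_group, &new_comm);

    taug_comm_table[taug_comm_count] = new_comm;
    *new_communicator_id = taug_comm_count++;

    MPI_Group_free(&view_group);
    MPI_Group_free(&world_group);
    return 0;
}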

Getting a Global Performance Data View

void static TAU_GET_VIEW(
    int       viewID,
    int       communicatorID,
    int       type,
    int       sink,
    double**  data,
    int*      outSize );

- Choices for type:
  - TAU_ALL_TO_ONE (requires sink process)
  - TAU_ONE_TO_ALL (requires source process)
  - TAU_ALL_TO_ALL (sink parameter ignored)

Getting View – Sample Code

double *loopTime;
int size = 0, myid = 0, sink = 0;

MPI_Comm_rank(MPI_COMM_WORLD, &myid);
TAU_GET_VIEW(viewID, commID, TAU_ALL_TO_ONE, sink, &loopTime, &size);

if (myid == sink) {
    /* do something with the result... */
}

Getting View – Implementation Details
- TAUg gathers local performance data
  - Calls TAU_GET_FUNC_VALS()
  - Stops all performance measurements for the process
  - Updates the profile to current values
- Packages view data as a custom MPI_Datatype
- Sends to other processes according to type
  - TAU_ALL_TO_ONE (uses MPI_Gather())
  - TAU_ONE_TO_ALL (uses MPI_Scatter())
  - TAU_ALL_TO_ALL (uses MPI_Allgather())
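As an illustration of the outline above, here is a hedged sketch of the ALL_TO_ONE path: each process contributes one (rank, value) record described by a custom MPI datatype and the sink gathers the whole view. The record layout and the function name taug_gather_view are assumptions for the sketch, not the actual TAUg code.

#include <mpi.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical per-process view record: rank plus one metric value. */
typedef struct {
    int    rank;
    double value;
} taug_view_entry;

/* Sketch of the ALL_TO_ONE case. Assumes local_value was just refreshed
   from the TAU profile (e.g. via TAU_GET_FUNC_VALS). */
static void taug_gather_view(double local_value, int sink, MPI_Comm comm,
                             taug_view_entry **out, int *out_size)
{
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);

    /* Describe one (rank, value) record as a custom MPI datatype. */
    int          blocklens[2] = { 1, 1 };
    MPI_Aint     displs[2]    = { offsetof(taug_view_entry, rank),
                                  offsetof(taug_view_entry, value) };
    MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };
    MPI_Datatype entry_type;
    MPI_Type_create_struct(2, blocklens, displs, types, &entry_type);
    MPI_Type_commit(&entry_type);

    taug_view_entry  local = { rank, local_value };
    taug_view_entry *all   = NULL;
    if (rank == sink)
        all = malloc((size_t)nprocs * sizeof(*all));

    /* Collective exchange; MPI_Allgather / MPI_Scatter would cover the
       ALL_TO_ALL and ONE_TO_ALL variants listed on the slide. */
    MPI_Gather(&local, 1, entry_type, all, 1, entry_type, sink, comm);

    MPI_Type_free(&entry_type);
    *out      = (rank == sink) ? all : NULL;
    *out_size = (rank == sink) ? nprocs : 0;
}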

Experiment 1: Load Balancing Application
- Simulate a heterogeneous cluster of n processors
  - n/2 nodes are 2x as fast as the other n/2 nodes
  - Situation where external factors affect performance
- Application wants to load balance its work (a code sketch follows below)
  1. Load is initially divided evenly
  2. After each timestep, the application requests a TAUg performance view and redistributes the load based on the global performance state
- Only one TAUg performance view
  - Application-specific event
  - Execution time for work
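The slides describe the rebalancing loop only in prose. The following is a hypothetical C sketch, not code from the paper, of how an application might drive it: an ALL_TO_ALL view of a work event is fetched after every timestep and each rank rescales its share by the inverse of its measured time. The event name "do_work()", the stub routine, the rebalancing formula, and the assumption that the caller frees the returned buffer are all illustrative.

#include <mpi.h>
#include <stdlib.h>
#include <TAU.h>   /* TAU/TAUg API; exact header name may vary by installation */

#define NSTEPS 100 /* illustrative number of timesteps */

/* Stand-in for the real computation; in a real code this routine would be
   TAU-instrumented so its profile event is named "do_work()". */
static void do_work(double share)
{
    volatile double x = 0.0;
    long iters = (long)(share * 100000000.0);
    for (long i = 0; i < iters; i++) x += 1.0;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int nprocs, myid, viewID, commID;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    /* One view (the work event) and one communicator (all processes). */
    int *members = malloc(nprocs * sizeof(int));
    for (int i = 0; i < nprocs; i++) members[i] = i;
    TAU_REGISTER_VIEW("do_work()", &viewID);
    TAU_REGISTER_COMMUNICATOR(members, nprocs, &commID);

    double my_share = 1.0 / nprocs;   /* work is divided evenly at first */

    for (int step = 0; step < NSTEPS; step++) {
        do_work(my_share);

        /* Every process sees the whole view of measured work times. */
        double *times = NULL;
        int     size  = 0;
        TAU_GET_VIEW(viewID, commID, TAU_ALL_TO_ALL, 0, &times, &size);

        /* Rebalance: weight each rank's share by the inverse of its time,
           so faster processes receive more work next timestep. */
        double inv_sum = 0.0;
        for (int i = 0; i < size; i++) inv_sum += 1.0 / times[i];
        my_share = (1.0 / times[myid]) / inv_sum;

        free(times);   /* assumes the caller owns the returned buffer */
    }

    free(members);
    MPI_Finalize();
    return 0;
}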

Load Balancing Simulation Results
(chart: ~26% speedup with load balancing at all processor counts; 33% optimal distribution)

Experiment 2: Overhead in a Weak Scaling Study
- sPPM benchmark (3D gas dynamics)
- 22 global performance views monitored
  - One for each application function (not including MPI)
- 8 equal-sized communicator groups per experiment
  - 1, 2, 4, and 8 processes per group (4 experiments)
- Performance data for all views retrieved at each iteration
- Run on LLNL dual P4 2.4 GHz nodes
- Weak scaling study up to 64 processors
- TAUg overhead investigated
  - Never exceeds 0.1% of the total runtime
  - (total runtime: 111 seconds with 8 processors, 115 seconds with 64 processors)

TAUg Overhead in sPPM Weak Scaling Study
(chart: percent inclusive times at 64 processors; TAUg events are not major contributors to total runtime)

TAUg Overhead in sPPM
(chart: TAUg overhead in seconds of total runtime vs. number of processors)

Experiment 3: Overhead in a Strong Scaling Study
- ASCI Sweep3D benchmark (3D neutron transport)
- One global performance view exchanged
  - One event: "SWEEP" (the main calculation routine)
- One communicator consisting of all processes
  - All processes exchange performance data every timestep
- Run on LLNL dual P4 2.4 GHz nodes
- Strong scaling up to 512 processors
- TAUg overhead investigated
  - Never exceeds 1.3% of the total runtime
  - (total runtime: 77 minutes with 16 processors; 2.05 seconds overhead of a 162-second total with 512 processors)

TAUg Overhead in Sweep3D
(chart: inclusive time percentages at 512 processors, strong scaling; TAUg events are not major contributors to total runtime, even when exchanging among 512 processes every timestep)

TAUg Overhead in Sweep3D
(chart: TAUg overhead in seconds of total runtime vs. number of processors; 2.05 seconds at 512 processors)

Discussion
- TAUg currently limits global view access
  - Single event and single metric (exclusive)
  - Simplified implementation of the view data type
  - Need a flexible view data specification
  - Need multi-view access
- Limited view communication types
  - Only three communication methods, all collective with a barrier
  - More sophisticated patterns are imaginable, e.g. pairwise, non-blocking send/receive (sketched below)
- No performance data processing
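The slide only names pairwise, non-blocking exchange as a possible richer pattern. A minimal MPI sketch of such an exchange of a single performance value between two partner ranks, not part of TAUg, might look like this:

#include <mpi.h>

/* Sketch: exchange one local metric value with a partner rank using
   non-blocking sends/receives, so the exchange can overlap with
   computation instead of requiring a collective with a barrier. */
static double exchange_with_partner(double local_value, int partner,
                                    MPI_Comm comm)
{
    double      remote_value = 0.0;
    MPI_Request reqs[2];

    MPI_Irecv(&remote_value, 1, MPI_DOUBLE, partner, /*tag=*/42, comm, &reqs[0]);
    MPI_Isend(&local_value,  1, MPI_DOUBLE, partner, /*tag=*/42, comm, &reqs[1]);

    /* ... application work could proceed here before the wait ... */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    return remote_value;
}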

Future Work
- Provide full access to TAU performance data
- Provide better view specification and communication
- Develop helper functions (see the sketch after this list)
  - Basic statistics: average, minimum, maximum, ...
  - Relative differentials: between timesteps and between processes
- Examine one-sided communication in MPI-2
  - Support non-synchronous exchange of performance data
- Integrate into real applications
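The statistics helpers are listed only as future work. As an illustration, a small hypothetical helper that reduces the array returned by TAU_GET_VIEW to average, minimum, and maximum could look like the following; the struct and function names are invented for this sketch.

/* Hypothetical reduction helper over a view returned by TAU_GET_VIEW(). */
typedef struct {
    double average;
    double minimum;
    double maximum;
} taug_view_stats;

static taug_view_stats taug_view_statistics(const double *values, int n)
{
    taug_view_stats s = { 0.0, values[0], values[0] };
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        sum += values[i];
        if (values[i] < s.minimum) s.minimum = values[i];
        if (values[i] > s.maximum) s.maximum = values[i];
    }
    s.average = sum / n;
    return s;
}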

Conclusion
- An application may want to query its runtime performance state to make decisions that direct how the computation proceeds
- TAUg is designed as an abstraction layer that lets MPI-based applications retrieve performance views from the global performance space
- TAUg insulates the application developer from the complexity involved in exchanging the global performance space
- TAUg is as portable as MPI and TAU, and as scalable as the local MPI implementation
- Overhead is inevitable, but it is minimized and under the application's control

Support Acknowledgements
- Department of Energy (DOE)
  - Office of Science contracts
  - University of Utah ASCI Level 1
  - ASC/NNSA Level 3 contract
  - Lawrence Livermore National Laboratory
- Department of Defense (DoD)
  - HPC Modernization Office (HPCMO)
  - Programming Environment and Training (PET)
- NSF Software and Tools for High-End Computing
- Research Centre Juelich
- Los Alamos National Laboratory