TAU Parallel Performance System
Allen D. Malony, Sameer S. Shende, Robert Bell, Kai Li, Li Li, Kevin Huck
Department of Computer and Information Science
Performance Research Laboratory
University of Oregon
Cray Briefing, SC2002, Nov. 18

Outline
- Motivation
- TAU architecture and toolkit
- Instrumentation
- Measurement
- Analysis
- Example applications
- Users of TAU
- Conclusion

Problem Domain
- ASCI defines leading edge parallel systems and software
  - Large-scale systems and heterogeneous platforms
  - Multi-model simulation
  - Complex software integration
  - Multi-language programming
  - Mixed-model parallelism
- Complexity challenges performance analysis tools
  - System diversity requires portable tools
  - Need for cross-language support
  - Coverage of parallel computation models
  - Operate at scale

Research Motivation
- Tools for performance problem solving
  - Empirical-based performance optimization process
  - Performance technology concerns
[Diagram: empirical optimization cycle linking performance tuning, diagnosis, experimentation, and observation via hypotheses, properties, and characterization, supported by performance technology: instrumentation, measurement, analysis, visualization, experiment management, and a performance database]

TAU Performance System
- Tuning and Analysis Utilities (11+ year project effort)
- Performance system framework for scalable parallel and distributed high-performance computing
- Targets a general complex system computation model
  - Nodes / contexts / threads
  - Multi-level: system / software / parallelism
  - Measurement and analysis abstraction
- Integrated toolkit for performance instrumentation, measurement, analysis, and visualization
  - Portable performance profiling and tracing facility
  - Open software approach with technology integration
- University of Oregon, Forschungszentrum Jülich, LANL

TAU Performance System Goals
- Multi-level performance instrumentation
  - Multi-language automatic source instrumentation
- Flexible and configurable performance measurement
- Widely-ported parallel performance profiling system
  - Computer system architectures and operating systems
  - Different programming languages and compilers
- Support for multiple parallel programming paradigms
  - Multi-threading, message passing, mixed-mode, hybrid
- Support for performance mapping
- Support for object-oriented and generic programming
- Integration in complex software systems and applications

General Complex System Computation Model
- Node: physically distinct shared memory machine
  - Message passing node interconnection network
- Context: distinct virtual memory space within node
- Thread: execution threads (user/system) in context (see the sketch below)
[Diagram: physical view vs. model view - SMP nodes with memory connected by an interconnection network for inter-node message communication; each node contains contexts (VM spaces), and each context contains threads]
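A minimal sketch (not from the slides) of how an MPI application registers itself against this node/context/thread model using TAU's instrumentation macros (TAU_PROFILE, TAU_PROFILE_INIT, TAU_PROFILE_SET_NODE); the surrounding program structure is illustrative only.

```cpp
// Sketch: mapping an MPI rank onto TAU's node/context/thread model.
#include <mpi.h>
#include <TAU.h>

int main(int argc, char** argv) {
  TAU_PROFILE("main", "int (int, char**)", TAU_DEFAULT);  // time main()
  TAU_PROFILE_INIT(argc, argv);                           // initialize TAU measurement

  MPI_Init(&argc, &argv);
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  TAU_PROFILE_SET_NODE(rank);   // this MPI rank is one "node" in the model

  // ... application work; TAU tracks threads within this context per thread ...

  MPI_Finalize();
  return 0;
}
```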

TAU Performance System Architecture
[Diagram: TAU architecture, including trace export to formats such as EPILOG and Paraver]

TAU Instrumentation Approach
- Support for standard program events
  - Routines
  - Classes and templates
  - Statement-level blocks
- Support for user-defined events (see the sketch below)
  - Begin/end events (“user-defined timers”)
  - Atomic events
  - Selection of event statistics
- Support definition of “semantic” entities for mapping
- Support for event groups
- Instrumentation optimization
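A brief sketch of the two kinds of user-defined events named above, using standard TAU macros (interval timers via TAU_PROFILE_TIMER/START/STOP and atomic events via TAU_REGISTER_EVENT/TAU_EVENT); the routine and event names are made up for illustration.

```cpp
// Sketch: user-defined begin/end timers and atomic events with TAU macros.
#include <TAU.h>

void integrate_step(double* field, int n) {
  // Begin/end ("user-defined timer") event around a code region.
  TAU_PROFILE_TIMER(t, "integrate_step> inner loop", "", TAU_USER);
  TAU_PROFILE_START(t);
  for (int i = 0; i < n; i++) { /* ... numerical work on field[i] ... */ }
  TAU_PROFILE_STOP(t);

  // Atomic event: a value recorded at a point in time; TAU keeps
  // statistics (count, min, max, mean, std. dev.) for the event.
  TAU_REGISTER_EVENT(mem_event, "Bytes allocated for field");
  TAU_EVENT(mem_event, n * sizeof(double));
}
```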

TAU Instrumentation
- Flexible instrumentation mechanisms at multiple levels
- Source code
  - Manual
  - Automatic: C, C++, F77/90/95 (Program Database Toolkit (PDT)); OpenMP (directive rewriting (Opari), POMP spec)
- Object code
  - Pre-instrumented libraries (e.g., MPI using PMPI); see the wrapper sketch below
  - Statically-linked and dynamically-linked
- Executable code
  - Dynamic instrumentation (pre-execution) (DyninstAPI)
  - Virtual machine instrumentation (e.g., Java using JVMPI)
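To make the "pre-instrumented MPI library" idea concrete, here is a generic sketch of how the MPI profiling interface (PMPI) interposition works: the tool defines MPI_Send, measures it, and forwards to the name-shifted PMPI_Send. TAU's real wrappers also record message sizes and communication events; this is not TAU's actual source.

```cpp
// Sketch: an interposed MPI_Send wrapper using the PMPI profiling interface.
#include <mpi.h>
#include <TAU.h>

extern "C" int MPI_Send(void* buf, int count, MPI_Datatype type,
                        int dest, int tag, MPI_Comm comm) {
  // (MPI-3 declares buf as const void*; adjust to match your mpi.h.)
  TAU_PROFILE_TIMER(t, "MPI_Send()", "", TAU_MESSAGE);
  TAU_PROFILE_START(t);
  int rc = PMPI_Send(buf, count, type, dest, tag, comm);  // real MPI call
  TAU_PROFILE_STOP(t);
  return rc;
}
```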

TAU Source Instrumentation
- Automatic source instrumentation (TAUinstr)
  - Routine entry/exit and class method entry/exit
  - Block entry/exit and statement level (to be added)
  - Uses an instrumentation specification file (include/exclude list for events and files)
  - Uses command line options for group selection
- Instrumentation event selection (TAUselect)
  - Automatic generation of instrumentation specification file
  - Instrumentation language to describe event constraints (event identity and location; event performance properties, e.g., overhead analysis)
  - Create TAUselect scripts for performance experiments

Multi-Level Instrumentation
- Targets common measurement interface (TAU API)
- Multiple instrumentation interfaces
  - Simultaneously active
  - Information sharing between interfaces
  - Utilizes instrumentation knowledge between levels
- Selective instrumentation
  - Available at each level
  - Cross-level selection
- Targets a common performance model
- Presents a unified view of execution
  - Consistent performance events

Program Database Toolkit (PDT)
- Program code analysis framework for developing source-based tools
- High-level interface to source code information
- Integrated toolkit for source code parsing, database creation, and database query
  - Commercial grade front-end parsers
  - Portable IL analyzer, database format, and access API
  - Open software approach for tool development
  - Multiple source languages
- Implements automatic performance instrumentation tools (tau_instrumentor); see the sketch below
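A hypothetical sketch of a small source-based tool written against PDT's DUCTAPE library: open a program database (.pdb) file produced by the parsers and list the C/C++ routines an instrumentor could target. The class and accessor names used here (PDB, pdbCRoutine, getCRoutineVec, name, location) are assumptions about the DUCTAPE API; consult pdbAll.h in the PDT distribution for the exact interface.

```cpp
// Hypothetical sketch: listing candidate instrumentation points from a .pdb file.
#include <iostream>
#include "pdbAll.h"   // assumed DUCTAPE convenience header

int main(int argc, char** argv) {
  if (argc < 2) {
    std::cerr << "usage: listroutines <application.pdb>" << std::endl;
    return 1;
  }

  PDB pdb(argv[1]);     // read and index the program database file (assumed ctor)
  if (!pdb) return 1;   // unreadable or malformed database

  // Walk the C/C++ routine vector; each entry is a candidate event.
  const PDB::croutinevec& routines = pdb.getCRoutineVec();
  for (PDB::croutinevec::const_iterator it = routines.begin();
       it != routines.end(); ++it) {
    const pdbCRoutine* r = *it;
    std::cout << r->name() << " [" << r->location().file()->name()
              << ":" << r->location().line() << "]" << std::endl;
  }
  return 0;
}
```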

Program Database Toolkit (PDT)
[Diagram: application/library sources pass through C/C++ and Fortran F77/90/95 parsers and the corresponding IL analyzers into program database (PDB) files accessed via DUCTAPE; clients include PDBhtml (program documentation), SILOON (application component glue), CHASM (C++/F90/95 interoperability), and tau_instrumentor (automatic source instrumentation)]

PDT 3.0 Functionality
- C++ statement-level information implementation
  - for and while loops, declarations, initialization, assignment, ...
  - PDB records defined for most constructs
- DUCTAPE
  - Processes PDB 1.x, 2.x, 3.x uniformly
- PDT applications
  - XMLgen: PDB to XML converter (Sottile); used for CHASM and CCA tools
  - PDBstmt: statement callgraph display tool

PDT 3.0 Functionality (continued)
- Cleanscape Flint parser fully integrated for F90/95
  - Flint parser is very robust
  - Produces PDB records for TAU instrumentation (stage 1): Linux x86, HP Tru64, IBM AIX; tested on SAGE, POP, ESMF, PET benchmarking codes
  - Full PDB 2.0 specification (stage 2) [Q1 ‘04]
  - Statement level support (stage 3) [Q3 ‘04]
- Open64 parser integrated in PDT for F90/95
  - Barbara Chapman, University of Houston
  - Generate full PDB 2.0 specification (stage 2) [Q2 ‘04]
  - Statement level support (stage 3) [Q3 ‘04]
- PDT 3.0 release at SC2003

TAU Performance Measurement
- TAU supports profiling and tracing measurement
- Robust timing and hardware performance support
- Support for online performance monitoring
  - Profile and trace performance data export to file system
  - Selective exporting
- Extension of TAU measurement for multiple counters
  - Creation of user-defined TAU counters
  - Access to system-level metrics
- Support for callpath measurement
- Integration with system-level performance data
  - Linux MAGNET/MUSE (Wu Feng, LANL)

TAU Measurement with Multiple Counters
- Extend event measurement to capture multiple metrics
  - Begin/end (interval) events
  - User-defined (atomic) events
- Multiple performance data sources can be queried
- Associate a counter function list with each event
  - Defined statically or dynamically
- Different counter sources
  - Timers and hardware counters
  - User-defined counters (application specified)
  - System-level counters
- Monotonically increasing values required for begin/end events
- Extend user-defined counters to system-level counters

TAU Measurement
- Performance information
  - Performance events
  - High-resolution timer library (real-time / virtual clocks)
  - General software counter library (user-defined events)
  - Hardware performance counters
    - PCL (Performance Counter Library) (ZAM, Germany)
    - PAPI (Performance API) (UTK, Ptools Consortium)
    - Consistent, portable API (see the sketch below)
- Organization
  - Node, context, thread levels
  - Profile groups for collective events (runtime selective)
  - Performance data mapping between software levels
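A short sketch of the portable counter access that TAU builds on, using PAPI preset events directly; with TAU itself the counters are normally selected at configure/run time rather than coded by hand, so this is purely illustrative of the underlying API.

```cpp
// Sketch: reading hardware counters through PAPI's portable preset events.
#include <cstdio>
#include <papi.h>

int main() {
  if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) return 1;

  int evset = PAPI_NULL;
  PAPI_create_eventset(&evset);
  PAPI_add_event(evset, PAPI_FP_OPS);   // floating point operations
  PAPI_add_event(evset, PAPI_L1_DCM);   // L1 data cache misses

  long long counts[2];
  PAPI_start(evset);
  /* ... region of interest ... */
  PAPI_stop(evset, counts);

  std::printf("FP ops: %lld  L1 data cache misses: %lld\n", counts[0], counts[1]);
  return 0;
}
```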

TAU Measurement Options
- Parallel profiling
  - Function-level, block-level, statement-level
  - Supports user-defined events
  - TAU parallel profile data stored during execution
  - Hardware counter values
  - Support for multiple counters
  - Support for callgraph and callpath profiling
- Tracing
  - All profile-level events
  - Inter-process communication events
  - Trace merging and format conversion

Grouping Performance Data in TAU
- Profile groups
  - A group of related routines forms a profile group
- Statically defined
  - TAU_DEFAULT, TAU_USER[1-5], TAU_MESSAGE, TAU_IO, ...
- Dynamically defined
  - Group name based on a string, such as “adlib” or “particles”
  - Runtime lookup in a map to get a unique group identifier
  - Uses tau_instrumentor to instrument
- Ability to change group names at runtime
- Group-based instrumentation and measurement control (see the sketch below)
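A sketch of static and dynamic profile groups and runtime group control. The group-control macro names used here (TAU_GET_PROFILE_GROUP, TAU_ENABLE_GROUP_NAME, TAU_DISABLE_GROUP_NAME) are assumptions about the TAU API; check TAU.h in your installation, and treat the routine names as placeholders.

```cpp
// Sketch: statically and dynamically defined profile groups in TAU.
#include <TAU.h>

void io_phase() {
  // Statically defined group (TAU_IO is one of the predefined groups).
  TAU_PROFILE("io_phase()", "", TAU_IO);
  /* ... I/O work ... */
}

void particle_push() {
  // Dynamically defined group, looked up by name at runtime (assumed macro).
  TAU_PROFILE_TIMER(t, "particle_push()", "", TAU_GET_PROFILE_GROUP("particles"));
  TAU_PROFILE_START(t);
  /* ... particle work ... */
  TAU_PROFILE_STOP(t);
}

void control_measurement(bool want_particles) {
  // Group-based measurement control at runtime (assumed macros).
  if (want_particles) TAU_ENABLE_GROUP_NAME("particles");
  else                TAU_DISABLE_GROUP_NAME("particles");
}
```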

TAU Analysis
- Parallel profile analysis
  - Pprof: parallel profiler with text-based display
  - ParaProf: graphical, scalable, parallel profile analysis and display
- Trace analysis and visualization
  - Trace merging and clock adjustment (if necessary)
  - Trace format conversion (ALOG, SDDF, VTF, Paraver)
  - Trace visualization using Vampir (Pallas)

Pprof Output (NAS Parallel Benchmark – LU)
- Intel Quad PIII Xeon
- F90 + MPICH
- Profile per node / context / thread
- Events: code and MPI
[Screenshot: pprof text-based profile display]

ParaProf (NAS Parallel Benchmark – LU)
[Screenshot: ParaProf displays - global profiles by node/context/thread, routine profile across all nodes, event legend, and an individual profile]

TAU + PAPI (NAS Parallel Benchmark – LU)
- Floating point operations
- Re-link to alternate library
- Can use multiple counter support

TAU + Vampir (NAS Parallel Benchmark – LU)
[Screenshot: Vampir views - timeline display, callgraph display, parallelism display, communications display]

TAU Performance System Status
- Computing platforms (selected)
  - IBM SP / pSeries, SGI Origin 2K/3K, Cray T3E / SV-1 / X1, HP (Compaq) SC (Tru64), Sun, Hitachi SR8000, NEC SX-5/6, Linux clusters (IA-32/64, Alpha, PPC, PA-RISC, Power, Opteron), Apple (G4/5, OS X), Windows
- Programming languages
  - C, C++, Fortran 77/90/95, HPF, Java, OpenMP, Python
- Thread libraries
  - pthreads, SGI sproc, Java, Windows, OpenMP
- Compilers (selected)
  - Intel KAI (KCC, KAP/Pro), PGI, GNU, Fujitsu, Sun, Microsoft, SGI, Cray, IBM (xlc, xlf), Compaq, NEC, Intel

Selected Applications of TAU
- Center for Simulation of Accidental Fires and Explosions
  - University of Utah, ASCI ASAP Center, C-SAFE
  - Uintah Computational Framework (UCF) (C++)
- Center for Simulation of Dynamic Response of Materials
  - California Institute of Technology, ASCI ASAP Center
  - Virtual Test Shock Facility (VTF) (Python, Fortran 90)
- Los Alamos National Lab
  - Monte Carlo transport (MCNP) (Susan Post)
    - Full code automatic instrumentation and profiling
    - ASCI Q validation and scaling
  - SAIC’s Adaptive Grid Eulerian (SAGE) (Jack Horner)
    - Fortran 90 automatic instrumentation and profiling

Selected Applications of TAU (continued)
- Lawrence Livermore National Lab
  - Radiation diffusion (KULL)
  - C++ automatic instrumentation, callpath profiling
- Sandia National Lab
  - DOE CCTTSS SciDAC project
  - Common Component Architecture (CCA) integration
  - Combustion code (C++, Fortran 90)
- Flash Center
  - University of Chicago / Argonne, ASCI ASAP Center
  - FLASH code (C, Fortran 90)

Performance Analysis and Visualization
- Analysis of parallel profile and trace measurement
- Parallel profile analysis
  - ParaProf
  - ParaVis
  - Profile generation from trace data
  - Performance database framework (PerfDBF)
- Parallel trace analysis
  - Translation to VTF 3.0 and EPILOG
  - Integration with VNG (Technical University of Dresden)
- Online parallel analysis and visualization

ParaProf Framework Architecture
- Portable, extensible, and scalable tool for profile analysis
- Aims to offer “best of breed” capabilities to analysts
- Built as a profile analysis framework for extensibility

Profile Manager Window
- Structured AMR toolkit (SAMRAI++), LLNL

Full Profile Window (Exclusive Time)
- 512 processes

Node / Context / Thread Profile Window

Derived Metrics

Full Profile Window (Metric-specific)
- 512 processes

ParaProf Enhancements
- Readers completely separated from the GUI
- Access to performance profile database
- Profile translators: mpiP, papiprof, dynaprof
- Callgraph display (prof/gprof style with hyperlinks)
- Integration of 3D performance plotting library
- Scalable profile analysis
  - Statistical histograms, cluster analysis, ...
- Generalized programmable analysis engine
- Cross-experiment analysis

ParaVis
[Diagram: ParaVis architecture - performance data reader, performance analyzer, performance visualizer]
- Scalable parallel profile analysis
  - Scalable performance displays
  - 3D graphics
  - Analysis across profile samples
  - Allow for runtime use
  - Animated / interactive visualization
- Initially developed with SCIRun
  - Computational environment
  - Performance graphics toolkit
  - Portable plotting library (OpenGL)

Performance Visualization in SCIRun
[Screenshots: SCIRun program; EVH1 on IBM; EVH1 on Linux IA-32]

“Terrain” Visualization (Full profile)
- Uintah

“Scatterplot” Visualization
- Each point coordinate determined by three values: MPI_Reduce, MPI_Recv, MPI_Waitsome
- Min/max value range
- Effective for cluster analysis
- Uintah

“Bargraph” Visualization (MPI routines)
- Uintah, 512 processes, ASCI Blue Pacific

Empirical-Based Performance Optimization
[Diagram: empirical optimization process - performance tuning, diagnosis, experimentation, and observation linked by hypotheses, properties, characterization, and observability requirements; experiment management drives experiment schemas and experiment trials]

TAU Performance Database Framework
[Diagram: performance analysis programs and other tools use a performance analysis and query toolkit over an object-relational database (PostgreSQL PerfDB), which is populated via PerfDML translators from raw performance data and performance data descriptions]
- Profile data only
- XML representation
- Project / experiment / trial organization

PerfDBF Components
- Performance Data Meta Language (PerfDML)
  - Common performance data representation
  - Performance meta-data description
  - PerfDML translators to common data representation
- Performance DataBase (PerfDB)
  - Standard database technology (SQL)
  - Free, robust database software (PostgreSQL, MySQL)
  - Commonly available APIs
- Performance DataBase Toolkit (PerfDBT)
  - Commonly used modules for query and analysis
  - PerfDB API to facilitate analysis tool development

PerfDBF Browser

PerfDBF Cross-Trial Analysis

TAU Applications (Selected)
- SAMRAI (LLNL)
- Overture (LLNL)
- C-SAFE (ASCI ASAP, University of Utah)
- VTF (ASCI ASAP, Caltech)
- SAGE (ASCI, LANL)
- POOMA, POOMA-II (LANL, CodeSourcery)
- PETSc (ANL)
- CCA (DOE SciDAC)
- GrACE (Rutgers University)
- DOE ACTS toolkit
- Aurora / SCALEA (University of Vienna)

Work in Progress
- Trace visualization
  - Event traces with counters (Vampir 3.0 will visualize)
  - EPILOG trace conversion
- Runtime performance monitoring and analysis
  - Online performance data access
  - Performance analysis and visualization in SCIRun
- Performance Database Framework
  - XML parallel profile representation of TAU profiles
  - PostgreSQL performance database
- Next-generation PDT
- Performance analysis for component software (CCA)

Concluding Remarks
- Complex software and parallel computing systems pose challenging performance analysis problems that require robust methodologies and tools
- To build more sophisticated performance tools, existing proven performance technology must be utilized
- Performance tools must be integrated with software and systems models and technology
  - Performance-engineered software
  - Function consistently and coherently in software and system environments
- The TAU performance system offers robust performance technology that can be broadly integrated ... so USE IT!

Acknowledgements
- Department of Energy (DOE)
  - MICS office
  - DOE 2000 ACTS contract
  - “Performance Technology for Tera-class Parallel Computer Systems: Evolution of the TAU Performance System”
  - PERC SciDAC project affiliate
  - University of Utah DOE ASCI Level 1 sub-contract
  - DOE ASCI Level 3 (LANL, LLNL)
- NSF National Young Investigator (NYI) award
- Research Centre Jülich
  - John von Neumann Institute for Computing
  - Dr. Bernd Mohr
- Los Alamos National Laboratory

Case Study: SAMRAI (LLNL)
- Structured Adaptive Mesh Refinement Application Infrastructure (SAMRAI)
- Programming
  - C++ and MPI
  - SPMD
- Instrumentation
  - PDT for automatic instrumentation of routines
  - MPI interposition wrappers
  - SAMRAI timers for interesting code segments
    - Timers classified in groups (apps, mesh, ...)
    - Timer groups are managed by TAU groups

SAMRAI (Profile)
- Euler (2D)
[Screenshot: profile display annotated with return type and routine name]

SAMRAI Euler (Profile)

SAMRAI Euler (Trace)

Case Study: EVH1
- Enhanced Virginia Hydrodynamics #1 (EVH1)
  - "TeraScale Simulations of Neutrino-Driven Supernovae and Their Nucleosynthesis" SciDAC project
- Configured to run a simulation of the Sedov-Taylor blast wave solution in 2D spherical geometry
- Performance study found EVH1 communication-bound for more than 64 processors
  - Predominant routine (>50% of execution time) at this scale is MPI_ALLTOALL
  - Used in matrix transpose-like operations

EVH1 Execution Profile

EVH1 Execution Trace
- MPI_Alltoall is an execution bottleneck