Tools for Performance Discovery and Optimization
Sameer Shende, Allen D. Malony, Alan Morris, Kevin Huck
University of Oregon

M52: Adaptive Tools and Frameworks for High Performance Numerical Computations (3/3)
SIAM Parallel Processing Conference, Fri., Feb. 24, 2006, Franciscan Room

Research Motivation
- Tools for performance problem solving
- Empirical-based performance optimization process
- Performance technology concerns
[Diagram: the performance optimization cycle of tuning, diagnosis (hypotheses, properties), experimentation, and observation, supported by performance technology: instrumentation, measurement, analysis, visualization, experiment management, and performance storage]

Outline of Talk
- Overview of TAU: instrumentation, measurement, analysis
- Performance data management and data mining
  - Performance Data Management Framework (PerfDMF)
  - PerfExplorer
- Multi-experiment case studies
  - Clustering analysis
- Future work and concluding remarks

TAU Performance System
- Tuning and Analysis Utilities (13+ year project effort)
- Performance system framework for HPC systems
  - Integrated, scalable, flexible, and parallel
- Targets a general complex system computation model
  - Entities: nodes / contexts / threads
  - Multi-level: system / software / parallelism
- Measurement and analysis abstraction
- Integrated toolkit for performance problem solving
  - Instrumentation, measurement, analysis, and visualization
  - Portable performance profiling and tracing facility
  - Performance data management and data mining

Definitions – Profiling
- Recording of summary information during execution
  - inclusive time, exclusive time, number of calls, hardware statistics, ...
- Reflects performance behavior of program entities
  - functions, loops, basic blocks
  - user-defined "semantic" entities
- Very good for low-cost performance assessment
- Helps to expose performance bottlenecks and hotspots
- Implemented through
  - sampling: periodic OS interrupts or hardware counter traps
  - instrumentation: direct insertion of measurement code

Definitions – Tracing
- Recording of information about significant points (events) during program execution
  - entering/exiting a code region (function, loop, block, ...)
  - thread/process interactions (e.g., send/receive message)
- Information saved in each event record
  - timestamp
  - CPU identifier, thread identifier
  - event type and event-specific information
- An event trace is a time-sequenced stream of event records
- Can be used to reconstruct dynamic program behavior
- Typically requires code instrumentation

TAU Parallel Performance System Goals
- Multi-level performance instrumentation
  - Multi-language automatic source instrumentation
- Flexible and configurable performance measurement
- Widely-ported parallel performance profiling system
  - Computer system architectures and operating systems
  - Different programming languages and compilers
- Support for multiple parallel programming paradigms
  - Multi-threading, message passing, mixed-mode, hybrid
- Support for performance mapping
- Support for object-oriented and generic programming
- Integration in complex software, systems, and applications

TAU Performance System Architecture
[Architecture diagram, highlighting event selection]

TAU Performance SystemSIAM PP’069 TAU Performance System Architecture

Program Database Toolkit (PDT)
[Diagram: application and library sources pass through C/C++ and Fortran (F77/90/95) parsers and IL analyzers into program database (PDB) files; the DUCTAPE library serves tools such as PDBhtml (program documentation), SILOON (application component glue), CHASM (C++/F90/95 interoperability), and TAU_instr (automatic source instrumentation)]

TAU Instrumentation Approach
- Support for standard program events
  - Routines
  - Classes and templates
  - Statement-level blocks
- Support for user-defined events
  - Begin/End events ("user-defined timers")
  - Atomic events (e.g., size of memory allocated/freed)
  - Selection of event statistics
- Support for definition of "semantic" entities for mapping
- Support for event groups
- Instrumentation optimization (eliminate instrumentation in lightweight routines)

TAU Instrumentation
- Flexible instrumentation mechanisms at multiple levels
- Source code
  - manual (TAU API, TAU Component API)
  - automatic: C, C++, F77/90/95 (Program Database Toolkit (PDT)); OpenMP (directive rewriting (Opari), POMP spec)
- Object code
  - pre-instrumented libraries (e.g., MPI using PMPI)
  - statically-linked and dynamically-linked
- Executable code
  - dynamic instrumentation (pre-execution) (DyninstAPI)
  - virtual machine instrumentation (e.g., Java using JVMPI)
- Proxy components

Using TAU – A Tutorial
- Configuration
- Instrumentation
  - Manual
  - MPI: wrapper interposition library
  - PDT: source rewriting for C, C++, F77/90/95
  - OpenMP: directive rewriting
  - Component-based instrumentation: proxy components
  - Binary instrumentation
    - DyninstAPI: runtime instrumentation / binary rewriting
    - Java: runtime instrumentation
    - Python: runtime instrumentation
- Measurement
- Performance analysis

TAU Measurement System Configuration
configure [OPTIONS]
  {-c++=<CC>, -cc=<cc>}   Specify C++ and C compilers
  {-pthread, -sproc}      Use pthread or SGI sproc threads
  -openmp                 Use OpenMP threads
  -jdk=<dir>              Specify Java instrumentation (JDK)
  -opari=<dir>            Specify location of the Opari OpenMP tool
  -papi=<dir>             Specify location of PAPI
  -pdt=<dir>              Specify location of PDT
  -dyninst=<dir>          Specify location of the DynInst package
  -mpi[inc/lib]=<dir>     Specify MPI library instrumentation
  -shmem[inc/lib]=<dir>   Specify PSHMEM library instrumentation
  -python[inc/lib]=<dir>  Specify Python instrumentation
  -epilog=<dir>           Specify location of EPILOG
  -slog2[=<dir>]          Specify location of SLOG2/Jumpshot
  -vtf=<dir>              Specify location of the VTF3 trace package
  -arch=<arch>            Specify architecture explicitly (bgl, ibm64, ibm64linux, ...)

TAU Measurement System Configuration
configure [OPTIONS]
  -TRACE              Generate binary TAU traces
  -PROFILE (default)  Generate profiles (summary)
  -PROFILECALLPATH    Generate call path profiles
  -PROFILEPHASE       Generate phase-based profiles
  -PROFILEMEMORY      Track heap memory for each routine
  -PROFILEHEADROOM    Track memory headroom to grow
  -MULTIPLECOUNTERS   Use hardware counters + time
  -COMPENSATE         Compensate for timer overhead
  -CPUTIME            Use user time + system time
  -PAPIWALLCLOCK      Use PAPI's wallclock time
  -PAPIVIRTUAL        Use PAPI's process virtual time
  -SGITIMERS          Use fast IRIX timers
  -LINUXTIMERS        Use fast x86 Linux timers

TAU Measurement Configuration – Examples

./configure -c++=xlC_r -pthread
  Use TAU with xlC_r and the pthread library under AIX; TAU profiling enabled (default)

./configure -TRACE -PROFILE
  Enable both TAU profiling and tracing

./configure -c++=xlC_r -cc=xlc_r -papi=/usr/local/packages/papi -pdt=/usr/local/pdtoolkit-3.4 -arch=ibm64 -mpiinc=/usr/lpp/ppe.poe/include -mpilib=/usr/lpp/ppe.poe/lib -MULTIPLECOUNTERS
  Use IBM's xlC_r and xlc_r compilers with the PAPI, PDT, and MPI packages, and multiple counters for measurements

Typically multiple measurement libraries are configured. Each configuration creates a unique <taudir>/<arch>/lib/Makefile.tau-<options> stub makefile that corresponds to the configuration options specified, e.g.:
  /usr/local/tau/tau-<version>/x86_64/lib/Makefile.tau-icpc-mpi-pdt
  /usr/local/tau/tau-<version>/x86_64/lib/Makefile.tau-icpc-mpi-pdt-trace

TAU_SETUP: A GUI for Installing TAU
  tau-2.x> ./tau_setup

Configuration Parameters in Stub Makefiles
Each TAU stub makefile resides in the <taudir>/<arch>/lib directory. Variables:
  TAU_CXX             Specify the C++ compiler used by TAU
  TAU_CC, TAU_F90     Specify the C and F90 compilers
  TAU_DEFS            Defines used by TAU; add to CFLAGS
  TAU_LDFLAGS         Linker options; add to LDFLAGS
  TAU_INCLUDE         Header file include path; add to CFLAGS
  TAU_LIBS            Statically linked TAU library; add to LIBS
  TAU_SHLIBS          Dynamically linked TAU library
  TAU_MPI_LIBS        TAU's MPI wrapper library for C/C++
  TAU_MPI_FLIBS       TAU's MPI wrapper library for F90
  TAU_FORTRANLIBS     Must be linked in with the C++ linker for F90
  TAU_CXXLIBS         Must be linked in with the F90 linker
  TAU_INCLUDE_MEMORY  Use TAU's malloc/free wrapper library
  TAU_DISABLE         TAU's dummy F90 stub library
  TAU_COMPILER        Instrument using the tau_compiler.sh script
Note: Not including TAU_DEFS in CFLAGS disables instrumentation in C/C++ programs (TAU_DISABLE for F90).

Manual Instrumentation – C++ Example

#include <TAU.h>

int foo(void);

int main(int argc, char **argv)
{
  TAU_PROFILE("int main(int, char **)", " ", TAU_DEFAULT);
  TAU_PROFILE_INIT(argc, argv);
  TAU_PROFILE_SET_NODE(0); /* for sequential programs */
  foo();
  return 0;
}

int foo(void)
{
  TAU_PROFILE("int foo(void)", " ", TAU_DEFAULT); // measures entire foo()
  TAU_PROFILE_TIMER(t, "foo(): for loop", "[23:45 file.cpp]", TAU_USER);
  TAU_PROFILE_START(t);
  for (int i = 0; i < N; i++) {
    work(i);
  }
  TAU_PROFILE_STOP(t);
  // other statements in foo ...
}

Manual Instrumentation – F90 Example

cc34567 Cubes program - comment line
      PROGRAM SUM_OF_CUBES
        integer profiler(2)
        save profiler
        INTEGER :: H, T, U
        call TAU_PROFILE_INIT()
        call TAU_PROFILE_TIMER(profiler, 'PROGRAM SUM_OF_CUBES')
        call TAU_PROFILE_START(profiler)
        call TAU_PROFILE_SET_NODE(0)
! This program prints all 3-digit numbers that
! equal the sum of the cubes of their digits.
        DO H = 1, 9
          DO T = 0, 9
            DO U = 0, 9
              IF (100*H + 10*T + U == H**3 + T**3 + U**3) THEN
                PRINT "(3I1)", H, T, U
              ENDIF
            END DO
          END DO
        END DO
        call TAU_PROFILE_STOP(profiler)
      END PROGRAM SUM_OF_CUBES

TAU's MPI Wrapper Interposition Library
- Uses the standard MPI profiling interface
  - Provides a name-shifted interface: MPI_Send calls PMPI_Send (weak bindings)
- Interpose TAU's MPI wrapper library between the application and MPI
  - -lmpi replaced by -lTauMpi -lpmpi -lmpi
- No change to the source code; just re-link the application to generate performance data
  - setenv TAU_MAKEFILE <taudir>/<arch>/lib/Makefile.tau-mpi-<options>
  - Use tau_cxx.sh, tau_f90.sh, and tau_cc.sh as compilers

Using TAU
- Install TAU
  % ./configure; make clean install
- Typically modify the application makefile
  - Change the compiler name to tau_cxx.sh or tau_f90.sh
- Set environment variables
  - Name of the stub makefile: TAU_MAKEFILE
  - Options passed to tau_compiler.sh: TAU_OPTIONS
- Execute the application
  % mpirun -np <procs> a.out
- Analyze performance data
  - paraprof, vampir, paraver, jumpshot, ...

Using Program Database Toolkit (PDT)
1. Parse the program to create foo.pdb:
   % cxxparse foo.cpp -I/usr/local/mydir -DMYFLAGS ...
   or
   % cparse foo.c -I/usr/local/mydir -DMYFLAGS ...
   or
   % f95parse foo.f90 -I/usr/local/mydir ...
   % f95parse *.f -omerged.pdb -I/usr/local/mydir -R free
2. Instrument the program:
   % tau_instrumentor foo.pdb foo.f90 -o foo.inst.f90 -f select.tau
3. Compile the instrumented program:
   % ifort foo.inst.f90 -c -I/usr/local/mpi/include -o foo.o

Using TAU
Step 1: Configure and install TAU:
  % ./configure -pdt=<dir> -mpiinc=<dir> -mpilib=<dir> -c++=icpc -cc=icc -fortran=intel
  % make clean; make install
  Builds <taudir>/<arch>/lib/Makefile.tau-<options>
  % set path=($path <taudir>/<arch>/bin)
Step 2: Choose the target stub makefile:
  % setenv TAU_MAKEFILE /san/cca/tau/tau-<version>/x86_64/lib/Makefile.tau-icpc-mpi-pdt
  % setenv TAU_OPTIONS '-optVerbose -optKeepFiles'
  (see tau_compiler.sh for all options)
Step 3: Use tau_f90.sh, tau_cxx.sh, and tau_cc.sh as the F90, C++, and C compilers respectively:
  % tau_f90.sh -c app.f90
  % tau_f90.sh app.o -o app -lm -lblas
  Or use these in the application makefile.

Auto-Instrumentation Using TAU_COMPILER
- $(TAU_COMPILER) stub makefile variable in the release
- Invokes the PDT parser, TAU instrumentor, and compiler through the tau_compiler.sh shell script
- Requires minimal changes to the application makefile
  - Compilation rules are not changed
  - User sets the TAU_MAKEFILE and TAU_OPTIONS environment variables
  - User renames the compilers: F90=xlf90 becomes F90=tau_f90.sh
- Passes options from the TAU stub makefile to the four compilation stages
- Uses the original compilation command if an error occurs

tau_[cxx,cc,f90].sh – Improves Integration in Makefiles

# set the TAU_MAKEFILE and TAU_OPTIONS env vars
CXX    = tau_cxx.sh
F90    = tau_f90.sh
CFLAGS =
LIBS   = -lm
OBJS   = f1.o f2.o f3.o ... fn.o

app: $(OBJS)
	$(CXX) $(LDFLAGS) $(OBJS) -o $@ $(LIBS)

.cpp.o:
	$(CXX) $(CFLAGS) -c $<

Using the Stub Makefile and TAU_COMPILER

include /usr/common/acts/TAU/tau-<version>/rs6000/lib/Makefile.tau-mpi-pdt-trace
MYOPTIONS = -optVerbose -optKeepFiles
F90  = $(TAU_COMPILER) $(MYOPTIONS) mpxlf90
OBJS = f1.o f2.o f3.o ...
LIBS = -Lappdir -lapplib1 -lapplib2 ...

app: $(OBJS)
	$(F90) $(OBJS) -o app $(LIBS)

.f90.o:
	$(F90) -c $<

TAU_COMPILER Options
Optional parameters for $(TAU_COMPILER) [tau_compiler.sh -help]:
  -optVerbose           Turn on verbose debugging messages
  -optPdtDir=""         PDT architecture directory; typically $(PDTDIR)/$(PDTARCHDIR)
  -optPdtF95Opts=""     Options for the Fortran parser in PDT (f95parse)
  -optPdtCOpts=""       Options for the C parser in PDT (cparse); typically $(TAU_MPI_INCLUDE) $(TAU_INCLUDE) $(TAU_DEFS)
  -optPdtCxxOpts=""     Options for the C++ parser in PDT (cxxparse); typically $(TAU_MPI_INCLUDE) $(TAU_INCLUDE) $(TAU_DEFS)
  -optPdtF90Parser=""   Specify a different Fortran parser, e.g., f90parse instead of f95parse
  -optPdtUser=""        Optional arguments for parsing source code
  -optPDBFile=""        Specify a [merged] PDB file; skips the parsing phase
  -optTauInstr=""       Specify the location of tau_instrumentor; typically $(TAUROOT)/$(CONFIG_ARCH)/bin/tau_instrumentor
  -optTauSelectFile=""  Specify a selective instrumentation file for tau_instrumentor
  -optTau=""            Specify options for tau_instrumentor
  -optCompile=""        Options passed to the compiler; typically $(TAU_MPI_INCLUDE) $(TAU_INCLUDE) $(TAU_DEFS)
  -optLinking=""        Options passed to the linker; typically $(TAU_MPI_FLIBS) $(TAU_LIBS) $(TAU_CXXLIBS)
  -optNoMpi             Removes -l*mpi* libraries during linking (default)
  -optKeepFiles         Does not remove intermediate .pdb and .inst.* files
e.g.,
  % setenv TAU_OPTIONS '-optTauSelectFile=select.tau -optVerbose -optPdtCOpts="-I/home -DFOO"'
  % tau_cxx.sh matrix.cpp -o matrix -lm

Instrumentation Specification
% tau_instrumentor
Usage: tau_instrumentor <pdbfile> <sourcefile> [-o <outputfile>] [-noinline] [-g groupname] [-i headerfile] [-c|-c++|-fortran] [-f <instr_request_file>]
For selective instrumentation, use the -f option:
% tau_instrumentor foo.pdb foo.cpp -o foo.inst.cpp -f selective.dat
% cat selective.dat
# Selective instrumentation: specify an exclude/include list of routines/files.
BEGIN_EXCLUDE_LIST
void quicksort(int *, int, int)
void sort_5elements(int *)
void interchange(int *, int *)
END_EXCLUDE_LIST
BEGIN_FILE_INCLUDE_LIST
Main.cpp
Foo?.c
*.C
END_FILE_INCLUDE_LIST
# Instruments routines in Main.cpp, Foo?.c and *.C files only
# Use BEGIN_[FILE]_INCLUDE_LIST with END_[FILE]_INCLUDE_LIST

Automatic Outer-Loop-Level Instrumentation

BEGIN_INSTRUMENT_SECTION
loops file="loop_test.cpp" routine="multiply"
# it also understands # as the wildcard in routine names
# and * and ? wildcards in file names.
# You can also specify the full name of the routine
# as it is found in profile files.
#loops file="loop_test.cpp" routine="double multiply#"
END_INSTRUMENT_SECTION

% pprof
NODE 0; CONTEXT 0; THREAD 0:
%Time  Exclusive msec  Inclusive total msec  #Call  #Subrs  Inclusive usec/call  Name
[Table: timings for int main(int, char **), double multiply(), and three instrumented "Loop: double multiply()" entries, each tagged with its file and line,col range]

Optimization of Program Instrumentation
- Need to eliminate instrumentation in frequently executing lightweight routines
- Throttling of events at runtime:
  % setenv TAU_THROTTLE 1
  Turns off instrumentation in routines that execute more than TAU_THROTTLE_NUMCALLS times and take less than 10 microseconds of inclusive time per call (TAU_THROTTLE_PERCALL)
- Selective instrumentation file to filter events:
  % tau_instrumentor [options] -f <selective_instrumentation_file>
- Compensation of local instrumentation overhead:
  % ./configure -COMPENSATE

TAU_REDUCE
- Reads profile files and rules
- Creates a selective instrumentation file
- Specifies which routines should be excluded from instrumentation
[Diagram: tau_reduce combines rules with profile data to produce a selective instrumentation file]

TAU Performance SystemSIAM PP’0633 Building Bridges to Other Tools: TAU

TAU Performance System Interfaces
- PDT [U. Oregon, LANL, FZJ] for instrumentation of C++, C99, F95 source code
- PAPI [UTK] and PCL [FZJ] for access to hardware performance counter data
- DyninstAPI [U. Maryland, U. Wisconsin] for runtime instrumentation
- KOJAK [FZJ, UTK]
  - EPILOG trace generation library
  - CUBE callgraph visualizer
  - Opari OpenMP directive rewriting tool
- Vampir / Intel Trace Analyzer [Pallas/Intel]
  - VTF3 trace generation library for Vampir [TU Dresden] (available from the TAU website)
- Paraver trace visualizer [CEPBA]
- Jumpshot-4 trace visualizer [MPICH, ANL]
- JVMPI from the JDK for Java program instrumentation [Sun]
- ParaProf profile browser / PerfDMF database supports:
  - TAU format
  - Gprof [GNU]
  - HPM Toolkit [IBM]
  - MpiP [ORNL, LLNL]
  - Dynaprof [UTK]
  - PSRun [NCSA]
- The PerfDMF database can use Oracle, MySQL, or PostgreSQL (IBM DB2 support planned)

PAPI [UTK]
- Performance Application Programming Interface
- The purpose of the PAPI project is to design, standardize, and implement a portable and efficient API to access the hardware performance monitor counters found on most modern microprocessors
- Parallel Tools Consortium project
- University of Tennessee, Knoxville

TAU: An Overview
- Instrumentation
- Measurement
- Analysis

Profile Measurement – Three Flavors
- Flat profiles
  - Time (or counts) spent in each routine (nodes in the callgraph)
  - Exclusive/inclusive time, number of calls, child calls
  - E.g.: MPI_Send, foo, ...
- Callpath profiles
  - Flat profiles, plus the sequence of actions that led to poor performance
  - Time spent along a calling path (edges in the callgraph)
  - E.g., "main => f1 => f2 => MPI_Send" shows the time spent in MPI_Send when called by f2, when f2 is called by f1, and f1 by main; the depth of this callpath is 4 (TAU_CALLPATH_DEPTH environment variable)
- Phase-based profiles
  - Flat profiles, plus flat profiles under a phase (nested phases are allowed)
  - The default "main" phase contains all phases and routines invoked outside phases
  - Supports static or dynamic (per-iteration) phases
  - E.g., "IO => MPI_Send" is the time spent in MPI_Send in the IO phase

TAU Timers and Phases
- Static timer: shows time spent in all invocations of a routine (foo)
  - E.g., "foo()" 100 secs, 100 calls
- Dynamic timer: shows time spent in each invocation of a routine
  - E.g., "foo() 3" 4.5 secs, "foo() 10" 2 secs (invocations 3 and 10 respectively)
- Static phase: shows time spent in all routines called (directly or indirectly) by a given routine (foo)
  - E.g., "foo() => MPI_Send()" 100 secs, 10 calls: a total of 100 secs was spent in MPI_Send() when it was called by foo
- Dynamic phase: shows time spent in all routines called by a given invocation of a routine
  - E.g., "foo() 4 => MPI_Send()" 12 secs: 12 secs were spent in MPI_Send when it was called by the 4th invocation of foo

Static Timers in TAU

      SUBROUTINE SUM_OF_CUBES
        integer profiler(2)
        save profiler
        INTEGER :: H, T, U
        call TAU_PROFILE_TIMER(profiler, 'SUM_OF_CUBES')
        call TAU_PROFILE_START(profiler)
! This program prints all 3-digit numbers that
! equal the sum of the cubes of their digits.
        DO H = 1, 9
          DO T = 0, 9
            DO U = 0, 9
              IF (100*H + 10*T + U == H**3 + T**3 + U**3) THEN
                PRINT "(3I1)", H, T, U
              ENDIF
            END DO
          END DO
        END DO
        call TAU_PROFILE_STOP(profiler)
      END SUBROUTINE SUM_OF_CUBES

Static Phases and Timers

      SUBROUTINE FOO
        integer profiler(2)
        save profiler
        call TAU_PHASE_CREATE_STATIC(profiler, 'foo')
        call TAU_PHASE_START(profiler)
! Here bar calls MPI_Barrier, and we evaluate foo=>MPI_Barrier and foo=>bar
        call bar()
        call TAU_PHASE_STOP(profiler)
      END SUBROUTINE FOO

      SUBROUTINE BAR
        integer profiler(2)
        save profiler
        call TAU_PROFILE_TIMER(profiler, 'bar')
        call TAU_PROFILE_START(profiler)
        call MPI_Barrier()
        call TAU_PROFILE_STOP(profiler)
      END SUBROUTINE BAR

Dynamic Phases

      SUBROUTINE ITERATE(IER, NIT)
        IMPLICIT NONE
        INTEGER IER, NIT
        character(11) taucharary
        integer tauiteration / 0 /
        integer profiler(2) / 0, 0 /
        save profiler, tauiteration
        write (taucharary, '(a8,i3)') 'ITERATE ', tauiteration
! taucharary is the name of the phase, e.g., 'ITERATE  23'
        tauiteration = tauiteration + 1
        call TAU_PHASE_CREATE_DYNAMIC(profiler, taucharary)
        call TAU_PHASE_START(profiler)
        IER = 0
        call SOLVE_K_EPSILON_EQ(IER)
! Other work
        call TAU_PHASE_STOP(profiler)
      END SUBROUTINE ITERATE

TAU Performance SystemSIAM PP’0642 TAU’s ParaProf Profile Browser: Static Timers

TAU Performance SystemSIAM PP’0643 Dynamic Timers

Static Phases
MPI_Barrier took 4.85 secs of the time spent in the DTM phase

Dynamic Phases
The first iteration was expensive for INT_RTE; subsequent iterations took less time: 14.2, 10.5, 10.3, and 10.5 seconds

Dynamic Phases
Time spent in MPI_Barrier, MPI_Recv, ... in DTM ITERATION 1; breakdown of the time spent in MPI_Isend based on its static and dynamic parent phases

Advances in TAU Performance Analysis
- Enhanced parallel profile analysis (ParaProf)
  - Callpath analysis integration in ParaProf
  - Event callgraph view
- Performance Data Management Framework (PerfDMF)
  - First release of prototype
- Integration with Vampir Next Generation (VNG)
  - Online trace analysis
- 3D performance visualization
- Component performance modeling and QoS

pprof – Flat Profile (NAS PB LU)
- Intel Linux cluster, F90 + MPICH
- Profile: node, context, thread
- Events: code, MPI
- Metric: time
- Text display

Terminology – Example
For routine "int main()":
- Exclusive time = 10 secs
- Inclusive time = 100 secs
- Calls = 1
- Subrs (number of child routines called) = 3
- Inclusive time/call = 100 secs

int main() { /* takes 100 secs */
  f1();  /* takes 20 secs */
  f2();  /* takes 50 secs */
  f1();  /* takes 20 secs */
  /* other work */
}
/* Time can be replaced by counts from PAPI, e.g., PAPI_FP_INS. */

ParaProf – Manager Window
[Screenshot: performance database and derived performance metrics]

TAU Performance SystemSIAM PP’0651 Performance Database: Storage of MetaData

TAU Performance SystemSIAM PP’0652 ParaProf – Full Profile (Miranda) 8K processors!

TAU Performance SystemSIAM PP’0653 ParaProf– Flat Profile (Miranda)

TAU Performance SystemSIAM PP’0654 ParaProf– Callpath Profile (Flash)

TAU Performance SystemSIAM PP’0655 Gprof Style Callpath View in Paraprof (SAGE)

ParaProf – Phase Profile (MFIX)
Dynamic phases, one per iteration: the view shows the time spent in MPI_Waitall in the 51st iteration alongside the total time spent in MPI_Waitall across all 92 iterations

TAU Performance SystemSIAM PP’0657 ParaProf - Statistics Table (Uintah)

ParaProf – Histogram View (Miranda)
- Scalable 2D displays
- 8k and 16k processors

TAU Performance SystemSIAM PP’0659 ParaProf –Callgraph View (MFIX)

ParaProf – Callpath Highlighting (Flash)
[Highlighted routine: MODULEHYDRO_1D:HYDRO_1D]

ParaProf – 3D Full Profile (Miranda)
 16K processors

ParaProf – Bar Plot (zoom in/out with +/-)

ParaProf – 3D Scatterplot (Miranda)
 Each point is a “thread” of execution
 A total of four metrics shown in relation
 ParaVis 3D profile visualization library
  JOGL

Important Questions for Application Developers
 How does performance vary with different compilers?
 Is poor performance correlated with certain OS features?
 Has a recent change caused unanticipated performance effects?
 How does performance vary with MPI variants?
 Why is one application version faster than another?
 What is the reason for the observed scaling behavior?
 Did two runs exhibit similar performance?
 How are performance data related to application events?
 Which machines will run my code the fastest, and why?
 Which benchmarks predict my code’s performance best?

Performance Problem Solving Goals
 Answer questions at multiple levels of interest
 Data from low-level measurements and simulations
  use to predict application performance
 High-level performance data spanning dimensions
  machine, applications, code revisions, data sets
  examine broad performance trends
 Discover general correlations between application performance and features of the external environment
 Develop methods to predict application performance from lower-level metrics
 Discover performance correlations between a small set of benchmarks and a collection of applications that represent a typical workload for a given system
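The last goal, correlating a small benchmark set with a workload's applications, can be sketched as a Pearson correlation of runtimes across machines; the numbers below are illustrative assumptions, not measured data:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

bench = [10.0, 14.0, 21.0, 30.0]      # hypothetical benchmark time per machine
app = [100.0, 138.0, 205.0, 310.0]    # hypothetical application time per machine
r = pearson(bench, app)
print(round(r, 3))  # near 1.0: this benchmark would predict the app well
```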

PerfDMF: Performance Data Management Framework

ParaProf Performance Profile Analysis
 Raw files: HPMToolkit, MpiP, TAU
 PerfDMF managed (database)
 Metadata
 Hierarchy: Application, Experiment, Trial

PerfExplorer
 Performance knowledge discovery framework
  Uses the existing TAU infrastructure: TAU instrumentation data, PerfDMF
  Client-server system architecture
 Data mining analysis applied to parallel performance data
  comparative, clustering, correlation, dimension reduction, ...
 Technology integration
  Relational Database Management Systems (RDBMS)
  Java API and toolkit
  R-project / Omegahat statistical analysis
  WEKA data mining package
  Web-based client

PerfExplorer Architecture

PerfExplorer Client GUI

Hierarchical and K-means Clustering (sPPM)
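A minimal sketch of the k-means idea applied to per-thread profile values (1-D, two clusters); the times below are synthetic stand-ins, not the sPPM measurements:

```python
def kmeans(values, centers, iters=10):
    """Lloyd's algorithm in 1-D: assign each value to the nearest center,
    then recompute each center as the mean of its group."""
    groups = [[] for _ in centers]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda k: abs(v - centers[k]))
            groups[i].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers, groups

# Two behavioral groups of threads: fast (~1 s) and slow (~5 s)
times = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
centers, groups = kmeans(times, [0.0, 10.0])
print(sorted(round(c, 2) for c in centers))  # [1.0, 5.0]
```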

Miranda Clustering on 16K Processors

PERC Tool Requirements and Evaluation
 Performance Evaluation Research Center (PERC)
  DOE SciDAC
  Evaluation methods/tools for high-end parallel systems
 PERC tools study (led by ORNL, Pat Worley)
  In-depth performance analysis of select applications
  Evaluate performance analysis requirements
  Test tool functionality and ease of use
 Applications
  Start with fusion code – GYRO
  Repeat with other PERC benchmarks
  Continue with SciDAC codes

Primary Evaluation Machines
 Phoenix (ORNL – Cray X1): 512 multi-streaming vector processors
 Ram (ORNL – SGI Altix, 1.5 GHz Itanium2): 256 total processors
 TeraGrid: ~7,738 total processors on 15 machines at 9 sites
 Cheetah (ORNL – p690 cluster, 1.3 GHz, HPS): 864 total processors on 27 compute nodes
 Seaborg (NERSC – IBM SP3): 6,080 total processors on 380 compute nodes

GYRO Execution Parameters
 Three benchmark problems
  B1-std: 16n processors, 500 timesteps
  B2-cy: 16n processors, 1000 timesteps
  B3-gtc: 64n processors, 100 timesteps (very large)
 Test different methods to evaluate nonlinear terms:
  Direct method
  FFT (“nl2” for B1 and B2, “nl1” for B3)
 Task affinity enabled/disabled (p690 only)
 Memory affinity enabled/disabled (p690 only)
 Filesystem location (Cray X1 only)

PerfExplorer Analysis of Self-Instrumented Data
 PerfExplorer
  Focus on comparative analysis
  Apply to PERC tool evaluation study
 Look at user timer data
  Aggregate data only: with no per-process data, process clustering analysis is not applicable
  Timings output every N timesteps, so some phase analysis is possible
 Goal
  Recreate manually generated performance reports

PerfExplorer Interface
 Select experiments and trials of interest
 Data organized in an application / experiment / trial structure (arbitrary organization to be allowed in the future)
 Experiment metadata

PerfExplorer Interface
 Select analysis

Timesteps per Second
 Cray X1 is the fastest to solution in all three tests (B1-std, B2-cy, B3-gtc)
 FFT (nl2) improves time for B3-gtc only
 TeraGrid faster than p690 for B1-std?
 Plots generated automatically

Relative Efficiency (B1-std)
 By experiment: total runtime, 16-processor base case (Cheetah shown in red)
 By event for one experiment: Coll_tr (blue) is significant
 By experiment for one event: shows how Coll_tr behaves across all experiments
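Relative efficiency against a 16-processor base case can be computed as E(p) = (T(16) * 16) / (T(p) * p) for a fixed-size problem; the runtimes below are illustrative, not the GYRO measurements:

```python
def relative_efficiency(times, procs, base=16):
    """Efficiency of each run relative to the base processor count,
    assuming a fixed-size (strong-scaling) problem."""
    t_base = times[procs.index(base)]
    return [t_base * base / (t * p) for t, p in zip(times, procs)]

procs = [16, 32, 64, 128]
times = [100.0, 52.0, 28.0, 16.0]   # hypothetical runtimes in seconds
eff = relative_efficiency(times, procs)
print([round(e, 2) for e in eff])   # [1.0, 0.96, 0.89, 0.78]
```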

Current and Future Work
 Vampir/VNG
  Generation of OTF traces natively in TAU
 ParaProf
  Developing timestamped profile snapshot performance displays
 PerfDMF
  Adding new database backends and distributed support
  Building support for user-created tables
 PerfExplorer
  Extending comparative and clustering analysis
  Adding new data mining capabilities
  Building in scripting support
 Performance regression testing tool (PerfRegress)
 Integrate into the Eclipse Parallel Tools Platform (PTP)

Concluding Discussion
 Performance tools must be used effectively
 More intelligent performance systems for productive use
  Evolve to application-specific performance technology
  Deal with scale by “full range” performance exploration
  Autonomic and integrated tools
  Knowledge-based and knowledge-driven process
 Performance observation methods do not necessarily need to change in a fundamental sense
  They need to be more automatically controlled and efficiently used
 Develop next-generation tools and deliver them to the community
  Open source with support by ParaTools, Inc.

Support Acknowledgements
 Department of Energy (DOE)
  Office of Science contracts
  University of Utah ASC Level 1 sub-contract
  LLNL ASC/NNSA Level 3 contract
  LLNL ParaTools/GWT contract
 PET HPCMO, DoD
 T.U. Dresden, GWT
  Dr. Wolfgang Nagel and Holger Brunst
 Research Centre Juelich
  Dr. Bernd Mohr
 Los Alamos National Laboratory contracts