August 12, 2004 UCRL-PRES-206265
Outline
- Motivation
- About the Applications
- Statistics Gathered
- Inferences
- Future Work
Motivation
- Info for app developers
  – Information on the expense of basic MPI functions (recode?)
  – Set expectations
- Many tradeoffs available in MPI design
  – Memory allocation decisions
  – Protocol cutoff point decisions
  – Where is additional code complexity worth it?
- Information on MPI usage is scarce
- New tools (e.g. mpiP) make profiling reasonable (see the sketch after this list)
  – Easy to incorporate (no source code changes)
  – Easy to interpret
  – Unobtrusive observation (little performance impact)
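mpiP gathers its numbers through the standard MPI profiling interface (PMPI), which is why no source changes are needed: the profiling library supplies its own MPI_* entry points and forwards each call to the name-shifted PMPI_* routine. The fragment below is a minimal sketch of that mechanism, not mpiP's actual code; the counter and the report printed in MPI_Finalize are placeholders for the statistics a real tool would collect.

```c
/* Illustrative PMPI interposition (the mechanism tools such as mpiP rely on).
 * Linking this object ahead of the MPI library, or preloading it as a shared
 * library, overrides MPI_Send; the real send is reached via PMPI_Send. */
#include <mpi.h>
#include <stdio.h>

static long send_calls = 0;   /* simple per-rank statistic */

int MPI_Send(const void *buf, int count, MPI_Datatype type,
             int dest, int tag, MPI_Comm comm)
{
    send_calls++;                                          /* record the call */
    return PMPI_Send(buf, count, type, dest, tag, comm);   /* forward to MPI */
}

int MPI_Finalize(void)
{
    int rank;
    PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("rank %d: MPI_Send called %ld times\n", rank, send_calls);
    return PMPI_Finalize();
}
```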
About the applications
- Amtran: discrete ordinates neutron transport
- Ares: 3-D simulation of instabilities in massive-star supernova envelopes
- Ardra: neutron transport/radiation diffusion code exploring new numerical algorithms and methods for the solution of the Boltzmann transport equation (e.g. nuclear imaging)
- Geodyne: Eulerian adaptive mesh refinement (e.g. comet-Earth impacts)
- IRS: solves the radiation transport equation by the flux-limited diffusion approximation using an implicit matrix solution
- Mdcask: molecular dynamics code for the study of radiation damage in metals
- Linpack/HPL: solves a random dense linear system
- Miranda: hydrodynamics code simulating instability growth
- Smg: a parallel semicoarsening multigrid solver for the linear systems arising from finite difference, finite volume, or finite element discretizations
- Spheral: provides a steerable parallel environment for performing coupled hydrodynamical & gravitational numerical simulations (http://sourceforge.net/projects/spheral)
- Sweep3d: solves a 1-group neutron transport problem
- Umt2k: photon transport code for unstructured meshes
Percent of Time in MPI
- Overall for the sampled applications: 60% MPI, 40% remaining application time
Top MPI Point-to-Point Calls
Top MPI Collective Calls
Comparing Collective and Point-to-Point
Average Number of Calls for Most Common MPI Functions ("Large" Runs)
Communication Patterns (most dominant message size)
Communication Patterns (continued)
Frequency of Callsites by MPI Function
Scalability
Observations Summary
- General
  – People seem to scale code to ~60% MPI/communication
  – Isend/Irecv/Wait many times more prevalent than Sendrecv and blocking send/recv (see the sketch after this list)
  – Time spent in collectives predominantly divided among barrier, allreduce, broadcast, gather, and alltoall
  – Most common message size is typically between 1 KB and 1 MB
- Surprises
  – Waitany is the most prevalent call
  – Almost all point-to-point messages are the same size within a run
  – Often, message size decreases with larger runs
  – Some codes are driven by alltoall performance
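The prevalence of Isend/Irecv/Wait(any) points at a common structure: post all nonblocking receives and sends up front, then drain completions as they arrive. The fragment below is a hypothetical ring exchange written to that pattern, not code from any of the profiled applications; the neighbor layout and the 8 KB message size are assumptions chosen to fall inside the observed 1 KB-1 MB range.

```c
/* Hypothetical nonblocking neighbor exchange on a ring (illustration only):
 * post MPI_Irecv/MPI_Isend to both neighbors, then process whichever
 * request completes first with MPI_Waitany. */
#include <mpi.h>

#define N 1024   /* assumed payload: 1024 doubles = 8 KB */

int main(int argc, char **argv)
{
    int rank, size;
    double send_lo[N], send_hi[N], recv_lo[N], recv_hi[N];
    MPI_Request req[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int lo = (rank - 1 + size) % size;   /* left neighbor on the ring */
    int hi = (rank + 1) % size;          /* right neighbor on the ring */

    for (int i = 0; i < N; i++)
        send_lo[i] = send_hi[i] = (double)rank;

    /* Post receives first, then sends; all four calls return immediately. */
    MPI_Irecv(recv_lo, N, MPI_DOUBLE, lo, 0, MPI_COMM_WORLD, &req[0]);
    MPI_Irecv(recv_hi, N, MPI_DOUBLE, hi, 1, MPI_COMM_WORLD, &req[1]);
    MPI_Isend(send_lo, N, MPI_DOUBLE, lo, 1, MPI_COMM_WORLD, &req[2]);
    MPI_Isend(send_hi, N, MPI_DOUBLE, hi, 0, MPI_COMM_WORLD, &req[3]);

    /* Drain completions one at a time; loops like this are where the
     * Waitany call counts accumulate in the profiles. */
    for (int done = 0; done < 4; done++) {
        int which;
        MPI_Waitany(4, req, &which, MPI_STATUS_IGNORE);
        /* work that depends on request 'which' could start here (overlap) */
    }

    MPI_Finalize();
    return 0;
}
```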
Future Work & Concluding Remarks
- Further understanding of apps needed
  – Results for other test configurations
  – When can apps make better use of collectives?
  – MPI-IO usage info needed
  – Classified applications
- Acknowledgements
  – mpiP is due to Jeffrey Vetter and Chris Chambreau (http://www.llnl.gov/CASC/mpip)
  – This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory, under contract No. W-7405-Eng-48.