Slide 1
Performance Comparison of Winterhawk I and Winterhawk II Systems
Patrick H. Worley
Computer Science and Mathematics Division, Oak Ridge National Laboratory
ScicomP, San Diego Supercomputer Center, La Jolla, CA
August 14, 2000
Slide 2
Acknowledgements
- Research sponsored by the Atmospheric and Climate Research Division and the Office of Mathematical, Information, and Computational Sciences, Office of Science, U.S. Department of Energy under Contract No. DE-AC05-00OR22725 with UT-Battelle, LLC.
- These slides have been authored by a contractor of the U.S. Government under Contract No. DE-AC05-00OR22725. Accordingly, the U.S. Government retains a nonexclusive, royalty-free license to publish or reproduce the published form of this contribution, or to allow others to do so, for U.S. Government purposes.
- Oak Ridge National Laboratory is managed by UT-Battelle, LLC for the United States Department of Energy under Contract No. DE-AC05-00OR22725.
Slide 3
Overview
- Goal: identify the performance (and performance quirks) that users might expect when running applications on both Winterhawk I and Winterhawk II systems.
- Outline:
  - Serial performance
    - PSTSWM spectral dynamics kernel
    - CRM column physics kernel
  - Interprocessor communication performance
  - Parallel performance
    - CCM/MP-2D atmospheric global circulation model
Slide 4
IBM SP Systems
- IBM SP at NERSC
  - 2-way Winterhawk I SMP "wide" nodes with 1 GB memory
  - 200 MHz Power3 processors with 4 MB L2 cache
  - 1.6 GB/sec node memory bandwidth (single bus)
  - Omega multistage interconnect
- IBM SP at ORNL
  - 4-way Winterhawk II SMP "thin" nodes with 2 GB memory
  - 375 MHz Power3-II processors with 8 MB L2 cache
  - 1.6 GB/sec node memory bandwidth (single bus)
  - Omega multistage interconnect
Slide 5
IBM SP Systems
- IBM SP formerly at ORNL
  - 8-way Nighthawk I SMP nodes
  - 222 MHz Power3 processors with 4 MB L2 cache
  - switch-based memory subsystem
- IBM SP (at an IBM internal site)
  - 16-way Nighthawk II SMP nodes
  - 375 MHz Power3-II processors with 8 MB L2 cache
  - switch-based memory subsystem
  - Results obtained using prerelease hardware and software in March 2000
Slide 6
Other Platforms
- SGI/Cray Research Origin 2000 at LANL
  - 128-way SMP node with 32 GB memory
  - 250 MHz MIPS R10000 processors with 4 MB L2 cache
  - NUMA memory subsystem
- SGI/Cray Research T3E-900
  - Single-processor nodes with 256 MB memory
  - 450 MHz Alpha 21164 (EV5) processors with 96 KB L2 cache
  - 1.2 GB/sec node memory bandwidth
  - 3D torus interconnect
Slide 7
Serial Performance
- Issues
  - Compiler optimization
  - Domain decomposition
  - Memory contention in SMP nodes
- Kernel codes
  - PSTSWM - spectral dynamics
  - CRM - column physics
Slide 8
Spectral Dynamics
- PSTSWM solves the nonlinear shallow water equations on a sphere using the spectral transform method
  - 99% of floating point operations are fmul, fadd, or fmadd
  - memory is accessed linearly, but with little reuse
  - (longitude, vertical, latitude) array index ordering
    - computation is independent between horizontal layers (fixed vertical index)
    - as the vertical dimension size increases, demands on memory increase
Slide 9
Spectral Dynamics
[Figure: PSTSWM on the IBM SP at NERSC]
Horizontal resolutions - T5: 8x16, T10: 16x32, T21: 32x64, T42: 64x128, T85: 128x256, T170: 256x512
Slide 10
Spectral Dynamics
[Figure: PSTSWM on the IBM SP at ORNL]
Horizontal resolutions - T5: 8x16, T10: 16x32, T21: 32x64, T42: 64x128, T85: 128x256, T170: 256x512
Slide 11
Spectral Dynamics
[Figure: PSTSWM platform comparisons, 1 processor per SMP node]
Horizontal resolutions - T5: 8x16, T10: 16x32, T21: 32x64, T42: 64x128, T85: 128x256, T170: 256x512
Slide 12
Spectral Dynamics
[Figure: PSTSWM platform comparisons, all processors active in each SMP node (except Origin-250)]
Horizontal resolutions - T5: 8x16, T10: 16x32, T21: 32x64, T42: 64x128, T85: 128x256, T170: 256x512
Slide 13
Spectral Dynamics
[Figure: PSTSWM platform comparisons, 1 processor per SMP node]
Slide 14
Spectral Dynamics
[Figure: PSTSWM platform comparisons, all processors active in each SMP node (except Origin-250)]
Slide 15
Spectral Dynamics
- Summary for PSTSWM
  - Math libraries and relaxed mathematical semantics improve performance significantly on the IBM SP.
  - Single-processor performance on the Winterhawk II can be more than twice that of the Winterhawk I. However, this advantage disappears for large problem sizes that require frequent access to main memory, especially when multiple processors are competing for memory access.
  - Single-processor performance on a Winterhawk node is better than on the analogous Nighthawk node. This advantage can disappear for the Winterhawk II node when multiple processors are competing for memory access.
Slide 16
Column Physics
- CRM: Column Radiation Model, extracted from the NCAR Community Climate Model
  - 6% of floating point operations are sqrt; 3% are fdiv
  - exp, log, and pow are among the six most frequently called functions
  - (longitude, vertical, latitude) array index ordering
    - computations are independent between vertical columns (fixed longitude and latitude)
    - as the longitude dimension size increases, demands on memory increase
Slide 17
Column Physics
[Figure: CRM on the NERSC SP, longitude-vertical slice with varying numbers of longitudes]
Slide 18
Column Physics
[Figure: CRM on the ORNL SP, longitude-vertical slice with varying numbers of longitudes]
Slide 19
Column Physics
[Figure: CRM longitude-vertical slice with varying numbers of longitudes, 1 processor per SMP node except where indicated]
Slide 20
Column Physics
- Summary for CRM
  - Performance on the IBM SP is very sensitive to compiler optimization and domain decomposition.
  - Performance is less sensitive to node memory bandwidth for this kernel, and Winterhawk II single-processor performance is approximately twice that of Winterhawk I for all problem configurations and numbers of processors.
Slide 21
Communication Tests
- Interprocessor communication performance
  - within an SMP node
  - between SMP nodes
  - with and without contention
- Brief description of some results; for more details, see http://www.epm.ornl.gov/~worley/studies/pt2pt.html
Slide 22
Communication Tests
[Figure: MPI_SENDRECV bidirectional and MPI_SEND/MPI_RECV unidirectional bandwidth between nodes on the IBM SP systems at NERSC and ORNL]
Slide 23
Communication Tests
[Figure: Bidirectional bandwidth for an exchange between processors 0 and 4]
Latency estimates (usec) - SP-200 (Winterhawk I): 28-48; SP-375 (Winterhawk II): 21-60
Slide 24
Communication Tests
[Figure: Bidirectional bandwidth for an exchange between processors 0 and 1]
Latency estimates (usec) - SP-200 (Winterhawk I): 13-27; SP-375 (Winterhawk II): 8-29
Slide 25
Communication Tests
[Figure: Bidirectional bandwidth per processor for simultaneous exchanges between processors 0-4, 1-5, 2-6, and 3-7]
Slide 26
Communication Tests
[Figure: Bidirectional bandwidth per processor for an 8-processor send/recv ring 0-1-2-3-4-5-6-7-0]
Slide 27
Communication Tests
- Summary
  - Bidirectional bandwidth is worth exploiting on both systems.
  - In isolation, bandwidth between processors in separate nodes is higher for Winterhawk II nodes than for Winterhawk I nodes. When all processors in a node are communicating, however, the per-processor bandwidth is twice as large for Winterhawk I nodes.
  - Winterhawk II intranode bandwidth is sensitive to cache "assumptions".
Slide 28
Parallel Performance
- Issues
  - Scalability
  - Overhead growth and analysis
- Codes
  - CCM/MP-2D
Slide 29
CCM/MP-2D
- Message-passing parallel implementation of the National Center for Atmospheric Research (NCAR) Community Climate Model
- Computational domains
  - Physical domain: longitude x latitude x vertical levels
  - Fourier domain: wavenumber x latitude x vertical levels
  - Spectral domain: (wavenumber x polynomial degree) x vertical levels
Slide 30
CCM/MP-2D
- Problem sizes
  - T42L18
    - 128 x 64 x 18 physical domain grid
    - 42 x 64 x 18 Fourier domain grid
    - 946 x 18 spectral domain grid
    - ~59.5 GFlops per simulated day
  - T170L18
    - 512 x 256 x 18 physical domain grid
    - 170 x 256 x 18 Fourier domain grid
    - 14706 x 18 spectral domain grid
    - ~3231 GFlops per simulated day
Slide 31
CCM/MP-2D
- Computations
  - Column physics
    - independent between vertical columns
  - Spectral dynamics
    - Fourier transform in the longitude direction
    - Legendre transform in the latitude direction
    - tendencies for timestepping are calculated in the spectral domain, independently between spectral coordinates
  - Semi-Lagrangian advection
    - uses local approximations to interpolate wind fields and particle distributions away from grid points
Slide 32
CCM/MP-2D
- Decomposition across latitude
  - parallelizes the Legendre transform: currently uses a distributed global sum algorithm
  - requires north/south halo updates for semi-Lagrangian advection
- Decomposition across longitude
  - parallelizes the Fourier transform: either use a distributed FFT algorithm, or transpose the fields and use a serial FFT
  - requires east/west halo updates for semi-Lagrangian advection
  - requires night/day vertical column swaps to load balance the physics
Slide 33
CCM/MP-2D
[Figure: Sensitivity of message volume to domain decomposition]
Slide 34
Scalability
[Figure: CCM/MP-2D T42L18 benchmark scalability]
Slide 35
Computation Cost
[Figure: CCM/MP-2D T42L18 benchmark computation time]
Slide 36
Overhead Cost
[Figure: CCM/MP-2D T42L18 benchmark overhead time]
Slide 37
Overhead
[Figure: CCM/MP-2D T42L18 benchmark overhead time diagnosis]
Slide 38
Scalability
[Figure: CCM/MP-2D T170L18 benchmark scalability]
Slide 39
Computation Cost
[Figure: CCM/MP-2D T170L18 benchmark serial time]
Slide 40
Overhead Time
[Figure: CCM/MP-2D T170L18 benchmark overhead time]
Slide 41
Overhead
[Figure: CCM/MP-2D T170L18 benchmark overhead time diagnosis]
Slide 42
CCM/MP-2D
- Summary for CCM/MP-2D
  - The CCM application is communication intensive at large processor counts, even for large problem sizes.
  - The Winterhawk II system is 60-100% faster than the Winterhawk I system for this application, even when communication bound.
  - The computation rate comparison between Winterhawk I and Winterhawk II runs agrees with the kernel experiments.
  - Point-to-point communication benchmarks do not reflect the advantage of Winterhawk II over Winterhawk I, possibly due to
    - the contribution of load imbalance to communication costs
    - increased variability in communication costs with increasing numbers of nodes