Kernel and Application Code Performance for a Spectral Atmospheric Global Circulation Model on the Cray T3E and IBM SP
Patrick H. Worley, Computer Science and Mathematics Division, Oak Ridge National Laboratory

Presentation transcript:

Kernel and Application Code Performance for a Spectral Atmospheric Global Circulation Model on the Cray T3E and IBM SP
Patrick H. Worley, Computer Science and Mathematics Division, Oak Ridge National Laboratory
NERSC Users' Group Meeting, Oak Ridge, TN, June 6, 2000

Alternative Title
… a random collection of benchmarks, looking at communication, serial, and parallel performance on the IBM SP and other MPPs at NERSC and ORNL.

Acknowledgements
- Research sponsored by the Atmospheric and Climate Research Division and the Office of Mathematical, Information, and Computational Sciences, Office of Science, U.S. Department of Energy under Contract No. DE-AC05-00OR22725 with UT-Battelle, LLC.
- These slides have been authored by a contractor of the U.S. Government under Contract No. DE-AC05-00OR22725. Accordingly, the U.S. Government retains a nonexclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S. Government purposes.
- Oak Ridge National Laboratory is managed by UT-Battelle, LLC for the United States Department of Energy under Contract No. DE-AC05-00OR22725.

Platforms at NERSC
- IBM SP
  - 2-way Winterhawk I SMP "wide" nodes with 1 GB memory
  - 200 MHz Power3 processors with 4 MB L2 cache
  - 1.6 GB/sec node memory bandwidth (single bus)
  - Omega multistage interconnect
- SGI/Cray Research T3E-900
  - single-processor nodes with 256 MB memory
  - 450 MHz Alpha (EV5) with 96 KB L2 cache
  - 1.2 GB/sec node memory bandwidth
  - 3D torus interconnect

Platforms at ORNL
- IBM SP
  - 4-way Winterhawk II SMP "thin" nodes with 2 GB memory
  - 375 MHz Power3-II processors with 8 MB L2 cache
  - 1.6 GB/sec node memory bandwidth (single bus)
  - Omega multistage interconnect
- Compaq AlphaServer SC
  - 4-way ES40 SMP nodes with 2 GB memory
  - 667 MHz Alpha 21264a (EV67) processors with 8 MB L2 cache
  - 5.2 GB/sec node memory bandwidth (dual bus)
  - Quadrics "fat tree" interconnect

Other Platforms
- SGI/Cray Research Origin 2000 at LANL
  - 128-way SMP node with 32 GB memory
  - 250 MHz MIPS R10000 processors with 4 MB L2 cache
  - NUMA memory subsystem
- IBM SP 16-way Nighthawk II SMP node
  - 375 MHz Power3-II processors with 8 MB L2 cache
  - switch-based memory subsystem
  - results obtained using prerelease hardware and software

Topics
- Interprocessor communication performance
- Serial performance
  - PSTSWM spectral dynamics kernel
  - CRM column physics kernel
- Parallel performance
  - CCM/MP-2D atmospheric global circulation model

Communication Tests
- Interprocessor communication performance
  - within an SMP node
  - between SMP nodes
  - with and without contention
  - with and without cache invalidation
  - for both bidirectional and unidirectional communication protocols
- Brief description of some results. For more details, see

Communication Tests: MPI_SENDRECV bidirectional and MPI_SEND/MPI_RECV unidirectional bandwidth between nodes on the IBM SP at NERSC
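
Plots like this come from simple message-exchange timings. The sketch below shows the bidirectional (MPI_Sendrecv) variant between two ranks; it is illustrative only, and the message size, repetition count, and placement of the two ranks on separate nodes are assumptions rather than details taken from the benchmark used here. The unidirectional variant replaces the MPI_Sendrecv with an MPI_Send on one rank paired with an MPI_Recv on the other.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Minimal bidirectional bandwidth probe: ranks 0 and 1 call MPI_Sendrecv
 * simultaneously, so the reported rate counts bytes moved in both
 * directions.  Message size and repetition count are illustrative. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int nbytes = 1 << 20;        /* 1 MB messages (assumed size) */
    const int reps   = 100;            /* assumed repetition count     */
    char *sendbuf = calloc(nbytes, 1);
    char *recvbuf = calloc(nbytes, 1);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    if (rank < 2) {
        int partner = 1 - rank;
        for (int i = 0; i < reps; i++)
            MPI_Sendrecv(sendbuf, nbytes, MPI_CHAR, partner, 0,
                         recvbuf, nbytes, MPI_CHAR, partner, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("bidirectional bandwidth: %.1f MB/s\n",
               2.0 * (double)nbytes * reps / (t1 - t0) / 1.0e6);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```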

Communication Tests: MPI_SENDRECV bidirectional and MPI_SEND/MPI_RECV unidirectional bandwidth between nodes on the IBM SP at NERSC

Communication Tests: MPI_SENDRECV bidirectional and MPI_SEND/MPI_RECV unidirectional bandwidth between nodes on the IBM SP at ORNL

Communication Tests: bidirectional bandwidth comparison across platforms, swap between processors 0-1

Communication Tests: bidirectional bandwidth comparison across platforms, swap between processors 0-4

Communication Tests: bidirectional bandwidth comparison across platforms, simultaneous swaps between processors 0-4, 1-5, 2-6, 3-7

Communication Tests: bidirectional bandwidth comparison across platforms, 8-processor send/recv ring

Communication Tests
- Summary
  - Decent intranode performance is possible.
  - Message-passing functionality is good.
  - Switch/NIC performance is the limiting factor in internode communication.
  - Contention for switch/NIC bandwidth within SMP nodes can be significant.

Serial Performance
- Issues
  - compiler optimization
  - domain decomposition
  - memory contention in SMP nodes
- Kernel codes
  - PSTSWM: spectral dynamics
  - CRM: column physics

Spectral Dynamics
- PSTSWM solves the nonlinear shallow water equations on a sphere using the spectral transform method.
  - 99% of floating-point operations are fmul, fadd, or fmadd
  - memory is accessed linearly, but with little reuse
- (longitude, vertical, latitude) array index ordering
  - computation is independent between horizontal layers (fixed vertical index)
  - as the vertical dimension size increases, demands on memory increase
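
The (longitude, vertical, latitude) ordering and the layer independence described above can be illustrated with a small sketch; it is not PSTSWM source, and the field names and sizes are assumptions. In Fortran's column-major layout the longitude index varies fastest, which corresponds to a C array indexed [latitude][vertical][longitude], so the inner loop streams through memory with unit stride while each horizontal layer can be processed independently:

```c
#include <stddef.h>

/* Illustrative sketch of a (longitude, vertical, latitude) Fortran-ordered
 * field, stored here as u[lat][vert][lon] in C.  Each horizontal layer
 * (fixed vertical index k) is independent, so the work grows with NVERT
 * but there is little data reuse between layers. */
#define NLON  128
#define NVERT 18
#define NLAT  64

static double u[NLAT][NVERT][NLON];
static double tend[NLAT][NVERT][NLON];

void apply_pointwise_tendency(double coef)
{
    for (int j = 0; j < NLAT; j++)          /* latitude (slowest index)   */
        for (int k = 0; k < NVERT; k++)     /* vertical: independent layers */
            for (int i = 0; i < NLON; i++)  /* longitude: unit stride      */
                tend[j][k][i] = coef * u[j][k][i] * u[j][k][i];
}
```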

Spectral Dynamics: PSTSWM on the IBM SP at NERSC
Horizontal resolutions: T5 = 8x16, T10 = 16x32, T21 = 32x64, T42 = 64x128, T85 = 128x256, T170 = 256x512

Spectral Dynamics: PSTSWM on the IBM SP at NERSC

Spectral Dynamics: PSTSWM platform comparisons, 1 processor per SMP node
Horizontal resolutions: T5 = 8x16, T10 = 16x32, T21 = 32x64, T42 = 64x128, T85 = 128x256, T170 = 256x512

Spectral Dynamics: PSTSWM platform comparisons, all processors active in SMP node (except Origin-250)
Horizontal resolutions: T5 = 8x16, T10 = 16x32, T21 = 32x64, T42 = 64x128, T85 = 128x256, T170 = 256x512

Spectral Dynamics: PSTSWM platform comparisons, 1 processor per SMP node

Spectral Dynamics: PSTSWM platform comparisons, all processors active in SMP node (except Origin-250)

Spectral Dynamics
- Summary
  - Math libraries and relaxed mathematical semantics improve performance significantly on the IBM SP.
  - Node memory bandwidth is important (for this kernel code), especially on bus-based SMP nodes.
  - IBM SP serial performance is a significant improvement over the (previous-generation) Origin and T3E systems.

Column Physics
- CRM: Column Radiation Model extracted from the Community Climate Model
  - 6% of floating-point operations are sqrt, 3% are fdiv
  - exp, log, and pow are among the top six most frequently called functions
- (longitude, vertical, latitude) array index ordering
  - computations are independent between vertical columns (fixed longitude, latitude)
  - as the longitude dimension size increases, demands on memory increase
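
A corresponding sketch for column physics (again illustrative, not CRM source; the names and the toy formula are assumptions): each (longitude, latitude) column is independent, and the inner vertical loop is dominated by transcendental functions such as sqrt and exp rather than by streaming memory access. With the (longitude, vertical, latitude) ordering, successive vertical levels of one column are NLON elements apart, which is one way to see why larger longitude dimensions increase the pressure on the memory system:

```c
#include <math.h>

#define NLON  128
#define NVERT 18
#define NLAT  64

static double temp[NLAT][NVERT][NLON];  /* input field                    */
static double flux[NLAT][NVERT][NLON];  /* computed column quantity       */

void column_physics_sketch(void)
{
    for (int j = 0; j < NLAT; j++)
        for (int i = 0; i < NLON; i++)       /* each column is independent  */
            for (int k = 0; k < NVERT; k++)  /* vertical stride is NLON     */
                flux[j][k][i] = sqrt(temp[j][k][i] + 1.0) * exp(-0.1 * k);
}
```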

Column Physics: CRM on the NERSC SP, longitude-vertical slice, with varying number of longitudes

Column Physics: CRM longitude-vertical slice, with varying number of longitudes; 1 processor per SMP node

Column Physics
- Summary
  - Performance is less sensitive to node memory bandwidth for this kernel code.
  - Performance on the IBM SP is very sensitive to compiler optimization and domain decomposition.

Parallel Performance
- Issues
  - scalability
  - overhead growth and analysis
- Codes
  - CCM/MP-2D

CCM/MP-2D
- Message-passing parallel implementation of the National Center for Atmospheric Research (NCAR) Community Climate Model
- Computational domains
  - Physical domain: longitude x latitude x vertical levels
  - Fourier domain: wavenumber x latitude x vertical levels
  - Spectral domain: (wavenumber x polynomial degree) x vertical levels

CCM/MP-2D
- Problem Sizes
  - T42L18
    - 128 x 64 x 18 physical domain grid
    - 42 x 64 x 18 Fourier domain grid
    - 946 x 18 spectral domain grid
    - ~59.5 GFlops per simulated day
  - T170L18
    - 512 x 256 x 18 physical domain grid
    - 170 x 256 x 18 Fourier domain grid
    - 14706 x 18 spectral domain grid
    - ~3231 GFlops per simulated day
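
An aside on where the spectral-domain sizes come from (the formula is the standard count for a triangular spectral truncation; it is not quoted from the slides): a $T_M$ truncation retains the wavenumber/degree pairs $(m, n)$ with $0 \le m \le n \le M$, so

$$N_{\mathrm{spec}} = \sum_{m=0}^{M} (M - m + 1) = \frac{(M+1)(M+2)}{2},$$

which gives 946 spectral coordinates per vertical level for T42 and 14706 for T170, consistent with the grids listed above.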

CCM/MP-2D
- Computations
  - Column physics
    - independent between vertical columns
  - Spectral dynamics
    - Fourier transform in the longitude direction
    - Legendre transform in the latitude direction
    - tendencies for timestepping calculated in the spectral domain, independent between spectral coordinates
  - Semi-Lagrangian advection
    - uses local approximations to interpolate wind fields and particle distributions away from grid points
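
For reference, the transform sequence named above can be written compactly (standard spectral transform formulation; the notation is ours, not taken from the slides). An FFT along each latitude circle produces Fourier coefficients $\xi^m(\mu_j, k)$ for zonal wavenumber $m$, Gaussian latitude $\mu_j$, and vertical level $k$; a Legendre transform, evaluated by Gaussian quadrature, then sums over latitudes:

$$\xi_n^m(k) = \sum_{j=1}^{J} \xi^m(\mu_j, k)\, P_n^m(\mu_j)\, w_j,$$

where $P_n^m$ are the associated Legendre functions and $w_j$ the Gaussian quadrature weights. The sum over $j$ is the operation that the latitude decomposition on the next slide parallelizes with a distributed global sum.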

CCM/MP-2D
- Decomposition across latitude
  - parallelizes the Legendre transform (currently using a distributed global sum algorithm)
  - requires north/south halo updates for semi-Lagrangian advection
- Decomposition across longitude
  - parallelizes the Fourier transform (either using a distributed FFT algorithm, or transposing fields and using a serial FFT)
  - requires east/west halo updates for semi-Lagrangian advection
  - requires night/day vertical column swaps to load balance the physics
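
A minimal sketch of the north/south halo update mentioned above, written with MPI in C. It is illustrative only, not CCM/MP-2D source: the field layout, sizes, and the periodic ring of latitude neighbors are assumptions made to keep the sketch short (a real latitude decomposition would special-case the poles rather than wrap around):

```c
#include <mpi.h>

/* Illustrative north/south halo update for a latitude-decomposed field.
 * Each rank owns NLAT_LOC latitude rows of a (longitude, vertical) slab
 * plus one ghost row on each side, stored as field[NLAT_LOC + 2][NVERT][NLON]
 * with rows 0 and NLAT_LOC + 1 acting as halos. */
#define NLON     128
#define NVERT    18
#define NLAT_LOC 8

void halo_update(double field[NLAT_LOC + 2][NVERT][NLON], MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    int north = (rank + 1) % size;          /* neighbor ranks (ring used   */
    int south = (rank - 1 + size) % size;   /* only to keep the sketch short) */
    int count = NVERT * NLON;               /* one latitude row of doubles */

    /* Send my last owned row north, receive my south halo from the south.
     * MPI_Sendrecv pairs the two transfers and avoids deadlock. */
    MPI_Sendrecv(field[NLAT_LOC], count, MPI_DOUBLE, north, 0,
                 field[0],        count, MPI_DOUBLE, south, 0,
                 comm, MPI_STATUS_IGNORE);

    /* Send my first owned row south, receive my north halo from the north. */
    MPI_Sendrecv(field[1],            count, MPI_DOUBLE, south, 1,
                 field[NLAT_LOC + 1], count, MPI_DOUBLE, north, 1,
                 comm, MPI_STATUS_IGNORE);
}
```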

CCM/MP-2D: sensitivity of message volume to domain decomposition

Scalability: CCM/MP-2D T42L18 Benchmark

Scalability: CCM/MP-2D T170L18 Benchmark

Overhead: CCM/MP-2D T42L18 Benchmark, overhead time diagnosis

Overhead: CCM/MP-2D T170L18 Benchmark, overhead time diagnosis

CCM/MP-2D
- Summary
  - Parallel algorithm optimization is (still) important for achieving peak performance.
  - Bottlenecks
    - message-passing bandwidth and latency
    - SMP node memory bandwidth on the SP