Climate-Weather Modeling Studies Using a Prototype Global Cloud-System Resolving Model
Zhi Liang (GFDL/DRC)

A) Project Overview
Project: Climate-Weather Modeling Studies Using a Prototype Global Cloud-System Resolving Model
Science goals: Study the effect of clouds in climate-weather models
Participants / team: Chris Kerr (GFDL/UCAR, PI), V. Balaji (GFDL/Princeton University, PI), Zhi Liang (GFDL/DRC)
Sponsor: GFDL/NOAA

B) Science Lesson
What does the application do, and how? It studies the effect of clouds in climate-weather models:
– The role of clouds is critical in global climate models.
– First-generation experiments: atmospheric model.
– Second-generation experiments: coupled atmosphere, ocean, … models.
Experiments will focus on the 2008 "Year of Tropical Convection" research program:
– 12 km HIRAM hydrostatic model
– 3.5 km HIRAM non-hydrostatic model

C) Parallel Programming Model
Hybrid model of MPI and OpenMP.
Languages: Fortran 90 and C.
Runtime libraries: NetCDF, MPICH.
Platforms: Cray XT6, Blue Gene/P, Blue Gene/Q, SGI Altix ICE.
Status: Runs OK.
Future plan: Performance analysis on Blue Gene/Q.
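
As an illustration of the hybrid MPI + OpenMP approach, here is a minimal Fortran 90 sketch; it is not code from the model itself, and the array size and loop body are placeholders:

    ! Minimal hybrid MPI + OpenMP sketch (illustrative only, not GFDL code).
    program hybrid_sketch
      use mpi
      use omp_lib
      implicit none
      integer :: ierr, rank, nranks, provided, i
      integer, parameter :: nlocal = 1000        ! assumed size of the per-rank domain
      real(kind=8) :: field(nlocal)

      ! Request thread support suitable for OpenMP regions outside MPI calls.
      call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nranks, ierr)

      ! Thread the local (per-rank) work with OpenMP.
      !$omp parallel do private(i)
      do i = 1, nlocal
        field(i) = real(rank, 8) + 0.5d0 * i     ! placeholder computation
      end do
      !$omp end parallel do

      if (rank == 0) print *, 'ranks:', nranks, ' threads:', omp_get_max_threads()
      call MPI_Finalize(ierr)
    end program hybrid_sketch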

E) I/O Patterns and Strategy
Input and output I/O patterns: Distributed I/O, using NetCDF-4.
Approximate sizes of inputs and outputs: Output is about 600 GB per month of model time.
Checkpoint / restart capabilities: Intermediate restarts that reproduce the run.
Future plans for I/O: Improve I/O scaling.
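
One common way to realize distributed NetCDF I/O is file-per-rank output that is recombined in post-processing (e.g. with a tool such as mppnccombine). The sketch below is illustrative only, not the model's actual I/O layer; the file name, dimension, and field are assumptions, and error checking is omitted for brevity:

    ! Illustrative file-per-rank NetCDF-4 output (not the actual model I/O layer).
    subroutine write_restart(rank, nlocal, field)
      use netcdf
      implicit none
      integer, intent(in) :: rank, nlocal
      real(kind=8), intent(in) :: field(nlocal)
      integer :: ncid, dimid, varid, ierr
      character(len=64) :: fname

      ! One file per rank, e.g. restart.nc.0003, combined later in post-processing.
      write(fname, '(a,i4.4)') 'restart.nc.', rank

      ierr = nf90_create(trim(fname), ior(NF90_CLOBBER, NF90_NETCDF4), ncid)
      ierr = nf90_def_dim(ncid, 'ncols', nlocal, dimid)
      ierr = nf90_def_var(ncid, 'field', NF90_DOUBLE, (/ dimid /), varid)
      ierr = nf90_enddef(ncid)
      ierr = nf90_put_var(ncid, varid, field)
      ierr = nf90_close(ncid)
    end subroutine write_restart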

G) Performance
What tools do you use now to explore performance? Hardware performance counters, TAU, CrayPat.
What do you believe is your current bottleneck to better performance/scaling? Under investigation.
Current status and future plans for improving performance: Improve MPI, OpenMP, and I/O scaling.
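
In addition to the external tools, a lightweight manual alternative for locating bottlenecks is to bracket candidate regions with MPI_Wtime and reduce the timings over ranks. This is a hedged sketch; the work loop is a placeholder standing in for a model phase:

    ! Simple region timer using MPI_Wtime (illustrative only).
    subroutine timed_region(comm)
      use mpi
      implicit none
      integer, intent(in) :: comm
      integer :: ierr, rank, i
      real(kind=8) :: t0, dt, tmax, s

      call MPI_Comm_rank(comm, rank, ierr)
      t0 = MPI_Wtime()

      s = 0.0d0
      do i = 1, 10000000                 ! placeholder work
        s = s + sqrt(real(i, 8))
      end do

      dt = MPI_Wtime() - t0
      ! The slowest rank dominates; a large spread usually points to load imbalance.
      call MPI_Reduce(dt, tmax, 1, MPI_DOUBLE_PRECISION, MPI_MAX, 0, comm, ierr)
      if (rank == 0) print *, 'region time (max over ranks):', tmax, 's  checksum:', s
    end subroutine timed_region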

H) Tools
How do you debug your code? Print statements, TotalView, DDT.
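
For the print-statement approach, tagging each line with the MPI rank and flushing keeps interleaved output usable at scale. A minimal sketch follows; the routine name, message, and value are assumed for illustration:

    ! Rank-tagged debug print sketch (illustrative; message and value are placeholders).
    subroutine debug_print(comm, msg, val)
      use mpi
      implicit none
      integer, intent(in) :: comm
      character(len=*), intent(in) :: msg
      real(kind=8), intent(in) :: val
      integer :: rank, ierr

      call MPI_Comm_rank(comm, rank, ierr)
      write(*, '(a,i5,2a,es12.4)') 'debug rank ', rank, ': ', trim(msg), val
      flush(6)   ! Fortran 2003 FLUSH statement; unit 6 assumed to be stdout
    end subroutine debug_print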

I) Status and Scalability
How does your application scale now? OK.
Where do you want to be in a year? Better scaling.
What are your top 5 pains? (be specific)
– I/O scaling: large output.
– MPI and OpenMP scaling.
What did you change to achieve current scalability? Non-blocking communication and distributed I/O (see the sketch below).
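
The non-blocking communication mentioned above is, in generic form, an Irecv/Isend pair per neighbor followed by a single Waitall. The sketch below is illustrative (neighbor ranks, halo width, and buffers are assumptions), not code taken from the model:

    ! Illustrative non-blocking halo exchange (not taken from the model source).
    subroutine halo_exchange(comm, left, right, n, sendl, sendr, recvl, recvr)
      use mpi
      implicit none
      integer, intent(in) :: comm, left, right, n    ! neighbor ranks and halo width
      real(kind=8), intent(in)  :: sendl(n), sendr(n)
      real(kind=8), intent(out) :: recvl(n), recvr(n)
      integer :: req(4), ierr

      ! Post receives first, then sends; work can be overlapped before the Waitall.
      call MPI_Irecv(recvl, n, MPI_DOUBLE_PRECISION, left,  1, comm, req(1), ierr)
      call MPI_Irecv(recvr, n, MPI_DOUBLE_PRECISION, right, 2, comm, req(2), ierr)
      call MPI_Isend(sendl, n, MPI_DOUBLE_PRECISION, left,  2, comm, req(3), ierr)
      call MPI_Isend(sendr, n, MPI_DOUBLE_PRECISION, right, 1, comm, req(4), ierr)
      call MPI_Waitall(4, req, MPI_STATUSES_IGNORE, ierr)
    end subroutine halo_exchange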

Scalability

J) Roadmap
What do you hope to learn / discover? Tools for OpenMP performance analysis.
What improvements will you need to make? MPI, OpenMP, and I/O scaling; load balance.
What are your plans? Performance analysis.