Components and Concurrency in ESMF. Nancy Collins, ESMF Community Meeting, July 21, 2005.


Components and Concurrency in ESMF
Nancy Collins, ESMF Community Meeting, July 21, 2005
Applications: GMAO Seasonal Forecast, NCAR/LANL CCSM, NCEP Forecast, GFDL FMS Suite, MITgcm, NASA GSFC GSI

Current Status Starting with the current ESMF release, ESMF Components can be executed concurrently as well as sequentially. Child components can be created on a subset of the parent component's PETs (via the new petList argument to the ESMF_ComponentCreate() call). The main loop looks much the same, but a new Blocking flag can be set so the Run call returns immediately on all PETs which do not run the component; those PETs can then start another component. Ensemble runs are now supported as well.
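ESMF itself is a Fortran framework; as a language-neutral illustration, here is a minimal Python sketch of the petList and non-blocking ideas above. The `Component` class and `run` method are hypothetical names for this sketch, not the ESMF API.

```python
# Conceptual sketch of petList + non-blocking Run (not the ESMF API).

class Component:
    def __init__(self, name, pet_list):
        self.name = name
        self.pet_list = pet_list           # subset of the parent's PETs

    def run(self, my_pet, blocking=True):
        if my_pet not in self.pet_list:
            if not blocking:
                # Non-blocking: PETs that do not run this component
                # return at once and are free to start another one.
                return "returned immediately"
            return "waited at barrier"     # blocking: wait for completion
        return f"{self.name} computed on PET {my_pet}"

atm = Component("ATM", pet_list=[0, 1])
ocn = Component("OCN", pet_list=[2, 3])

# On PET 2, the ATM run returns at once, so OCN can start right away.
print(atm.run(my_pet=2, blocking=False))   # returned immediately
print(ocn.run(my_pet=2, blocking=False))   # OCN computed on PET 2
```

The same pattern, with blocking=True, gives the sequential behavior: every PET waits until the component finishes before moving on.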

Sequential vs Concurrent

ESMF_StateReconcile() ESMF_StateReconcile() is a new ESMF_State call. ESMF communications are all computed in parallel: each PET independently computes what data it needs to send and to whom, and what data it will receive and from whom. The reconcile process ensures all PETs have the global grid information for all data items, even those without local data, so each PET can then work out who it will receive data from and to whom it must send.
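A small Python sketch of the reconcile idea (the function and field names are hypothetical, not the ESMF API): each PET starts out knowing only the metadata for objects it helped create, and reconciling gives every PET the same global view, even where it holds no local data.

```python
# Conceptual sketch of "reconcile" (not the ESMF API).

def reconcile(views):
    """views[pet] = metadata this PET already knows (name -> grid info)."""
    global_view = {}
    for v in views:
        global_view.update(v)                  # union of everyone's knowledge
    return [dict(global_view) for _ in views]  # same view on every PET

# PETs 0-1 created the ATM field; PETs 2-3 created the OCN field.
views = [{"ATM:T": "grid A"}, {"ATM:T": "grid A"},
         {"OCN:SST": "grid B"}, {"OCN:SST": "grid B"}]

views = reconcile(views)
# Now every PET knows both fields and can plan its sends and receives.
```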

Creating Ensembles Multiple Components running the same module code. Each can compute with a different set of input parameters.

Diagrams: how ensembles seem vs. how they actually are.

Get/SetInternalState Each component derived type maintains a private pointer to an allocated user data block. Think of shared vs. private variables in OpenMP, for example.

Futures: Threading Threads allow a single processor to have more than one stream of execution. Threading is enabled in the Child component at SetServices time by making a VM Set call.
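As a language-neutral illustration of multiple streams of execution, here is a tiny sketch using Python's threading module (an analogy only; ESMF enables threads through its VM, not through Python):

```python
# Two independent streams of execution sharing one address space.
import threading

results = []
lock = threading.Lock()

def stream(name):
    with lock:
        results.append(name)   # each thread runs its own stream of work

threads = [threading.Thread(target=stream, args=(f"stream-{i}",))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()                   # wait for both streams to finish

print(sorted(results))         # ['stream-0', 'stream-1']
```

Because the threads share memory, the lock is needed around the shared list; that is exactly the shared-data concern the later ensemble slides return to.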

Sequential Execution All PETs run the same component at the same time -- but each component is still computing in parallel.

Sequential Execution Details The Framework has a thin code layer between actual calls from one Component to the other. The Parent Component controls the execution sequencing.

Concurrent Execution Concurrency means different PETs are executing different Components.

Concurrent Details (1) The Parent Component decides how to allocate resources for the Child Components at Component Create time by giving each Child a list of PETs. It also controls the execution sequence of Child components.

Concurrent Details (2) The Framework tracks which PETs are supposed to run a Component. If the Component Run routine is called with the Blocking flag set to Non-Blocking, the Run call returns immediately on those PETs which should not be executing the Component.

Concurrent with Blocking If Component Run is called with the Blocking flag on, the red lines show where the Framework adds Barriers on all PETs to ensure the Component has finished before any Child returns.

Communications: Redist Basics Redistribution is used when the exact same grid is decomposed in different ways in different components. Running sequentially, the source, destination, and coupler all access the same data with no problems. Each PET computes the intersection between grids independently and stores the send and receive sizes and offsets.
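A minimal Python sketch of the intersection step (the `local_sends` helper is hypothetical, not the ESMF API): each PET intersects its own slice of the source decomposition with every destination slice, recording the size and offset of each message it must send.

```python
# Conceptual sketch: intersect my local source slice with each
# destination slice and record send sizes and offsets (not ESMF).

def local_sends(my_range, dst_ranges):
    """my_range = (start, stop) of my source data; dst_ranges[pet] likewise."""
    sends = {}
    for pet, (d0, d1) in dst_ranges.items():
        lo, hi = max(my_range[0], d0), min(my_range[1], d1)
        if lo < hi:                          # non-empty intersection
            sends[pet] = {"offset": lo - my_range[0], "count": hi - lo}
    return sends

# Same 8-point grid, block-decomposed two different ways.
dst_ranges = {0: (0, 3), 1: (3, 8)}          # destination decomposition
print(local_sends((0, 4), dst_ranges))
# {0: {'offset': 0, 'count': 3}, 1: {'offset': 3, 'count': 1}}
```

No PET needed to ask any other PET anything: the global decomposition tables alone determine every message.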

Redist on same set of PETs The communication code in ESMF follows a “me-centric” model: each PET computes what data it needs to send and to whom, and what data it expects to receive and from whom. This scales well as the number of processors grows, but it does require that each PET have the same information about the global grid and its decomposition.

Redist on Exclusive PETs What happens when the coupler tries to couple Components that do not run on exactly the same set of PETs, or that run on mutually exclusive PET sets?

After ESMF_StateReconcile() After calling ESMF_StateReconcile(), all PETs have the same global grid information, even if they contain no local data for that Field/Grid. Now the communication code can determine if it will be receiving data from a remote grid, or if it needs to send data to a remote grid.

Creating Ensembles Ensembles are running the same code with different parameter settings.

Ensemble Details ESMF ensembles running in the same executable look much like exclusive Components which do not communicate between themselves.

How Ensembles Seem What you might think you have when you create multiple components using the same module and entry points.

How Ensembles Are But in reality the two instances of the same component share both code and module global data. If one Component modifies the global data, all Components see the change.
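The shared-code, shared-globals pitfall is easy to demonstrate in any language; here is a Python analogy (not ESMF code, and the names are made up for the sketch):

```python
# Two instances of the "same component" share module-level data:
# a change made through one instance is visible to the other.

step_count = 0                 # module global, shared by every instance

class EnsembleMember:
    def run(self):
        global step_count
        step_count += 1        # both members bump the SAME counter

a, b = EnsembleMember(), EnsembleMember()
a.run()
b.run()
print(step_count)              # 2, not 1 per member: the data is shared
```

This is exactly why per-instance data must live somewhere other than module globals, which is what the internal-state calls provide.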

SetInternalState at Init Time During Init, allocate a local derived type and store the pointer in the framework. There will be a separate pointer per instance of each component.

GetInternalState at Run Time During Run, retrieve the InternalState pointer from the framework. Using data inside that derived type, determine any per-instance settings, variables, local data, etc.
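The Init/Run pattern above can be sketched in Python as follows. The `set_internal_state`/`get_internal_state` names are hypothetical stand-ins for this analogy, not the ESMF Fortran calls: the framework keeps one private state block per component instance, so instances that share code do not share data.

```python
# Analogy for Set/GetInternalState (not the ESMF API): one private
# state block per component instance, held by the framework.

_internal = {}                           # framework-side storage

def set_internal_state(comp_id, state):  # called during Init
    _internal[comp_id] = state

def get_internal_state(comp_id):         # called during Run
    return _internal[comp_id]

# Init: each ensemble member stores its own parameter block.
set_internal_state("member0", {"dt": 600, "steps": 0})
set_internal_state("member1", {"dt": 300, "steps": 0})

# Run: each member retrieves only its own block and updates it.
s = get_internal_state("member1")
s["steps"] += 1

print(get_internal_state("member0")["steps"],
      get_internal_state("member1")["steps"])   # 0 1
```

Contrast this with the module-global counter on the previous slide: here member1's update leaves member0's state untouched.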

Futures: Transforms Transforms allow communication without returning to the calling Component. This weakens the isolation between Components by introducing communication-sequencing dependencies.