Model Coupling Environmental Library

Goals
Develop a framework in which geophysical models can be easily coupled together
– Work across multiple platforms and operating systems
– Simple to use: minimal API, easy to add functionality
– Minimally intrusive into existing code
– Support both one-way and two-way coupling
– Flexible coupling scheduling
– "Reasonably" high performance

Target User Base
MCEL was designed for use at DoD research laboratories and operational centers
– Users are oceanographers and meteorologists, not computer scientists
– Simplicity over functionality! (to this audience, "FORTRAN 77 is a modern language")
MCEL provides a simple FORTRAN and C API
– 29 total commands
– Object-oriented languages can access MCEL directly
Coupling a legacy model with MCEL typically requires about 100 additional lines of code

Current Approach
Server/client design
– Centralized server(s) store coupling information
– Clients (numerical models) store data to and retrieve data from the server
Data passes through filter(s) between server and client
– Filters transform data into a ready-to-use form for clients: interpolation, data combination, physics, …
– Filters are not tied to any particular model, at most to its numerics/physics, so they are reusable
The server archives coupling data as a set of netCDF files, allowing a run to be relocated and restarted
– File output runs in a dedicated thread at lower priority so it does not slow services
– Once data is stored it can be scrubbed from memory; scrubbing is also handled by a dedicated thread (see the sketch below)
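The dedicated file-output and scrubbing threads described above amount to a producer/consumer pattern. The sketch below is purely illustrative and not MCEL source: a hypothetical ArchiveQueue receives stored fields, and a background writer thread drains it to disk. A real implementation would also lower the thread's OS scheduling priority and write netCDF rather than printing; Field, ArchiveQueue, and write_to_netcdf are all assumed names.

// Minimal sketch of a dedicated archive thread (hypothetical, not MCEL source).
// Stored fields are queued; a background thread writes them out, after which
// the in-memory copy is safe to scrub.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

struct Field {                            // hypothetical coupling field
    std::string name;
    double timestamp;
    std::vector<double> data;
};

class ArchiveQueue {
public:
    void push(Field f) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(f)); }
        cv_.notify_one();
    }
    void shutdown() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
    }
    // Writer-thread loop: drain the queue and "archive" each field.
    void writer_loop() {
        std::unique_lock<std::mutex> lk(m_);
        for (;;) {
            cv_.wait(lk, [&] { return done_ || !q_.empty(); });
            if (q_.empty() && done_) return;
            Field f = std::move(q_.front()); q_.pop();
            lk.unlock();
            write_to_netcdf(f);           // stand-in for real netCDF output
            lk.lock();
        }
    }
private:
    static void write_to_netcdf(const Field& f) {
        std::cout << "archived " << f.name << " @ t=" << f.timestamp << "\n";
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Field> q_;
    bool done_ = false;
};

int main() {
    ArchiveQueue archive;
    std::thread writer(&ArchiveQueue::writer_loop, &archive);
    archive.push({"sst", 0.0, std::vector<double>(512 * 512, 290.0)});
    archive.push({"sst", 3600.0, std::vector<double>(512 * 512, 291.0)});
    archive.shutdown();
    writer.join();
}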

Typical Example
[Diagram: ocean, wave, and atmospheric models coupled through the MCEL server. Exchanged fields include radiation stress, surface forcing, winds, wind/wave energy transfer, and SST, with interpolation filters between the server and the receiving models.]

Communication Details
Communication is handled through CORBA
– Cross-platform
– Free
– Multi-language support (C++, Java, Python, …); the MCEL API adds FORTRAN/C bindings and object hiding
– Security through SSL
– Passive waits are available, allowing over-subscription of CPUs
– Components can come and go as needed
– Built-in multithreaded support

Communication Infrastructure

Component Details
Currently, all operations are initiated by the components
Models are required to "register" themselves with the coupling framework
– Define grid and variable information
– Grids can be one-, two-, or three-dimensional; regular, structured, or unstructured; in lat/long or Cartesian coordinates
– Variables can be any of the standard MPI types
Models then set up their filter tree
Models periodically store data to the server and periodically request data from the filter(s), as sketched below
Models mark all data with a standard time stamp
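As a purely illustrative sketch of the lifecycle described above, the C++ fragment below shows the shape of the call sequence: register the grid and variables once, then alternate timestamped stores and retrievals inside the time loop. The mcel::Client class and its methods are hypothetical stand-ins; the real API is a FORTRAN/C binding with 29 commands and is not reproduced here.

// Illustrative call sequence for an MCEL-style client (all names hypothetical).
#include <iostream>
#include <string>
#include <vector>

namespace mcel {                                   // hypothetical stub API
struct Client {
    void register_grid(int nx, int ny) {
        std::cout << "registered " << nx << "x" << ny << " grid\n";
    }
    void register_variable(const std::string& name) {
        std::cout << "registered variable " << name << "\n";
    }
    void store(const std::string& name, double t, const std::vector<double>&) {
        std::cout << "store    " << name << " @ t=" << t << "\n";
    }
    std::vector<double> retrieve(const std::string& name, double t) {
        std::cout << "retrieve " << name << " @ t=" << t << "\n";
        return std::vector<double>(512 * 512, 0.0);
    }
};
}  // namespace mcel

int main() {
    mcel::Client client;

    // 1. Registration: describe the model's grid and variables once.
    client.register_grid(512, 512);
    client.register_variable("sst");      // data this model provides
    client.register_variable("winds");    // data this model needs

    // 2. Time loop: each coupling interval, store outputs and pull inputs
    //    (through whatever filter tree was configured), tagged by time stamp.
    std::vector<double> sst(512 * 512, 290.0);
    for (double t = 0.0; t <= 7200.0; t += 3600.0) {
        client.store("sst", t, sst);
        std::vector<double> winds = client.retrieve("winds", t);
        // ... advance the model one coupling interval using `winds` ...
    }
}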

Component Details (cont.)
Components are all generic; coupling is performed through variable-name matching
Components can use any parallelization scheme
– MPI, OpenMP, vector, pthreads, …
– Each component process/thread can communicate independently or through the master
N-to-M communication between components currently becomes N-to-1-to-M through the server; the "1" is multithreaded, and work is ongoing to remove this bottleneck
The API allows data to be pre-fetched in a non-blocking way (see the sketch below)
Concept: on a cluster of SMP machines, look in a local cache first before going to the server
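The non-blocking pre-fetch and local-cache idea can be pictured with a future: request the field for the next coupling time before it is needed, check a node-local cache first, and block only when the data is actually consumed. This is a generic C++ sketch under assumed names (fetch, fetch_from_server, local_cache), not MCEL code.

// Sketch of non-blocking pre-fetch with a node-local cache (hypothetical):
// ask for the next field early, block only when it is used.
#include <future>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

using Field = std::vector<double>;
static std::map<std::pair<std::string, double>, Field> local_cache;

// Stand-in for a remote (CORBA) retrieval through the filter tree.
Field fetch_from_server(const std::string& name, double t) {
    return Field(512 * 512, t);            // dummy payload
}

// Check the node-local cache first; otherwise go to the server.
Field fetch(const std::string& name, double t) {
    auto key = std::make_pair(name, t);
    auto it = local_cache.find(key);
    if (it != local_cache.end()) return it->second;
    Field f = fetch_from_server(name, t);
    local_cache[key] = f;
    return f;
}

int main() {
    const double dt = 3600.0;
    // Pre-fetch the first coupling field before entering the time loop.
    std::future<Field> next = std::async(std::launch::async, fetch, "winds", 0.0);

    for (double t = 0.0; t <= 7200.0; t += dt) {
        Field winds = next.get();          // blocks only if not yet arrived
        // Immediately issue the request for the next coupling time so the
        // transfer overlaps with this interval's computation.
        next = std::async(std::launch::async, fetch, "winds", t + dt);
        std::cout << "step at t=" << t << " using " << winds.size() << " values\n";
        // ... model computation for this coupling interval ...
    }
}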

Data Shipment
Upon model registration, grid and variable information is passed between components, filters, and servers
During operation, only the raw data and variable/temporal information is passed around
– Unless a model's grid has changed, in which case the new grid information is distributed
Filter outputs can be shared if they are still in the local cache
Caching of filter inputs is being incorporated
– Useful when each task of an MPI application requests the same data

Filter Details
Filters encapsulate physics or numerics into a reusable package, such as interpolation
Three basic functions: initialize, evaluate, and shutdown
Filters inherit from a virtual C++ base class that provides all the basic MCEL communication infrastructure
– Just supply your filter numerics, in any language
Filters are created in "factories" and can be created and destroyed as required
Filters are multithreaded
Filters can have multiple parents and multiple children
– Compute once, share with many
– The cache is flushed on subsequent calls
(A skeleton of the initialize/evaluate/shutdown contract is sketched below.)
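To make the initialize/evaluate/shutdown contract concrete, here is a hypothetical skeleton: an abstract base class standing in for the MCEL filter base (the real class and its communication plumbing are not shown), plus a trivial blending filter derived from it. FilterBase and LinearBlendFilter are illustrative names, not the actual MCEL API.

// Hypothetical filter skeleton (not the real MCEL base class): a filter
// supplies initialize/evaluate/shutdown; the framework owns communication.
#include <iostream>
#include <vector>

class FilterBase {                       // stand-in for the MCEL C++ base class
public:
    virtual ~FilterBase() = default;
    virtual void initialize() = 0;
    // Inputs from parent filters/models come in, one output goes out.
    virtual std::vector<double> evaluate(
        const std::vector<std::vector<double>>& inputs) = 0;
    virtual void shutdown() = 0;
};

// Example numerics: blend two input fields (e.g., bracketing time levels)
// with a fixed weight -- the kind of reusable operation a filter packages.
class LinearBlendFilter : public FilterBase {
public:
    explicit LinearBlendFilter(double w) : w_(w) {}
    void initialize() override { std::cout << "blend filter ready\n"; }
    std::vector<double> evaluate(
        const std::vector<std::vector<double>>& inputs) override {
        const auto& a = inputs.at(0);
        const auto& b = inputs.at(1);
        std::vector<double> out(a.size());
        for (std::size_t i = 0; i < a.size(); ++i)
            out[i] = (1.0 - w_) * a[i] + w_ * b[i];
        return out;
    }
    void shutdown() override { std::cout << "blend filter done\n"; }
private:
    double w_;
};

int main() {
    LinearBlendFilter filter(0.25);
    filter.initialize();
    auto out = filter.evaluate({{10.0, 20.0}, {30.0, 40.0}});
    std::cout << "blended: " << out[0] << ", " << out[1] << "\n";  // 15, 25
    filter.shutdown();
}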

Performance Details
Results are for a 512x512 grid; when interpolation is used, data is interpolated onto an unstructured grid with 40K points
– MCEL storage and retrieval typically take on the order of 5 ms (no filters) for a single process on the same node as the server
– Storage and retrieval typically take on the order of 10 ms (no filters) for a single process on a different node from the server (Compaq or IBM)
– Storage typically takes on the order of 30 ms (no filters) for a 32-process job on nodes other than the server's (Compaq or IBM)
– Retrieval through an interpolation filter for a 32-process parallel job typically takes 0.1 to 0.2 seconds
(A back-of-the-envelope overhead estimate follows.)
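To see why coupling every timestep is about the practical limit (see Closing), a rough estimate helps. The retrieval cost below is the measured 0.1-0.2 s from this slide; the per-timestep compute cost and the inner-loop count are assumed placeholders for illustration, not MCEL measurements.

// Back-of-the-envelope coupling overhead: measured retrieval cost from the
// slide above combined with ASSUMED per-timestep costs.
#include <iostream>

int main() {
    const double retrieval_s = 0.2;   // worst-case filtered retrieval (measured)
    const double timestep_s  = 10.0;  // assumed model compute time per step
    const int    exchanges   = 2;     // assumed fields retrieved per step

    double overhead = exchanges * retrieval_s / timestep_s;
    std::cout << "coupling every timestep: " << overhead * 100 << "% overhead\n";
    std::cout << "coupling in an inner loop of 20 iterations: "
              << overhead * 20 * 100 << "% overhead\n";
}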

Models Currently MCEL-Aware
ADCIRC, ADH, COAMPS, HYCOM, LSOM, NCOM, REF/DIF, STWAVE, SWAN, WAM, WaveWatch, WRF

Closing
MCEL is designed with legacy codes in mind and to be as easy to use as possible
MCEL is reasonable for coupling every timestep, at most
– The overhead is excessive for inner-loop coupling
MCEL is in alpha release
MCEL is publicly available via anonymous CVS
An MCEL tutorial will be available at the DoD HPCMO UGC meeting in June