WRF Outline: Overview and Status; WRF Q&A

WRF
Outline:
- Overview and Status
- WRF Q&A
www.wrf-model.org
John Michalakes
Mesoscale and Microscale Meteorology Division
National Center for Atmospheric Research
michalak@ucar.edu

Weather Research and Forecast (WRF) Model

Principal Partners:
- NCAR Mesoscale and Microscale Meteorology Division
- NOAA National Centers for Environmental Prediction
- NOAA Forecast Systems Laboratory
- OU Center for the Analysis and Prediction of Storms
- U.S. Air Force Weather Agency
- U.S. DoD HPCMO
- NRL Marine Meteorology Division
- Federal Aviation Administration

Additional Collaborators:
- NOAA Geophysical Fluid Dynamics Laboratory
- NASA GSFC Atmospheric Sciences Division
- NOAA National Severe Storms Laboratory
- EPA Atmospheric Modeling Division
- University community

A large, collaborative effort that pools resources and talents to develop an advanced community mesoscale and data-assimilation system:
- Focus on 1-10 km; accurate, efficient, and scalable over a broad range of scales
- Advanced physics, data assimilation, and nesting
- Flexible, modular, and performance-portable with a single-source code

WRF Software Architecture

The software is organized into a package-independent driver layer, a mediation layer, and a model layer, with well-defined interfaces to package-dependent external components:
- Driver layer: akin to a framework; responsible for I/O, communication, multiple nests, and state data; provides configuration inquiry, distributed-memory communication, and the I/O API.
- Mediation layer (the solve routine): the interface between model and driver; also handles dereferencing driver-layer objects into the simple data structures used by the model layer; includes OpenMP threading over tiles and the config module.
- Model layer: WRF's computational subroutines, which are tile-callable and thread-safe (see the sketch below).
- External packages (package-dependent): data formats and parallel I/O, message passing, and threads.
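
To make the model-layer contract concrete, here is a minimal, hypothetical sketch, not actual WRF source: the routine name relax_tile and the toy relaxation computation are invented, but the argument style follows the tile-callable convention described above, with separate domain, memory, and tile index ranges and no I/O, communication, or global state inside the routine.

! Hypothetical tile-callable model-layer routine (invented example, not WRF code).
SUBROUTINE relax_tile( t, tendency, dt,                &
                       ids, ide, jds, jde, kds, kde,   &   ! domain dimensions
                       ims, ime, jms, jme, kms, kme,   &   ! memory (allocated) dimensions
                       its, ite, jts, jte, kts, kte )      ! tile dimensions
   IMPLICIT NONE
   INTEGER, INTENT(IN)    :: ids, ide, jds, jde, kds, kde, &
                             ims, ime, jms, jme, kms, kme, &
                             its, ite, jts, jte, kts, kte
   REAL,    INTENT(IN)    :: dt
   REAL,    INTENT(IN)    :: t(ims:ime, kms:kme, jms:jme)
   REAL,    INTENT(INOUT) :: tendency(ims:ime, kms:kme, jms:jme)
   INTEGER :: i, j, k

   ! Compute only on this tile; clamping against the domain end keeps the
   ! loops off the staggered boundary row and column.
   DO j = jts, MIN( jte, jde-1 )
      DO k = kts, kte
         DO i = its, MIN( ite, ide-1 )
            tendency(i,k,j) = tendency(i,k,j) - dt * t(i,k,j)
         END DO
      END DO
   END DO
END SUBROUTINE relax_tile

Because the routine touches only the tile it is handed and carries no hidden state, the driver can hand tiles to any number of OpenMP threads, or call it serially, without changing the model-layer code.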

WRF Irons in the Fire

New dynamical cores:
- NCEP: NMM core (June 2003)
- NCEP: semi-implicit semi-Lagrangian core (?)
- NRL: COAMPS integration (?)
- China Meteorological Administration: GRAPES integration (June 2004)

Data assimilation:
- WRF 3DVAR (later 2003)
- WRF 4DVAR (2005-06)

Development initiatives:
- WRF Developmental Testbed Center (summer 2003 and ongoing)
- Hurricane WRF (2006)
- NOAA air quality initiative (WRF-Chem) (2003-04)
- NCSA: WRF/ROMS coupling using MCT (MEAD) (2003 and ongoing)
- DoD: WRF/HFSOLE coupling using MCEL (PET) (2003)

WRF software development:
- WRF nesting and research release (later 2003)
- Vector performance: Earth Simulator, Cray X-1 (?)
- NASA ESMF integration (2004): start with the time manager and proof-of-concept dry dynamics
- NCSA: MEAD and HDF5 I/O (?)

WRF Q&A

1. What system requirements do you have, e.g. Unix/Linux, CORBA, MPI, Windows?
   UNIX/Linux, MPI, OpenMP (optional). Happiest with more than 0.5 GB of memory per distributed-memory process.

2. Can components spawn other components? What sorts of relationships are allowed: directory-structure model, parent-child process model, flat model, peer-to-peer model, client-server, etc.?
   WRF can spawn nested domains within this component; no other spawning. Applications are basically peer-to-peer, though the underlying coupling infrastructure may be implemented as client-server or other models.

3. What programming-language restrictions do you have currently?
   Using Fortran90 and C, but no restrictions per se.

4. Are you designing to be programming-language neutral?
   Yes. For this reason we enjoin against passing derived data types through the model-layer interface. (This has been violated by 3DVAR as implemented in the WRF ASF, however.)

5. What sort of component abstraction do you present to an end user: an atmosphere component, an ocean component, an analysis component, a generic component, etc.?
   Generic. The person putting the components together is required to know what each needs from the other. Models see coupling as I/O and read and write coupling data through the WRF I/O/coupling API (sketched schematically below).
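
Answer 5's "coupling looks like I/O" idea can be illustrated with a small, self-contained sketch. The names cpl_open_for_write, cpl_write_field, and cpl_close are hypothetical stand-ins for a write-style I/O/coupling interface, not the actual WRF API routines; the stub implementation here simply dumps the field to a file, but the same calls could equally hand the data to another component or a coupling library.

! Hypothetical "coupling as I/O" sketch; the cpl_* names are invented and do
! not correspond to the real WRF I/O/coupling API.
MODULE cpl_io_sketch
   IMPLICIT NONE
CONTAINS

   SUBROUTINE cpl_open_for_write( streamname, handle, ierr )
      CHARACTER(*), INTENT(IN)  :: streamname
      INTEGER,      INTENT(OUT) :: handle, ierr
      handle = 41                                           ! stub: one fixed Fortran unit
      OPEN( UNIT=handle, FILE=TRIM(streamname)//'.cpl', &
            FORM='UNFORMATTED', STATUS='REPLACE', IOSTAT=ierr )
   END SUBROUTINE cpl_open_for_write

   SUBROUTINE cpl_write_field( handle, fieldname, field, nx, ny, ierr )
      INTEGER,      INTENT(IN)  :: handle, nx, ny
      CHARACTER(*), INTENT(IN)  :: fieldname
      REAL,         INTENT(IN)  :: field(nx,ny)
      INTEGER,      INTENT(OUT) :: ierr
      WRITE( handle, IOSTAT=ierr ) fieldname, nx, ny, field ! stub: name, dims, data
   END SUBROUTINE cpl_write_field

   SUBROUTINE cpl_close( handle, ierr )
      INTEGER, INTENT(IN)  :: handle
      INTEGER, INTENT(OUT) :: ierr
      CLOSE( handle, IOSTAT=ierr )
   END SUBROUTINE cpl_close

END MODULE cpl_io_sketch

PROGRAM couple_as_io_demo
   USE cpl_io_sketch
   IMPLICIT NONE
   REAL    :: sst(4,3)      ! toy "sea surface temperature" coupling field
   INTEGER :: handle, ierr

   sst = 288.0              ! constant field, just for the demo
   ! The model-facing code only sees open / write / close, exactly like output.
   CALL cpl_open_for_write( 'atm_to_ocean', handle, ierr )
   CALL cpl_write_field( handle, 'SST', sst, 4, 3, ierr )
   CALL cpl_close( handle, ierr )
END PROGRAM couple_as_io_demo

The design point is that the model-facing code is identical whether the stream ends up in a file or in another running component; only the implementation behind the interface changes.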

WRF Q&A

9. Does data always have to be copied between components, or are there facilities for sharing references to common structures across component boundaries? When, how, and why is this needed?
   All data is exchanged through the WRF I/O/coupling API; how this is implemented is up to the API. However, the API does not presently have semantics for specifying structures that are common across component boundaries.

13. What levels and "styles" of parallelism/concurrency or serialization are permitted or excluded? E.g., can components be internally parallel, can multiple components run concurrently, can components run in parallel?
   WRF runs distributed-memory, shared-memory, or hybrid (see the sketch below). The WRF I/O/coupling API is assumed non-thread-safe. There is no restriction on concurrency/parallelism with other components.

14. Do components always have certain specific functions/methods?
   WRF always produces a time integration of the atmosphere; however, there are several dynamics options and numerous physics options. Note: the time loop is in the driver layer, and it is straightforward to run over framework-specified intervals (nested domains do this under the control of their parent domains).
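
A minimal sketch of the hybrid layout described in answer 13, assuming an MPI library and an OpenMP-capable compiler (built with something like mpif90 -fopenmp): each distributed-memory process would own a patch of the domain, and OpenMP threads inside the process work through that patch one tile at a time. The program below is an invented toy, not WRF code.

! Invented toy program: MPI ranks at the outer (distributed-memory) level,
! OpenMP threads over tiles at the inner (shared-memory) level.
PROGRAM hybrid_tiles_sketch
   USE mpi
   IMPLICIT NONE
   INTEGER, PARAMETER :: numtiles = 8
   INTEGER :: ierr, myrank, tile
   REAL    :: work(numtiles)

   CALL MPI_Init( ierr )                                ! distributed-memory level
   CALL MPI_Comm_rank( MPI_COMM_WORLD, myrank, ierr )

   work = 0.0
!$OMP PARALLEL DO PRIVATE(tile)
   DO tile = 1, numtiles                                ! shared-memory level: threads over tiles
      work(tile) = REAL( myrank*numtiles + tile )       ! stand-in for a tile-callable solver call
   END DO
!$OMP END PARALLEL DO

   PRINT *, 'rank', myrank, 'processed', numtiles, 'tiles; sum =', SUM(work)
   CALL MPI_Finalize( ierr )
END PROGRAM hybrid_tiles_sketch

The same source supports pure distributed-memory (OpenMP disabled), pure shared-memory (one MPI rank), or hybrid runs, which is the flexibility answer 13 describes.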

WRF Q&A

15. What, if any, virtualization of process, thread, and physical CPU do you use?
   The unit of work is a tile; model-layer subroutines are designed to be "tile-callable". For distributed memory, the mapping of distributed-memory processes to OS-level processes and physical processors is up to the underlying communication layer. For shared memory, it is up to the underlying thread package (we currently use OpenMP).

16. Can components come and go arbitrarily throughout execution?
   WRF nested domains (which are part of this component) can be dynamically instantiated and uninstantiated. WRF as a component has a relatively high startup overhead, so it would not be ideal as a transient component.

17. Can compute resources be acquired/released throughout execution?
   Definitely not at the application layer at this time; the decomposition is basically static (see the sketch below), and the number of distributed-memory processes cannot change over the course of a run. We intend to allow migration of work between processes for load balancing. Whole processes might someday migrate, but WRF would have to include support for migration of state, which it does not currently have. Using a smaller or larger number of shared-memory threads would be up to the thread package.
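
As an illustration of the static decomposition mentioned in answer 17, here is a small, self-contained sketch (invented, not WRF code) that computes fixed block-style patch bounds for each distributed-memory process once at startup; nothing in it can change during a run.

! Invented sketch of a static block decomposition of a 1-D index range.
PROGRAM static_decomp_sketch
   IMPLICIT NONE
   INTEGER, PARAMETER :: domain_end = 100   ! toy 1-D domain: i = 1 .. domain_end
   INTEGER, PARAMETER :: nranks = 7         ! number of distributed-memory processes
   INTEGER :: rank, is, ie

   DO rank = 0, nranks - 1
      CALL patch_bounds( 1, domain_end, nranks, rank, is, ie )
      PRINT '(A,I3,A,I4,A,I4)', 'rank ', rank, ' owns i = ', is, ' .. ', ie
   END DO

CONTAINS

   ! Divide the index range ids..ide into nproc contiguous patches; the first
   ! MOD(n,nproc) ranks each get one extra point.
   SUBROUTINE patch_bounds( ids, ide, nproc, me, is, ie )
      INTEGER, INTENT(IN)  :: ids, ide, nproc, me
      INTEGER, INTENT(OUT) :: is, ie
      INTEGER :: n, base, rem
      n    = ide - ids + 1
      base = n / nproc
      rem  = MOD( n, nproc )
      is   = ids + me*base + MIN( me, rem )
      ie   = is + base - 1
      IF ( me < rem ) ie = ie + 1
   END SUBROUTINE patch_bounds

END PROGRAM static_decomp_sketch

Because the bounds depend only on domain size, process count, and rank, they are fixed for the life of the run, matching the static decomposition described above.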

WRF Q&A

18. Does your system have an initialization phase?
   Yes, a big one. Initial I/O is relatively costly.

19. Is the high-level control syntax the same in different circumstances, e.g. serial component to serial component versus parallel M component to parallel N component?
   Not strictly applicable, since we are talking about a single component, WRF. However, WRF can easily be subroutine-ized.

20. What standards must components adhere to: languages, standard functions/APIs, standard semantics, etc.?
   Standards in WRF apply internally, between layers in the software hierarchy, and in the APIs to external packages. The APIs are WRF-specific, allowing flexibility over a range of implementations. We plan to merge the WRF I/O/coupling API with the ESMF API specification, provided it gives similar functionality and interfaces with WRF in the same way.

23. What is your approach to saving and restoring system state?
   Write and read restart data sets at user-specified intervals (see the sketch below).

26. Who are your target component authors?
   Physics developers, dynamical core developers, and the WRF development team.
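
Answer 23's restart behavior reduces to an interval test in the driver's time loop; the sketch below uses invented names and toy numbers, with the restart write standing in as a print statement.

! Toy sketch of writing restart data at a user-specified interval (answer 23).
PROGRAM restart_interval_sketch
   IMPLICIT NONE
   INTEGER, PARAMETER :: dt = 60                 ! model time step (s)
   INTEGER, PARAMETER :: restart_interval = 3600 ! user-specified restart interval (s)
   INTEGER, PARAMETER :: run_length = 10800      ! total run length (s)
   INTEGER :: step, t

   DO step = 1, run_length / dt
      ! ... advance the solution one time step here ...
      t = step * dt
      IF ( MOD( t, restart_interval ) == 0 ) THEN
         PRINT *, 'writing restart data set at t =', t, 's'   ! stand-in for the restart write
      END IF
   END DO
END PROGRAM restart_interval_sketch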