Slide 1: Current status of NEMOVAR. Kristian Mogensen. NEMOVAR-LEFE Workshop, 22/3 2007.

Slide 2: Outline of talk
- Why do we need NEMOVAR? Isn't OPAVAR good enough?
- Goals of the NEMOVAR project
- Implementation plan
- Where are we now
- Outstanding issues

Slide 3: Why do we need NEMOVAR?
Anthony Weaver and colleagues at CERFACS have been developing a variational data assimilation system for OPA version 8.2 (OPAVAR):
- Incremental approach.
- Supports 3D-Var (FGAT) and 4D-Var.
- Has been used with the ORCA2 grid and the TDH tropical Pacific area grid.
- Written mostly in the OPA 8.2 coding style (Fortran 77) with a few extensions, e.g. dynamic memory in a few routines.
- No distributed-memory (MPI) parallelization, only shared-memory (OpenMP) parallelization:
  - The OpenMP scaling is not spectacular.
  - Scaling to higher resolution than ORCA2 is problematic due to memory constraints (>26 GB needed for ORCA1).
OPA 8.2 is no longer actively developed: all work within the OPA developers' team is focused on the new NEMO version of the OPA model.

Slide 4: ECMWF/CERFACS goals for the NEMOVAR project
Short-term goal (in ~2 years):
- To have a 3D-Var system based on NEMO.
- Support distributed-memory parallelization, and possibly also shared-memory parallelization.
- Support for different ORCA configurations (we do not worry about limited-area versions of NEMO).
- Support for profile and altimeter observations and SST products; it should be easy to add a new observation type.
- Multi-incremental, with different resolution in the inner loops compared to the outer loops (not a trivial task).
Long-term goal:
- A full 4D-Var system with all of the above properties.

Slide 5: Splitting the variational problem into outer and inner loops
1. In the initial outer loop:
   - Compute the misfit of the observations relative to the background.
   - Store the initial trajectory for the inner loop.
2. In the inner loop:
   - Minimize the incremental cost function using an iterative procedure to produce an increment.
3. In the subsequent outer loop:
   - Update the trajectory with the increment.
   - Update the misfit of the observations.
   - Update the misfit of the background.
4. IF ( iloop < noutloop ) GOTO 2.
(A minimal code sketch of this loop structure follows below.)
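
The loop structure on slide 5 maps naturally onto a simple driver. The sketch below is illustrative only: the routine names (compute_misfits, store_trajectory, minimize_cost, update_trajectory, update_misfits) and the stub bodies are assumptions for the sake of the example, not actual NEMOVAR interfaces.

```fortran
! Illustrative driver for the incremental scheme on slide 5.
! All routines are stubs; names are hypothetical, not NEMOVAR's.
program incremental_loop_sketch
   implicit none
   integer, parameter :: noutloop = 3   ! number of outer loops (assumed)
   integer :: iloop

   call compute_misfits()    ! misfit of observations vs. background
   call store_trajectory()   ! initial trajectory for the inner loop

   do iloop = 1, noutloop    ! replaces the IF/GOTO on the slide
      call minimize_cost()       ! inner loop: produce an increment
      call update_trajectory()   ! outer loop: apply the increment
      call update_misfits()      ! refresh observation/background misfits
   end do

contains

   subroutine compute_misfits()
      print *, 'computing observation misfits'
   end subroutine

   subroutine store_trajectory()
      print *, 'storing trajectory'
   end subroutine

   subroutine minimize_cost()
      print *, 'minimizing incremental cost function'
   end subroutine

   subroutine update_trajectory()
      print *, 'updating trajectory with increment'
   end subroutine

   subroutine update_misfits()
      print *, 'updating misfits'
   end subroutine

end program
```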

Slide 6: Implementation plan, overview
We have defined the following plan:
- Phase 1: Split the existing OPAVAR Fortran code into separate executables for the inner and outer loops.
- Phase 2: Develop an MPP implementation of the observation operators in the outer loop using NEMO.
- Phase 3: Develop a hybrid system with NEMO in the outer loop and OPAVAR in the inner loop.
- Phase 4: Develop an MPP implementation of 3D-Var with NEMO in both inner and outer loops.
- Phase 5: Develop an MPP implementation of the full 4D-Var with NEMO in both inner and outer loops.
Phases 1 and 2 can be done in parallel. By phase 4 we will have achieved our short-term goal; by phase 5 we will have achieved our long-term goal.

Slide 7: Implementation plan, phase 1
Split the existing OPAVAR Fortran code into separate executables for the inner and outer loops:
- This is needed for phase 3.
- It will allow scientific development using OPAVAR to continue in parallel with the NEMOVAR work:
  - We don't want this to stop while we do the technical work.
  - Not all options of OPAVAR will be migrated to NEMOVAR.
- The split code supports both 3D-Var and 4D-Var, and combinations of both, for multiple inner loops.
- It will verify the approach of using different executables in the inner and the outer loop, similar to what is done for the IFS 4D-Var system.

Slide 8: Implementation plan, phase 2
Adding observation operators to NEMO:
- Import the interpolation routines from OPAVAR.
- Initially, distribute the observations according to the NEMO domain decomposition (load imbalance in the observation operators?); see the sketch below.
- Initially we will focus on the following observation types:
  - Profiles (XBTs, Argo, etc.)
  - SLA data
  - SST data
- Easily extendable to other observations.
The observation operators can be used for other assimilation schemes, and as a diagnostic to compare a model run with observations.
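
Distributing observations by domain decomposition amounts to assigning each observation to the subdomain that contains it. The sketch below is a simplified illustration using a rectangular lon/lat bounding box per MPI task; the variable names, values, and the box test are assumptions, and a curvilinear ORCA grid would need a proper grid-cell search rather than a box test.

```fortran
! Simplified illustration of distributing observations to subdomains.
! Each MPI task would apply this test with its own bounding box.
! Names and the rectangular-box test are illustrative assumptions.
program obs_distribution_sketch
   implicit none
   integer, parameter :: nobs = 5
   real(8) :: lon(nobs), lat(nobs)
   logical :: mine(nobs)
   ! bounding box of this task's subdomain (illustrative values)
   real(8), parameter :: lon_min = -30.d0, lon_max = 10.d0
   real(8), parameter :: lat_min = -10.d0, lat_max = 10.d0
   integer :: i

   lon = (/ -25.d0, 50.d0, 0.d0, -40.d0, 5.d0 /)
   lat = (/ 5.d0, 0.d0, -5.d0, 0.d0, 20.d0 /)

   do i = 1, nobs
      mine(i) = lon(i) >= lon_min .and. lon(i) < lon_max .and. &
                lat(i) >= lat_min .and. lat(i) < lat_max
   end do

   print *, 'observations kept on this subdomain:', count(mine)
end program
```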

Slide 9: Implementation plan, phase 3
Develop a hybrid system with NEMO in the outer loop and OPAVAR in the inner loop:
- Useful to verify the implementation of the NEMO outer loop.
- Since the NEMO inner loop is the "hard" part of the migration to NEMO, this system will be useful for scientific developments for some time.
Two pieces of code will be needed (both also relevant for phase 4):
- Writing of model trajectories in NEMO for input to the OPA inner loop (see the sketch below).
- Reading of increments and applying them to the non-linear states.
We will probably not worry too much about MPP for phase 3. The hybrid can in principle run both 3D-Var and 4D-Var.
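
Writing the trajectory could be done with the standard netCDF Fortran 90 interface, as sketched below for a single 3D temperature field. The file name, dimension sizes, and layout are illustrative assumptions, not the actual NEMOVAR trajectory format, and error checking is omitted for brevity.

```fortran
! Minimal sketch: write one 3D trajectory field with netCDF-F90.
! File name, sizes, and layout are illustrative, not NEMOVAR's format.
program write_trajectory_sketch
   use netcdf
   implicit none
   integer, parameter :: nx = 10, ny = 8, nz = 4
   real(8) :: temp(nx, ny, nz)
   integer :: ncid, xdim, ydim, zdim, varid, ierr

   temp = 20.0d0   ! placeholder model temperature field

   ierr = nf90_create('trajectory.nc', NF90_CLOBBER, ncid)
   ierr = nf90_def_dim(ncid, 'x', nx, xdim)
   ierr = nf90_def_dim(ncid, 'y', ny, ydim)
   ierr = nf90_def_dim(ncid, 'z', nz, zdim)
   ierr = nf90_def_var(ncid, 'votemper', NF90_DOUBLE, &
                       (/ xdim, ydim, zdim /), varid)
   ierr = nf90_enddef(ncid)
   ierr = nf90_put_var(ncid, varid, temp)
   ierr = nf90_close(ncid)
end program
```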

Slide 10: Implementation plan, phase 4
Develop an MPP implementation of 3D-Var with NEMO in both inner and outer loops:
- Parallelization of the control vector will be based on ECMWF IFS experience:
  - A Fortran 90 derived type (control_vectors) is used to define the distributed-memory layout of the control vector.
  - Fortran 90 operations like assignment and dot product are overloaded for this type, and all message passing is done inside the overloaded functions (see the sketch below).
  - This gives flexible code for non-MPP developers.
- The minimization of the cost function will be based on work done at CERFACS.
- Initially we will assume the same resolution in the outer and inner loops; later we will consider different resolutions in the outer and the inner loop.
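
The following sketch shows the idea behind the overloaded operations, assuming a much-simplified type and a user-defined .cdot. operator for the distributed dot product. The type name, operator name, and test values here are illustrative; the real IFS control_vectors type is far richer and also overloads assignment and other operations.

```fortran
! Much-simplified sketch of a distributed control vector whose dot
! product hides the MPI communication from the caller.
module control_vector_mod
   use mpi
   implicit none

   type control_vector
      real(8), allocatable :: local(:)   ! locally owned segment
   end type

   interface operator(.cdot.)
      module procedure cv_dot
   end interface

contains

   function cv_dot(x, y) result(global_dot)
      type(control_vector), intent(in) :: x, y
      real(8) :: global_dot, local_dot
      integer :: ierr
      ! local contribution on this task
      local_dot = dot_product(x%local, y%local)
      ! all message passing is hidden inside the overloaded operator
      call MPI_Allreduce(local_dot, global_dot, 1, MPI_DOUBLE_PRECISION, &
                         MPI_SUM, MPI_COMM_WORLD, ierr)
   end function

end module

program test_control_vector
   use control_vector_mod
   implicit none
   type(control_vector) :: a, b
   real(8) :: s
   integer :: ierr
   call MPI_Init(ierr)
   allocate(a%local(4), b%local(4))
   a%local = 1.0d0
   b%local = 2.0d0
   s = a .cdot. b   ! looks serial to the caller
   print *, 'distributed dot product =', s
   call MPI_Finalize(ierr)
end program
```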

Slide 11: Implementation plan, phase 5
The full 4D-Var system depends on the existence of the tangent-linear and adjoint of the NEMO model:
- We will get the status on this in the next talk.
- We aim to make the phase 4 code flexible enough that adding 4D-Var as an option is easy once the TL/AD is available.
(A generic TL/AD consistency check is sketched below.)
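
One standard way to validate a tangent-linear/adjoint pair, once it exists, is the inner-product identity <Mx, y> = <x, M^T y>, which must hold to machine precision. The sketch below demonstrates the test with a random matrix standing in for the TL model; it is a generic illustration, not the NEMO TL/AD test suite.

```fortran
! Generic adjoint consistency test: <M x, y> must equal <x, M^T y>.
! A random matrix stands in for the tangent-linear model.
program adjoint_test
   implicit none
   integer, parameter :: n = 100
   real(8) :: M(n,n), x(n), y(n), lhs, rhs
   call random_number(M)
   call random_number(x)
   call random_number(y)
   lhs = dot_product(matmul(M, x), y)              ! <M x, y>
   rhs = dot_product(x, matmul(transpose(M), y))   ! <x, M^T y>
   print '(a,es12.4)', 'relative error: ', abs(lhs - rhs) / abs(lhs)
end program
```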

Slide 12: Where are we now
- Phase 1 (the splitting of OPAVAR) has been completed.
- Phase 2 (observation operators in NEMO) is well under way:
  - It has been done for profile observations.
  - Work on including altimeter data is in progress.
  - Work on SST data will start shortly.
- Work on phase 3 (the hybrid system) is just about to start.
- Work on phase 4 (3D-Var with NEMO) is being discussed and is likely to start in Q2 2007.

Slide 13: Some outstanding issues for discussion
We have not yet considered quality control of the observations:
- Some checks could be done between the first outer loop and the first minimization (e.g. a background check, sketched below).
- Other checks could be done before the first outer loop.
How best to implement the change of resolution between outer and inner loops:
- Straight interpolation?
- Something more advanced which better preserves the physical quantities of the ocean fields?
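
As one example of a check that could run between the first outer loop and the first minimization, a conventional background check rejects observations whose departure from the background is large relative to the combined background and observation error. The threshold alpha and the error values below are illustrative assumptions, not NEMOVAR settings.

```fortran
! Sketch of a standard background check: reject an observation when
! (y - H(x_b))^2 > alpha^2 * (sigma_b^2 + sigma_o^2).
! Threshold and error values are illustrative assumptions.
program background_check_sketch
   implicit none
   real(8), parameter :: alpha = 3.0d0   ! rejection threshold (assumed)
   real(8) :: departure, sig_b, sig_o
   departure = 1.2d0   ! y - H(x_b), computed in the first outer loop
   sig_b = 0.5d0       ! background error std. dev. (illustrative)
   sig_o = 0.3d0       ! observation error std. dev. (illustrative)
   if (departure**2 > alpha**2 * (sig_b**2 + sig_o**2)) then
      print *, 'observation rejected by background check'
   else
      print *, 'observation accepted'
   end if
end program
```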