1 GODAE OceanView Intercomparison Task Team Fabrice Hernandez and Matt Martin

2 Contents
- Historical background: Cal/Val activities during GODAE
- Brief overview of existing metrics definitions
- Results from the final GODAE intercomparison
- Issues coming out of that intercomparison
- DISCUSSION:
  - Objectives of the Intercomparison Task Team
  - Main areas on which to focus future effort

3 Historical background: 10 years of cal/val activities in the framework of GODAE
[Timeline figure, 2001-2009: MERSEA Strand 1 and MERSEA IP intercomparison exercises (6-month TOPs; North Atlantic, Mediterranean, European Seas, global; FOAM, TOPAZ, MFS, MERCATOR, HYCOM, DMI-Baltic), the 3-month GODAE intercomparison (ocean basins and global; FOAM, TOPAZ, MERCATOR, HYCOM, MOVE/MRI, BLUElink>, C-NOOFS), the transition from GODAE to GODAE OceanView, and the MyOcean regional systems (NWS-FOAM, ARC-TOPAZ, MED-MFS, GLO-MERCATOR, BAL-DMI, Black Sea, IBI). Supporting formats and services: NetCDF3, COARDS-CF, OPeNDAP/LAS. Metric classes: Class 1/2/3 Atlantic and global, Class 1/2 sea-ice, Class 4 T/S and sea-ice, new Class 1/2.]

4 GODAE metrics definitions
- Class 1 - daily-average model fields interpolated onto pre-defined grids (eddy-permitting view) on specified levels.
- Class 2 - model fields interpolated to pre-defined mooring locations and sections.
- Class 3 - transports through sections and other integrated quantities such as the Meridional Overturning Streamfunction and heat transports.
- Class 4 - assessment of forecasting capabilities through comparison of the model with assimilated and independent observations.
[Figure: vertical axis of specified depth levels, roughly 0-3000 m.]
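As a minimal sketch of the Class 1 idea only (daily-average model fields regridded onto a pre-defined grid), the example below interpolates one field with SciPy. The grid spacing, depth handling and variable names are illustrative assumptions, not the agreed GODAE specification; in practice this would be repeated for each specified level.

```python
# Sketch of a Class 1 style regridding: daily-mean model field -> pre-defined grid.
# Grid resolution and field contents are illustrative assumptions.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Native model grid and a daily-mean field (placeholder values).
model_lat = np.linspace(-75, 75, 301)
model_lon = np.linspace(-180, 180, 721)
daily_mean_temp = np.random.rand(model_lat.size, model_lon.size)

# Pre-defined target grid (here an assumed regular 0.5 degree grid).
target_lat = np.arange(-75, 75.5, 0.5)
target_lon = np.arange(-180, 180, 0.5)

# Bilinear interpolation from the model grid onto the target grid.
interp = RegularGridInterpolator((model_lat, model_lon), daily_mean_temp,
                                 bounds_error=False, fill_value=np.nan)
tlat, tlon = np.meshgrid(target_lat, target_lon, indexing="ij")
class1_field = interp(np.stack([tlat.ravel(), tlon.ravel()], axis=-1))
class1_field = class1_field.reshape(tlat.shape)  # field on the common grid
```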

5 Final GODAE intercomparison
Design of the intercomparison experiment (~2006-2007):
- Extending MERSEA Class 1 and Class 2 metrics to global scale
- Definition of intercomparison objectives:
  a) Demonstrate GODAE operational systems in operations
  b) Share expertise and design validation tools and metrics endorsed by GODAE operational centers
  c) Evaluate the overall scientific quality of the GODAE operational systems
- Implementation of metrics computation
Demonstration phase: 3-month period (Feb, Mar, Apr 2008). Synthesis phase: June 2008 to the GODAE Final Meeting in Nov. 2008. Additional synthesis: until January 2009.
- GODAE metrics produced by various groups: BlueLink, FOAM, HYCOM, Mercator, MOVE/MRI, ...
- Data put on FTP servers and/or OPeNDAP servers.
- Most of the intercomparison results obtained so far have looked at monthly means and standard deviations against climatology or other processed data (e.g. SST analyses).
- No sea-ice results.
- Some intercomparisons (e.g. those done by the Australian group) focused more on comparison with assimilated and independent observations (e.g. SST, SLA, Argo and surface drifters) in the Indian Ocean and South Pacific regions.

6 GODAE systems in comparison
System (domain) | Ocean model | Forcing | Assimilation scheme | Assimilated data
Mercator | NEMO | ECMWF | SEEK-RkF | T, S, SLA, SST
HYCOM | HYCOM | FNOC-NOGAPS | NCODA-MvOI | T, S, (SLA), SST, ice
FOAM | NEMO | UK-Met | - | T, S, SLA, SST, ice
BLUElink (ind/spa) | MOM4 | BoM | BODAS-EnOI | T, S, SLA, TG, SST
TOPAZ (nat/arc) | HYCOM | ECMWF | EnKF | T, S, SLA maps, SST, ice
MOVE/MRI (npa) | MOVE | JMA | MRI 3DVar | T, S, SLA, MG-SST
C-NOOFS (nw-nat) | NEMO | Env. Canada | no assimilation | -

7 [Figure: comparison maps for FOAM, HYCOM and Mercator.]

8 Intercomparison in the TAT
Snapshot SST comparison on 15 February 2008 with respect to OSTIA. Numbers in brackets correspond to RMS differences in the box-limited area in the Gulf of Guinea (15°W-5°E, 5°S-5°N) and in the box-limited area for the Northern Tropical Atlantic (55-15°W, 5-25°N), plotted on the OSTIA figure. Colour scale from -3 to +3 Kelvin. HYCOM (0.73 / 0.35), PSY2V3 (0.34 / 0.39).
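For reference, a minimal sketch of the kind of box-limited RMS difference quoted above (model SST minus a reference analysis such as OSTIA over a lat/lon box). The grids, field names and synthetic values are assumptions; only the box definitions echo the slide.

```python
# Sketch: RMS difference between a model SST field and a reference analysis
# over a lat/lon box (e.g. Gulf of Guinea 15W-5E, 5S-5N). Illustrative only.
import numpy as np

def box_rms_difference(model_sst, ref_sst, lat, lon, lat_bounds, lon_bounds):
    """RMS of (model - reference) over a rectangular box, ignoring NaNs (land)."""
    lat2d, lon2d = np.meshgrid(lat, lon, indexing="ij")
    in_box = ((lat2d >= lat_bounds[0]) & (lat2d <= lat_bounds[1]) &
              (lon2d >= lon_bounds[0]) & (lon2d <= lon_bounds[1]))
    diff = np.where(in_box, model_sst - ref_sst, np.nan)
    return float(np.sqrt(np.nanmean(diff ** 2)))

# Example with synthetic fields on an assumed 0.5 degree grid.
lat = np.arange(-30, 30.5, 0.5)
lon = np.arange(-60, 10.5, 0.5)
model_sst = 25.0 + 0.5 * np.random.randn(lat.size, lon.size)
ostia_sst = 25.0 + 0.5 * np.random.randn(lat.size, lon.size)

rms_guinea = box_rms_difference(model_sst, ostia_sst, lat, lon, (-5, 5), (-15, 5))
rms_nta = box_rms_difference(model_sst, ostia_sst, lat, lon, (5, 25), (-55, -15))
print(f"Gulf of Guinea box RMS: {rms_guinea:.2f} K, N. Trop. Atlantic box RMS: {rms_nta:.2f} K")
```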

9 [Figure: OSTIA SST standard deviation and box-averaged SST, Feb-April 2008, for HYCOM, FOAM, PSY3, PSY2 and OSTIA over the TNA and SAT boxes; scale 0.5°C.]

10 Assessment of EKE in the NAT
[Figure panels: TOPAZ, FOAM, HYCOM, C-NOOFS, SURCOUF, Mercator.]
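As a reminder of the quantity compared here, eddy kinetic energy is commonly computed from velocity anomalies relative to a time mean, EKE = 0.5 (u'^2 + v'^2). The sketch below assumes daily surface velocity arrays with time as the first dimension; shapes and the 90-day window are illustrative, not the diagnostic actually used by each centre.

```python
# Sketch: eddy kinetic energy (EKE) from surface velocity anomalies.
# EKE = 0.5 * (u'^2 + v'^2), primes being departures from the time mean.
import numpy as np

def eddy_kinetic_energy(u, v):
    """Time-mean EKE from velocity arrays shaped (time, lat, lon), in m^2/s^2."""
    u_anom = u - u.mean(axis=0, keepdims=True)
    v_anom = v - v.mean(axis=0, keepdims=True)
    return 0.5 * (u_anom ** 2 + v_anom ** 2).mean(axis=0)

# Example with synthetic daily surface currents over 90 days.
u = 0.2 * np.random.randn(90, 100, 200)
v = 0.2 * np.random.randn(90, 100, 200)
eke_map = eddy_kinetic_energy(u, v)  # 2-D map to compare between systems
```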

11 Monthly comparison in April 2008

12 [Figure-only slide.]

13 Final GODAE intercomparison: general scientific outcomes
- GODAE eddy-permitting systems are consistent (i.e. they qualitatively match the climatology and the general patterns of the ocean circulation) and there are no bad surprises.
- Accuracy assessment reveals differences, biases and possible errors in models or assimilation schemes; this evidence is a first step towards targeted corrections and improvements.
- The impact of horizontal resolution is evident in kinetic energy levels.
- Further work needs to be done to identify the causes of differences between the systems (e.g. impact of forcing, data assimilation schemes, ...).

14 Final GODAE intercomparison
Successes of the intercomparison:
- The hindcasts/forecasts were made available and easily accessible (and people were responsive if there were problems accessing the data).
- Most/all of the work was done using the Class 1 fields. These fields were generally produced using the agreed definitions (at least closely enough to make them relatively easy to use).
- The comparison of the Class 1 fields highlighted some interesting differences between the systems.
- Visibility of this work through scientific communication and publication.
- Observations were made available by the observing community, now involved in operational oceanography and supporting the ocean forecasting centres.

15 Final GODAE intercomparison
Shortcomings of the intercomparison:
- Some systems produced the metrics in their normal operational setting, whereas others were re-run in hindcast mode.
- Some groups upgraded their systems during the intercomparison synthesis.
- Some systems produced forecasts and others only analysis fields.
- Class 1 fields still need some homogenisation.
- Class 2 and 3 metrics were produced by some systems but not all, and no comprehensive assessment of them was carried out.
- Very little/no work was done on Class 4 metrics intercomparison.
- Most GODAE partners used FTP rather than OPeNDAP (not technically efficient).
- It was a demonstration rather than a routine intercomparison.
- A three-month period is a very short time in which to assess ocean forecasting system behaviour and performance.
- It would have been useful to meet to present and discuss results, but the calendar was very tight.
- Several groups had problems contributing fully to the exercise, and human resources dedicated to intercomparison synthesis were not available in all forecasting centres.

16 Discussion and outlook
1. What is the role of a validation/intercomparison Task Team?
2. Strategy of the validation/intercomparison Task Team
3. Clarify the workplan of the Intercomp/Val TT and interactions with: OSE/OSSE TT, Coastal and Shelf Seas TT, Biogeochemical TT, ET-OOFS, WCRP-CAS, WGNE

17 1. What is the role of a validation/intercomparison Task Team?
- Core scientific activity: develop metrics, share experience, highlight differences and cross-fertilise ideas among GODAE centers.
- Be consistent as a group in our monitoring policy: provide tools for routinely monitoring the systems, and for controlling inputs (e.g. link with data providers) and outputs (mandatory link with users).
  Main benefit: improvement of the systems and of product quality.
- Provide visibility as the GODAE community: demonstrations, publications, ...

18 2. Strategy of the validation/intercomparison Task Team
- Rely on new targeted intercomparison exercises?
- Establish permanent monitoring among the OOFS?
- Expect outcomes from regional activities (e.g. MyOcean, US Navy)?

19 2. Strategy of the validation/intercomparison Task Team
Suggested scientific aspects that should be addressed:
- Extend the comparison to other sets of independent observations, e.g. ocean colour, surface drifters.
- Assess the performance of the data assimilation using observation-minus-background and observation-minus-analysis statistics:
  - Useful to show the accuracy of short-range forecasts and the performance of the assimilation.
  - Different analysis time-windows and operational schedules make it difficult to intercompare (o-b) statistics between systems.
- Assess the performance of the model through estimates of forecast skill (see the sketch after this list):
  - Anomaly correlations and RMS differences between forecasts and analyses (and between forecasts and observations).
- Multi-model ensemble statistics: provide error levels and monitoring tools.
- Design user-oriented metrics for targeted applications (ocean climate monitoring, oil spill, S&R, ...): demonstration of routine validation activity.
[Slide groups these items under "Internal" and "External".]
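To make the forecast-skill items concrete, here is a minimal sketch of an anomaly correlation and an RMS difference between a forecast and its verifying analysis, with anomalies taken relative to a climatology. Field names, shapes and the synthetic data are assumptions; it is not any centre's operational scoring code.

```python
# Sketch: forecast skill scores of the kind listed above (anomaly correlation
# and RMS difference of a forecast against the verifying analysis).
import numpy as np

def anomaly_correlation(forecast, verification, climatology):
    """Spatial anomaly correlation coefficient over valid (non-NaN) points."""
    f_anom = forecast - climatology
    v_anom = verification - climatology
    valid = ~np.isnan(f_anom) & ~np.isnan(v_anom)
    f_anom, v_anom = f_anom[valid], v_anom[valid]
    return float(np.sum(f_anom * v_anom) /
                 np.sqrt(np.sum(f_anom ** 2) * np.sum(v_anom ** 2)))

def rms_difference(forecast, verification):
    """RMS of forecast minus verification, ignoring NaNs."""
    return float(np.sqrt(np.nanmean((forecast - verification) ** 2)))

# Example: a synthetic 5-day SST forecast verified against the analysis valid at the same time.
clim = 20.0 + np.zeros((180, 360))
analysis = clim + np.random.randn(180, 360)
forecast = analysis + 0.3 * np.random.randn(180, 360)
print(anomaly_correlation(forecast, analysis, clim), rms_difference(forecast, analysis))
```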

20 3. Clarify the workplan of the Intercomp/Val TT and interactions
- Take into account new OOFS (NCEP, MFS, China, ...).
- Diagnostics that allow the characterisation of biases and long-term changes (link with GSOP); a simple sketch follows after this list.
- Link with coastal validation:
  - Assess the accuracy/impact of IC/BC (downscaling).
  - Share scientific assessment methodology.
- Link with biogeochemistry validation:
  - Assess the accuracy/impact of the physical variables (vertical diffusion, coupling).
  - Share scientific assessment methodology.
- Link with the OSE/OSSE TT (characterise the impact of incoming data, feedbacks to relevant data providers):
  - Develop common metrics for both validation and data impact assessment.
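As one hedged example of a bias/long-term-change diagnostic, a basin-mean model-minus-observation time series can be summarised by its mean offset and linear drift. The daily sampling and the synthetic series below are assumptions made only for illustration.

```python
# Sketch: simple bias and drift diagnostic from a model-minus-observation time series.
import numpy as np

days = np.arange(365)
# Synthetic basin-mean misfit: small offset plus a slow drift plus noise.
misfit = 0.1 + 0.002 * days + 0.05 * np.random.randn(days.size)

mean_bias = misfit.mean()                       # systematic offset
trend_per_day, _ = np.polyfit(days, misfit, 1)  # linear drift (units per day)
print(f"mean bias = {mean_bias:.3f}, drift = {365 * trend_per_day:.3f} per year")
```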

21 Suggested plan for the coming months
- Use of the existing Feb-Mar-April 2008 dataset:
  - Extended scientific validation?
  - Inform which metrics should be integrated into a possible routine monitoring (daily NRT production).
- Discuss future implementation depending on the chosen strategy.
- Prepare workplan: roadmap document
  - by end of September 2009
  - review by OOFS
  - beginning of implementation in 2010
- Prepare calendar for meetings/discussions.
- Topics to be addressed in the roadmap:
  - Discuss technical aspects of NRT production (storage and exchange): possible link with ET-OOFS.
  - Focus on a sub-set of useful metrics.

