IGST XIII – Washington, 2/06/2008
Intercomparisons Working Group activities
Status of the intercomparison exercise
Some examples of diagnostics based on Class 1/2 metrics
Prepared by F. Hernandez, K. A. Lisæter, L. Bertino, F. Davidson, M. Kamachi, G. Brassington, P. Oke, A. Schiller, C. Maes, J. Cummings, E. Chassignet, H. Hurlburt, P. Hacker, J. Siddorn, M. Martin, S. Dobricic, C. Regnier, L. Crosnier, N. Verbrugge, M. Drévillon, J-M Lellouche
Status of the intercomparison exercise
Methodology decided:
– Compare operational/dedicated hindcasts over the February-March-April period.
– Consistency and quality assessment (not performance).
– Intercomparison based on Class 1 and Class 2 metrics and on reference data.
– Files shared via OPeNDAP/FTP; assessment performed by different teams on dedicated ocean basins.
Preliminary work performed:
– Intercomparison plan endorsed.
– Technical implementation documents (metrics definitions) written and distributed.
The validation "philosophy"
Basic principles, defined for ocean hindcasts and forecasts (Le Provost 2002, MERSEA Strand 1):
– Consistency: verifying that the system outputs are consistent with the current knowledge of the ocean circulation and with climatologies.
– Quality (or accuracy of the hindcast): quantifying the differences between the system's "best results" (analysis) and the sea truth as estimated from observations, preferably using independent (non-assimilated) observations.
– Performance (or accuracy of the forecast): quantifying the short-term forecast capacity of each system, i.e. answering the questions "do we perform better than persistence? better than climatology?"
A complementary principle, to verify the interest for the customer (Pinardi and Tonani, 2005, MFS):
– Benefit: end-user assessment of the quality level that has to be reached before the product is useful for an application.
Metrics definition (MERSEA heritage)
Class 1: regular grid at a few depths, daily averaged
– Comparison of the 2D model surface fields with observed SST and SLA, and with SSM/I ice concentration and drift for the Arctic and Baltic areas.
– Comparison of each model's (T, S) with climatological (T, S, mixed layer depth) at several depths (0 m, 100 m, 500 m, 1000 m).
Class 2: high-resolution vertical sections and moorings
– Comparison of model sections with climatology and with WOCE/CLIVAR/other/XBT hydrographic sections.
– Comparison of model SLA at tide-gauge locations, and of model (T, S, U, V) at fixed mooring locations.
Class 3: physical quantities derived from model variables
– Comparison of model volume transports with available observations (Florida cable measurements, ...).
– Assessment through integrated/derived quantities: Meridional Overturning Circulation, warm-water heat content, etc.
Class 4: assessment of forecasting capabilities
– Comparison between climatology, forecast, hindcast, analysis and observations.
– Comparison, in 15x15 degree or dedicated boxes, of each model with CORIOLIS T/S profiles, SSM/I sea-ice concentration and tide gauges; high-resolution SST? AVISO SLA?
Class 2/3 (MERSEA/GODAE global metrics): online systematic diagnostics
[Figure: routine Class 2/3 diagnostics. XBT lines (SOOP): model T vs observed XBT T comparison. Moorings (GLOSS tide gauges, TAO, PIRATA, OceanSITES): model/tide-gauge SLA time-series comparison (e.g. MFS). Sections: model vs WOCE/CLIVAR/Canadian hydrographic sections. Transport: volume transport across the Florida Strait, model/cable comparison. A sketch of the tide-gauge comparison follows.]
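To make the tide-gauge diagnostic concrete, here is a minimal Python sketch, not the working group's Fortran tools; the function name and the synthetic series are hypothetical. Both series are demeaned before comparison because the tide-gauge datum is arbitrary, so only anomalies (SLA) are comparable.

```python
import numpy as np

def compare_sla_series(model_ssh, gauge_ssh):
    """Compare a model SSH series with a tide-gauge record at one site.

    Both series are demeaned first: the tide-gauge datum is arbitrary,
    so only anomalies (SLA) are comparable. Returns (rms, correlation).
    """
    model_sla = model_ssh - np.nanmean(model_ssh)
    gauge_sla = gauge_ssh - np.nanmean(gauge_ssh)
    rms = np.sqrt(np.nanmean((model_sla - gauge_sla) ** 2))
    corr = np.corrcoef(model_sla, gauge_sla)[0, 1]
    return rms, corr

# Hypothetical daily series over the Feb-April hindcast period (89 days).
rng = np.random.default_rng(0)
t = np.arange(89.0)
gauge = 0.10 * np.sin(2 * np.pi * t / 30.0) + 0.01 * rng.standard_normal(89)
model = 0.09 * np.sin(2 * np.pi * t / 30.0) + 0.02 * rng.standard_normal(89)
print(compare_sla_series(model, gauge))
```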
Compute Class 4 statistics per geographical box, or in regular 5x5 degree boxes, and per vertical layer (0-100 m, 100-500 m, 500-5000 m?), over an elementary box patchwork (a computation sketch follows below).
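A minimal sketch of how such per-box, per-layer statistics might be accumulated, assuming model-minus-observation misfits have already been computed at the observation points; the layer bounds come from the slide, while the function name and demo data are hypothetical.

```python
import numpy as np

# Layer bounds from the slide: 0-100 m, 100-500 m, 500-5000 m.
LAYERS = [(0.0, 100.0), (100.0, 500.0), (500.0, 5000.0)]

def boxed_rms(lon, lat, depth, misfit, box=5.0):
    """RMS of model-minus-observation misfits per (box, layer) cell.

    lon, lat, depth, misfit: 1-D arrays, one entry per observation.
    Returns {(ilon, ilat, ilayer): (rms, n_obs)} for non-empty cells.
    """
    ix = np.floor((lon % 360.0) / box).astype(int)   # box column index
    iy = np.floor((lat + 90.0) / box).astype(int)    # box row index
    stats = {}
    for k, (ztop, zbot) in enumerate(LAYERS):
        in_layer = (depth >= ztop) & (depth < zbot)
        for i, j in set(zip(ix[in_layer], iy[in_layer])):
            sel = in_layer & (ix == i) & (iy == j)
            stats[(i, j, k)] = (float(np.sqrt(np.mean(misfit[sel] ** 2))),
                                int(sel.sum()))
    return stats

# Hypothetical temperature misfits from co-located profiles.
rng = np.random.default_rng(0)
n = 2000
stats = boxed_rms(rng.uniform(0, 360, n), rng.uniform(-60, 60, n),
                  rng.uniform(0, 2000, n), rng.normal(0.0, 0.5, n))
print(len(stats), "non-empty (box, layer) cells")
```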
Class 4 based on sea ice in the Barents Sea
TOPAZ sea ice vs SSM/I data: RMS of the ice-concentration error (model minus observation) over a box in the Arctic Ocean. The analysis is compared to the forecast and to persistence over a 10-day window (see the sketch below).
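A sketch of the scoring behind this diagnostic, under the assumption that forecast, persisted-analysis and observed concentration fields are available on a common grid for each lead day; all names and the demo fields are hypothetical.

```python
import numpy as np

def rms_error(field, obs):
    """Area-mean RMS of (field - obs) over the box, ignoring missing data."""
    return np.sqrt(np.nanmean((field - obs) ** 2))

def forecast_vs_persistence(forecasts, analysis_t0, obs_by_day):
    """Score forecast and persistence against observations per lead day.

    forecasts:   {lead_day: 2-D forecast ice-concentration field}
    analysis_t0: 2-D analysis at forecast start, persisted unchanged
    obs_by_day:  {lead_day: 2-D SSM/I ice-concentration field}
    """
    for lead in sorted(forecasts):
        e_fc = rms_error(forecasts[lead], obs_by_day[lead])
        e_pe = rms_error(analysis_t0, obs_by_day[lead])
        print(f"day {lead:2d}: forecast {e_fc:.3f}  persistence {e_pe:.3f}")

# Hypothetical 10-day demo on a small Barents Sea box.
rng = np.random.default_rng(1)
truth0 = rng.uniform(0.0, 1.0, (20, 20))
obs = {d: np.clip(truth0 + 0.01 * d * rng.standard_normal((20, 20)), 0, 1)
       for d in range(1, 11)}
fcs = {d: np.clip(obs[d] + 0.02 * rng.standard_normal((20, 20)), 0, 1)
       for d in range(1, 11)}
forecast_vs_persistence(fcs, truth0, obs)
```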
Status of the intercomparison exercise: definition of metrics
New description of the Class 1, Class 2 and Class 3 metrics:
– Regional areas revisited to fit recommendations.
– Complete description of moorings, sections, etc.
– NetCDF file definitions upgraded to be consistent with the COARDS/CF-1.2 conventions.
– Sea-ice variables included in the definitions.
– Half the storage capacity saved by using "compressed" NetCDF files (data written as "short" instead of "float", using "scale_factor"), as sketched below.
– Proposal of a set of reference data (availability, access).
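A sketch of the short-packing idea, assuming the netCDF4-python library; the scale_factor/add_offset values follow the standard CF packing recipe, and the file and variable names are hypothetical. Any CF-aware reader unpacks transparently via unpacked = packed * scale_factor + add_offset, so the halved storage is invisible to users.

```python
import numpy as np
from netCDF4 import Dataset  # assumes the netCDF4-python package

def packing_attrs(values, nbits=16):
    """Standard CF packing: scale_factor/add_offset for signed n-bit ints."""
    vmin, vmax = float(np.nanmin(values)), float(np.nanmax(values))
    scale = (vmax - vmin) / (2 ** nbits - 2) or 1.0
    offset = 0.5 * (vmax + vmin)
    return scale, offset

# Hypothetical Class 1 SST field on a 1-degree global grid.
sst = 273.15 + 15.0 + 10.0 * np.random.rand(180, 360).astype("f4")
scale, offset = packing_attrs(sst)

with Dataset("class1_sst.nc", "w") as nc:
    nc.createDimension("lat", 180)
    nc.createDimension("lon", 360)
    v = nc.createVariable("sst", "i2", ("lat", "lon"), fill_value=-32768)
    v.units = "K"
    v.scale_factor = scale   # unpacked = packed * scale_factor + add_offset
    v.add_offset = offset
    v[:] = sst  # netCDF4 packs the floats to shorts automatically here
```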
Status of the intercomparison exercise: definition of metrics
Class 1 definition (provided with Fortran programs), with fields archived at the standard depths 0, 30, 50, 100, 200, 400, 700, 1000, 1500, 2000, 2500 and 3000 m.
Status of the intercomparison exercise: definition of metrics
Class 1 definition (provided with Fortran programs):
– 2D fields:
The zonal and meridional wind stress (Pa) at the ocean surface.
The total net heat flux, including the relaxation term (W/m2), into the sea water.
The surface solar heat flux (W/m2) into the sea water.
The freshwater flux, including the relaxation term (kg/m2/s), into the ocean.
The mixed layer depth (MLD, m). Two MLD diagnostics are provided, following [de Boyer Montégut et al., 2004] and [D'Ortenzio et al., 2005]: a temperature criterion MLD(θ), with a temperature difference of ΔT = 0.2°C from the ocean surface, and a surface potential density criterion MLD(ρ), with a 0.03 kg/m3 difference in surface potential density. A computation sketch follows after this list.
The sea surface height (SSH, m).
– 3D fields:
The potential temperature (K) and salinity (psu).
The zonal and meridional velocity fields (m/s).
The vertical eddy diffusivity (kz, m2/s): if compressed, stored as LOG10 first!
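A minimal sketch of the two MLD criteria, reading them as a threshold on the departure from a surface reference value, applied to temperature (0.2°C) or potential density (0.03 kg/m3); the linear-interpolation choice and the demo profile are illustrative assumptions.

```python
import numpy as np

def mixed_layer_depth(z, profile, threshold):
    """Depth where |profile - profile[0]| first exceeds the threshold.

    z: depths (m, positive down, increasing); profile: temperature (degC)
    or potential density (kg/m3) on those depths. The crossing depth is
    linearly interpolated between the bracketing levels.
    """
    delta = np.abs(profile - profile[0])
    outside = np.where(delta > threshold)[0]
    if outside.size == 0:
        return float(z[-1])       # criterion never met on this profile
    k = outside[0]
    frac = (threshold - delta[k - 1]) / (delta[k] - delta[k - 1])
    return float(z[k - 1] + frac * (z[k] - z[k - 1]))

# Hypothetical profile: ~50 m isothermal layer above a thermocline.
z = np.arange(0.0, 300.0, 5.0)
temp = 20.0 - 2.0 * np.clip(z - 50.0, 0.0, None) / 50.0
print(mixed_layer_depth(z, temp, 0.2))    # MLD(theta), deltaT = 0.2 degC
# For MLD(rho), pass a potential-density profile with threshold 0.03.
```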
Status of the intercomparison exercise: definition of metrics
Class 1 definition (provided with Fortran programs):
– 2D sea-ice fields (for ARC, ACC, NAT, NPA and GLO):
Sea-ice thickness (m).
Sea-ice concentration (fraction).
Sea-ice x and y velocities (m/s).
Surface snow thickness over sea ice (m).
Sea-ice downward x and y stress (Pa).
Tendency of sea-ice thickness due to thermodynamics (m/s).
Surface downward heat flux in air (W/m2).
– Ancillary data:
The mean dynamic topography (MDT, m) used as the reference sea level in the assimilation procedure; the MDT is also called the mean sea surface height (MSSH).
Climatologies of sea surface temperature (SST, K), of surface currents (m/s), and of MLD (m).
Climatologies of the potential temperature (K) and salinity (psu) fields used as the (T, S) reference.
Status of the intercomparison exercise: definition of metrics
Class 2 moorings/sections:
– Potential temperature (K) and salinity (psu).
– Zonal and meridional velocity fields (m/s).
– Sea surface height (SSH, m).
Status of the intercomparison exercise: definition of metrics
Class 2 locations are defined on 78 vertical levels (WOA and GDEM 3.0 standard levels).
[Figure: map of Class 2 locations. Straight sections (yellow); XBT sections (brown); glider sections (purple); tide gauges (blue); other moorings (red).]
Status of the intercomparison exercise: definition of metrics
Class 3 definition (transports):
[Figure: map of transport sections. In black, sections without a specific class decomposition on the vertical; transports computed by classes of temperature (red), salinity (yellow), density (blue) and depth (green). A sketch of the class-binned computation follows.]
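A sketch of a class-binned transport computation of the kind drawn on this map: velocity normal to the section is integrated over cell areas, optionally partitioned into tracer classes (here temperature bins); the section geometry and values are hypothetical.

```python
import numpy as np

def section_transport(v_normal, dx, dz, classes=None, tracer=None):
    """Volume transport (Sv) through a vertical section.

    v_normal: velocity normal to the section (m/s), shape (nz, nx)
    dx: along-section cell widths (m), shape (nx,)
    dz: layer thicknesses (m), shape (nz,)
    classes/tracer: optional tracer bin edges and tracer field used to
    partition the transport into classes (e.g. temperature bins).
    """
    flux = v_normal * np.outer(dz, dx) / 1.0e6   # Sv contributed per cell
    if classes is None:
        return flux.sum()
    return np.array([flux[(tracer >= lo) & (tracer < hi)].sum()
                     for lo, hi in zip(classes[:-1], classes[1:])])

# Hypothetical Florida Strait-like section: 100 km wide, 800 m deep.
nz, nx = 40, 50
dx = np.full(nx, 2000.0)                         # 50 cells x 2 km
dz = np.full(nz, 20.0)                           # 40 layers x 20 m
v = np.full((nz, nx), 0.4)                       # uniform 0.4 m/s
t = np.linspace(25.0, 8.0, nz)[:, None] * np.ones((1, nx))
print(section_transport(v, dx, dz))              # total: 32 Sv here
print(section_transport(v, dx, dz, classes=[8.0, 17.0, 26.0], tracer=t))
```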
Status of the intercomparison exercise: assessment through Class 1-2-3 metrics
Consistency: monthly averaged fields compared to:
– WOA 2005, Hydrobase, CARS, MEDATLAS and Janssen climatologies.
– The de Boyer Montégut MLD climatology.
– An SST climatology.
Quality: daily fields compared to:
– In situ data (Coriolis data server).
– Dynamic topography, or SLA (AVISO products).
– SST (depending on the group).
– SSM/I sea-ice concentration and drift products.
– Surface currents (DBCP data, OSCAR, SURCOUF products).
Status of the intercomparison exercise: where are we?
Partners involved / status (status of the 3-month hindcast products; OPeNDAP or FTP address; partner's main regions of interest):
– MER: produced; address TBC; NAT, TAT, SPA, TPA, IND, ARC.
– UKM: TBC; address tbc.
– TOP: TBC; address tbc.
– BLK: produced; BlueLink OPeNDAP: TBC; IND, SPA, tbc.
– MFS: TBC; address tbc.
– MRI: produced at MRI, provided to the University of Hawaii; Univ. of Hawaii OPeNDAP: TBC; NPA.
– HYC: TBC; address tbc.
– CNF: produced, needs transition to the new NetCDF format; FTP server of CNOOFS; North West Atlantic.
Status of the intercomparison exercise: where are we?
Agenda:
– Availability of products has slipped (one month now).
– No clear view yet of the intercomparison strength of work in the different areas (i.e. how many groups plan dedicated work looking at more than their own hindcast?).
– Target: define a deadline so as to be prepared for the Symposium paper "Validation and intercomparison of analysis and forecast products" by F. Hernandez (Mercator-Ocean), G. Brassington (BoM), J. Cummings (NRL), L. Crosnier (Mercator-Ocean), F. Davidson (DFO), S. Dobricic (ICMCC), P. Hacker (Univ. of Hawaii), M. Kamachi (JMA), K. A. Lisæter (NERSC), M. Martin (UK Met Office). Availability of products: end of July? Availability of intercomparison results: mid October?
– Managing the outcomes: how do we profit from the feedback? An initiative to keep this activity going?
Assessment diagnostics
[Figure: SST maps. Panels: SST; SST minus WOA05; NOAA RTG SST; SST minus RTG.]
Assessment diagnostics
[Figure: surface currents compared to drifters; salinity; salinity minus WOA05.]
[Further assessment-diagnostics figures (six figure-only slides).]
27
IGST XIII – Washington 2/06/2008 Status of the intercomparison exercice: Where are we ? Agenda: –Shift (one month now) for availability of products –Not clear view of “intercomparison strength of work” in the different areas (ie how many groups plan a dedicated work looking at more than their own hindcast? ) –Target: define a deadline to be prepared for the Symposium Validation and intercomparison of analysis and forecast products F. Hernandez (Mercator-Ocean), G. Brassington (BoM), J. Cummings (NRL), L. Crosnier (Mercator-Ocean), F. Davidson (DFO), S. Dobricic (ICMCC), P. Hacker (Univ. of Hawaii), M. Kamachi (JMA), K. A. Lisæter (NERSC), M. Martin (UK Met Office) Availability of products (end of July ?????) Availability of intercomparison results (mid October ????) –Managing the outcomes: How do we take profit from feedbacks ? Initiative to keep on this activity?