Introduction to ST-VAL
Gary Corlett
ST-VAL

The ST-VAL TAG objectives are to:
– Establish and promote guidelines for satellite SST validation
  – Coordinate discussions on validation techniques
  – Draft a set of common guidelines
– Objectively examine GHRSST-PP L2P data and provide meaningful SSES for users
  – Coordinate and homogenize the quality information in L2P between producers
Activities within ST-VAL

The ST-VAL group's activities are split into three areas:
1. Validation using in situ thermometry, including:
   – QC of in situ data
   – Guidelines for producing match-ups (generation of the MDB)
   – Production of SSES
   – Agreement on common guidelines for SSES
2. Validation using in situ radiometry, including:
   – Calibration traceability to standards
   – Guidelines for producing match-ups
   – Inter-comparisons
3. Validation using a reference satellite sensor, including:
   – QC of the reference sensor
   – Methodologies
Directions for remotely sensed SST

– Away from empirical, towards physics-based
– Away from coefficients, towards formal inversion
– Sophisticated cloud detection and treatment of aerosols
– Resolve (in time) sub-daily variability (the diurnal cycle)
– Decreasing uncertainties in satellite SST estimates: SD and regional bias
– Increasing scrutiny of drifting buoy and other in situ SSTs
Reference datasets for satellite SST validation

– Ship-borne radiometers: traceable to SI; SST-skin; high accuracy; very poor coverage
– Argo (3–5 m): global; acceptable sampling; very high accuracy (calibration method to be analysed)
– Drifting buoys: unknown calibration; global data; SST-depth; good (but variable) coverage
– GTMBA: better calibration; SST at 1 m; acceptable coverage (influenced by data collection)
– Coastal moorings: questionable uncertainty; difficult areas to validate
– VOS and VOSclim: generally poor coverage; very high uncertainty on a single sample
Relative errors of satellites and drifting buoy SST

– AATSR D3 SSTs are the "best" satellite SSTs available and are ±0.13 K (O'Carroll et al.)
– AVHRR split window will soon give ±0.22 K operationally at M-F (Merchant et al.)
– Drifting buoys (after QC, or using robust statistics) seem to give ±0.2 K
– "Received wisdom": buoy thermistors should give ±0.1 K "off the shelf"
  – Optimistic? A beginning-of-life value?
  – Rounding to 0.1 K
  – A point measured at depth being used for a 1 km pixel
– Contribution from geophysical variability? (see the sketch below)
– Would we see any difference if buoy calibration were improved?
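The figures above can be related by the usual assumption that independent error sources add in quadrature in satellite-minus-buoy differences. This decomposition is not stated in the presentation itself; the sketch below is illustrative, combining the uncertainties quoted here with the robust statistic from the "Which Statistics?" slide, with the geophysical term absorbing point-versus-pixel and depth/time mismatch effects.

```latex
% Assumed error model: independent contributions add in quadrature.
\sigma_{\Delta}^{2} = \sigma_{\mathrm{sat}}^{2} + \sigma_{\mathrm{buoy}}^{2} + \sigma_{\mathrm{geo}}^{2}

% Illustration with the numbers quoted in these slides:
% sigma_sat ~ 0.13 K (AATSR D3), sigma_buoy ~ 0.2 K, and a robust
% satellite-minus-buoy sigma of ~ 0.33 K would imply
\sigma_{\mathrm{geo}} \approx \sqrt{0.33^{2} - 0.13^{2} - 0.2^{2}} \approx 0.23\ \mathrm{K}
```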
Interaction with in situ providers

– Recommendations to the DBCP
  – Now deploying drifters with improved reporting
    (http://www.ghrsst.org/DBCP-GHRSST-Pilot-Project.html)
  – Now looking at calibration
– JCOMM SOT
  – Interact on future requirements
  – Action to provide validation results as a function of measurement type
– Argo
  – Request for un-pumped near-surface profiles
Issues to be addressed

– How to deal with the SST-skin to SST-depth adjustment
– The need to deal with the time difference between measurements
– An agreed mean drifter depth
– Which statistics to report
– Alternative SSES using Level 1b and retrieval uncertainties
– The evaluation of Argo as a reference source
– A long-term reference dataset for validation of L2 and higher products
Which Statistics?

AATSR V2.0 D3 retrievals versus drifting buoys (see the sketch below for one way to compute these):
– Normal: mean -0.09 K; st. dev. 0.49 K
– 3-sigma clipped: mean -0.09 K; st. dev. 0.35 K
– Robust: median -0.08 K; robust sigma 0.33 K
– Gaussian fit: mean -0.07 K; st. dev. 0.27 K
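As an illustration of how these four sets of summary statistics could be produced from a vector of match-up differences, here is a minimal Python sketch. The iterated clipping, the 1.4826 × MAD robust sigma, and the histogram fit are common conventions, not necessarily the exact choices behind the AATSR numbers above.

```python
import numpy as np
from scipy.optimize import curve_fit

def summary_statistics(diff):
    """Summarise satellite-minus-buoy SST differences (K) four ways."""
    diff = np.asarray(diff, dtype=float)

    # 1. Plain mean and standard deviation.
    normal = (diff.mean(), diff.std())

    # 2. Iterated 3-sigma clipping until no further points are rejected.
    clipped = diff.copy()
    while True:
        mu, sd = clipped.mean(), clipped.std()
        keep = np.abs(clipped - mu) <= 3.0 * sd
        if keep.all():
            break
        clipped = clipped[keep]
    three_sigma = (clipped.mean(), clipped.std())

    # 3. Robust statistics: median and 1.4826 * median absolute deviation.
    median = np.median(diff)
    robust_sigma = 1.4826 * np.median(np.abs(diff - median))
    robust = (median, robust_sigma)

    # 4. Gaussian fit to the histogram of differences.
    counts, edges = np.histogram(diff, bins=100)
    centres = 0.5 * (edges[:-1] + edges[1:])
    gauss = lambda x, a, mu, sd: a * np.exp(-0.5 * ((x - mu) / sd) ** 2)
    (a, mu, sd), _ = curve_fit(gauss, centres, counts,
                               p0=(counts.max(), median, robust_sigma))
    gaussian_fit = (mu, abs(sd))

    return {"normal": normal, "3-sigma": three_sigma,
            "robust": robust, "gaussian_fit": gaussian_fit}
```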
AATSR v2.0 Skin effect (figure slide)
To skin or not to skin…

– All IR radiometers are sensitive to the skin SST
  – Even if the retrieval provides sub-skin!
– A common agreement for the skin to sub-skin (buoy depth) adjustment is needed
  – 0.2 K; 0.17 K; 6 m s-1; Craig's model; etc.
  – Agreement to ignore DV (diurnal variability) for SSES
GDS 2.0 SSES (1)

– Interoperability of many GHRSST data sources provides optimum scientific return
– Requires a uniform method for uncertainty estimation across all data sources
– Common principles for SSES agreed at GHRSST X and revised at GHRSST XI
  – Will be maintained on the GHRSST website and NOT in the GDS 2.0 documentation
GDS 2.0 SSES (2)

SSES must:
– Comprise a bias and a standard deviation relative to an agreed reference source
  – A quality indicator following QA4EO guidelines
– Be supported by a quality-level flag
– Be defined according to the SSES Common Principles
  – Maintained on the GHRSST website
– Be documented and traceable
  – Maintained on the GHRSST website
GDS 2.0 SSES (3)

– SSTs should be the best estimate prior to SSES production
  – Responsibility of the SST producer
  – SSES are for users, NOT for producers (see the sketch below)
– Common scale for the quality level
  – Scale of 2 (worst quality) to 5 (best quality)
  – Clearly defined for each producer
  – Derivation of the quality indicator to be traceable, i.e. documented and available to users
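To make "SSES are for users" concrete, the sketch below shows how a user might read the SSES fields from an L2P file and keep only the best quality levels. The variable names (sea_surface_temperature, sses_bias, sses_standard_deviation, quality_level) follow typical GDS 2.0 L2P naming and should be checked against each producer's documentation; note that the Common Principles describe the bias as an estimate, not a correction term, so how it is applied is up to the user and the producer's guidance.

```python
import numpy as np
from netCDF4 import Dataset

def read_l2p_with_sses(path, min_quality=4):
    """Read SST and SSES fields from a GHRSST L2P file and mask by quality.

    Variable names follow typical GDS 2.0 conventions; individual products
    may differ. quality_level runs from 2 (worst) to 5 (best) for valid
    retrievals, so min_quality=4 keeps only the better levels.
    """
    with Dataset(path) as nc:
        sst = np.ma.filled(nc.variables["sea_surface_temperature"][:], np.nan)
        bias = np.ma.filled(nc.variables["sses_bias"][:], np.nan)
        sd = np.ma.filled(nc.variables["sses_standard_deviation"][:], np.nan)
        quality = np.ma.filled(nc.variables["quality_level"][:], 0)

    good = quality >= min_quality
    return {
        "sst": np.where(good, sst, np.nan),
        "sses_bias": np.where(good, bias, np.nan),
        "sses_standard_deviation": np.where(good, sd, np.nan),
    }
```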
SSES Common Principles (1)

Content:
– A bias (not a correction term) and a standard deviation reflecting the local accuracy (ideally at pixel level) of the SST estimate
– Application of SSES is consistent with the product definition (skin; sub-skin)
– At present the reference is drifting buoys
  – By convention (the only really global source)
SSES Common Principles (2)

– Hierarchical references can be used
  – Global statistics against DRIFTING BUOYS
  – Regional statistics using other reference sources:
    – Radiometers
    – GTMBA
    – L4 analyses (PMW only)
– Use of common match-up thresholds (see the sketch below)
  – Centre pixel clear; +/- 2 hrs (ideally 30 mins)
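A minimal sketch of how the common match-up thresholds might be applied when assembling an MDB; the match-up record fields (satellite_time, insitu_time, centre_pixel_clear) are hypothetical names for illustration, not part of any agreed format.

```python
from datetime import timedelta

MAX_TIME_DIFF = timedelta(hours=2)            # common-principles limit
PREFERRED_TIME_DIFF = timedelta(minutes=30)   # "ideally 30 mins"

def passes_matchup_thresholds(matchup, strict=False):
    """Apply the common match-up thresholds to one candidate match-up.

    `matchup` is assumed to carry the satellite observation time, the in situ
    observation time, and a flag saying whether the centre pixel of the
    satellite extraction box is cloud-free.
    """
    time_diff = abs(matchup["satellite_time"] - matchup["insitu_time"])
    limit = PREFERRED_TIME_DIFF if strict else MAX_TIME_DIFF
    return matchup["centre_pixel_clear"] and time_diff <= limit
```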
SSES Common Principles (3)

– Continuous fields are preferred
  – No discontinuities between quality levels
  – Discontinuities may nevertheless be inevitable
– SSES must be free from diurnal variability
  – Ideally estimated from night-time match-ups
– L2P producers that provide SST-skin should use, as a minimum, a constant offset of 0.17 K to adjust SST-skin to SST-sub-skin for SSES production (see the sketch below)
  – If sufficiently accurate wind-speed data are available, L2P producers are encouraged to allow for the wind-speed dependence of the skin to sub-skin adjustment
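The skin to sub-skin adjustment for SSES production could be implemented along the lines below. Only the constant 0.17 K minimum offset is taken from the Common Principles; the function signature and the callable used for the wind-speed-dependent case are illustrative assumptions for whatever cool-skin parameterisation a producer chooses.

```python
def skin_to_subskin(sst_skin, wind_speed=None, cool_skin_model=None):
    """Adjust SST-skin (K) to SST-sub-skin (K) for SSES production.

    Minimum requirement: add a constant 0.17 K, since the sub-skin is warmer
    than the skin by the cool-skin effect. If sufficiently accurate 10 m
    wind-speed data are available, a producer-chosen parameterisation can be
    supplied via `cool_skin_model`, a callable returning the (negative)
    skin-minus-sub-skin difference in kelvin for a given wind speed.
    """
    if wind_speed is not None and cool_skin_model is not None:
        # Wind-speed-dependent cool-skin adjustment (producer's own model).
        return sst_skin - cool_skin_model(wind_speed)
    # Minimum approach: constant 0.17 K offset.
    return sst_skin + 0.17
```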
SSES Progress

– Good progress is being made towards providing uniform SSES across all IR products
  – Based on presentations in Lima
– Some incompatibilities with the SSES Common Principles remain across most products
  – Diurnal variability; match-up limits
– Further iterations and refinements are being sought
Actions

– Pierre Le Borgne: to add Bob Evans, Nick Rayner, Dick Reynolds and Gary Wick to the MF buoy blacklist mailing list
– Bob Evans: to provide details of the extra QC steps applied to buoy data prior to ingestion into the MODIS MDB
Tasks

– The exchange of buoy blacklists between groups
– An investigation into the impact of varying QC approaches
– Separate NRT activities/requirements from offline/CDR requirements
– Include moored buoys in the QC procedures
– Assess current buoy coverage and identify areas where additional data are needed for SSES
– Consult widely with buoy providers to identify non-GTS historical buoy data, starting with the latest ICOADS release
Tasks (continued)

– Continued refinement and adoption of the SSES Common Principles
– Provision of documentation for the website and user manual
– Peer review of SSES schemes
Documentation

– Still an issue
  – Some schemes have no documentation at all
– Need to provide guidance for users
  – For the website and user manual
– SSES schemes should ideally be peer-reviewed
Breakout activities

– L2P producers: a brief summary of each SSES scheme for the user manual and website
  – Derivation, application, limitations, examples, etc.
– Future validation: what is required?
– Multi-sensor match-ups: benefits for uncertainty estimation
– The issues list and the current SSES approach
– Any others?