Validation of MTSAT-1R SST for the TWP+ Experiment

Leon Majewski (1), Jon Mittaz (2), George Kruger (1,3), Helen Beggs (3), Andy Harris (2), Sandra Castro (4)

(1) Observations & Engineering Branch, Bureau of Meteorology, Australia
(2) Earth System Science Interdisciplinary Center, University of Maryland, USA
(3) CAWCR, Bureau of Meteorology, Australia
(4) University of Colorado, USA
MTSAT-1R
−LRIT: June 2005 – June 2006
−Cross-talk correction: April 2006
−HRIT: June 2006 – June 2010

Cloud Clearing Approach
−GBCS library (U. Edinburgh), modified by U. MD & NOAA
−Uses CRTM library
−NWP: GASP until late 2009; ACCESS late 2009 – present

Sea Surface Temperature Approach
−Regression against drifting buoys
−Option of physical retrieval

Sea Surface Temperature from MTSAT
[Processing flow diagram; components shown: NOAA, GBCS, CRTM, McIDAS (ADDE, AREA), HRIT, MODEL, ICE, GAMSSA, MDB, GHRSST SSES netCDF L2P/L3U; Subset: TWP+; Blacklist]
Sea Surface Temperature from MTSAT

Matchup database
SST based on regression requires a matchup database. MDB rules are important; observations must be:
−within 1 hour of each other
−co-located: within 4–10 km of each other
−over 10 km from cloud
−not blacklisted (Meteo-France list)

Error statistics (SSES)
−Bias and standard deviation for each quality level / proximity confidence
−In situ observations within a 30-day window prior to the satellite observation

Mapping
−mapx; nearest neighbour, to preserve the link to the observations
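The MDB rules above can be sketched as a simple filter. This is an illustrative sketch only, not the operational MDB code: the record fields, the 10 km co-location cutoff (the slide gives a 4–10 km range), and the helper names are assumptions.

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def is_valid_matchup(sat, buoy, blacklist, max_dt_h=1.0, max_dist_km=10.0,
                     min_cloud_km=10.0):
    """Apply the MDB rules: time window, co-location, cloud distance, blacklist."""
    if buoy["id"] in blacklist:                       # Meteo-France blacklist
        return False
    dt_h = abs((sat["time"] - buoy["time"]).total_seconds()) / 3600.0
    if dt_h > max_dt_h:                               # within 1 hour of each other
        return False
    if haversine_km(sat["lat"], sat["lon"], buoy["lat"], buoy["lon"]) > max_dist_km:
        return False                                  # co-located (4-10 km)
    if sat["dist_to_cloud_km"] < min_cloud_km:        # over 10 km from cloud
        return False
    return True

# Hypothetical satellite pixel and drifting-buoy report.
sat = {"time": datetime(2007, 1, 1, 12, 30), "lat": -10.0, "lon": 140.0,
       "dist_to_cloud_km": 25.0}
buoy = {"id": "52001", "time": datetime(2007, 1, 1, 12, 0),
        "lat": -10.02, "lon": 140.03}
print(is_valid_matchup(sat, buoy, blacklist=set()))   # -> True
```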
Sea Surface Temperature from MTSAT

Algorithm development using drifting buoys
−Period: 15 July 2006 – 01 June 2008
−Location: 60S – 60N, 100E – 160W
−Separate day and night regressions, each with coefficients a0–a6
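The regression step amounts to a least-squares fit of brightness-temperature predictors against buoy SST. The deck lists coefficients a0–a6 but not the predictor set, so this sketch uses a generic NLSST-like form with synthetic data; the predictors, coefficient values, and noise level are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic brightness temperatures (K); real predictors would come from JAMI channels.
t11 = rng.uniform(280.0, 300.0, n)          # ~11 micron channel
t12 = t11 - rng.uniform(0.2, 2.0, n)        # ~12 micron channel
sec_theta = 1.0 / np.cos(np.deg2rad(rng.uniform(0.0, 55.0, n)))

# "Truth" generated from known coefficients, plus buoy-like noise.
true_sst = -5.0 + 1.02 * t11 + 2.1 * (t11 - t12) + 0.8 * (sec_theta - 1.0)
buoy_sst = true_sst + rng.normal(0.0, 0.3, n)

# Design matrix: intercept + predictors (an NLSST-like form, assumed here).
X = np.column_stack([np.ones(n), t11, t11 - t12, sec_theta - 1.0])
coeffs, *_ = np.linalg.lstsq(X, buoy_sst, rcond=None)
resid = buoy_sst - X @ coeffs
print("coefficients:", np.round(coeffs, 2))
print("bias: %.3f K  st.dev: %.3f K" % (resid.mean(), resid.std()))
```

Fitting day and night matchups separately, as the deck does, just means running this fit twice on the two subsets of the MDB.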
Validation of MTSAT SST

Product validation using drifting buoys
−Period: 15 July 2006 – June 2010
−Region: 60S – 60N, 100E – 160W
−Night: Bias: , St. Dev: (N=96572)
−Day: Bias: , St. Dev: (N=56981)

Three-way comparison (AVHRR, buoy, MTSAT-1R)
−Simple statistics can hide some complex issues
−Performance is not uniform, spatially or temporally
[Difference maps by local time: Morning, west of 100E ~ −1.5 K; Afternoon, west of 100E ~ −1.5 K; Night, west of 100E ~ −0.5 K; Night, west of 100E ~ −0.2 K]
Validation of MTSAT SST

Performance in the TWP+ domain
−Night: Bias: , St. Dev: 0.468, Robust St. Dev: (local time 19–06; sun zenith > 100; sensor zenith < ; points)
−Day: Bias: , St. Dev: 0.749, Robust St. Dev: (local time 08–17; sun zenith < 80; sensor zenith < ; points)
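The slides quote both a standard and a "robust" standard deviation. The deck does not say which robust estimator was used; a common choice (assumed here) is the scaled median absolute deviation, which resists outliers such as cloud-contaminated matchups:

```python
import numpy as np

def robust_std(x):
    """Robust standard deviation via scaled median absolute deviation.
    The factor 1.4826 makes the MAD consistent with sigma for Gaussian data."""
    x = np.asarray(x, dtype=float)
    return 1.4826 * np.median(np.abs(x - np.median(x)))

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 0.45, 10000)                  # night-like residuals
outliers = rng.normal(-3.0, 1.0, 300)                 # e.g. undetected cloud
contaminated = np.concatenate([clean, outliers])

print("st.dev        : %.2f" % contaminated.std())    # inflated by the outliers
print("robust st.dev : %.2f" % robust_std(contaminated))
```

The gap between the two statistics is a quick diagnostic: when the robust value is much smaller than the ordinary standard deviation, a small population of large residuals (often residual cloud) is inflating the latter.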
Validation of MTSAT SST

Performance in the TWP+ domain
−Average difference from the analysis over a 4-month period
−East–West bias: K/deg. longitude (about 0.1 K over the TWP+ domain)
−Smaller than expected DV signals
What is causing this?
−Not GAMSSA
−Probably calibration
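The east-west bias above is a linear trend of (MTSAT − analysis) differences with longitude. The slope value did not survive extraction, so the sketch below fits synthetic differences with an assumed slope of 0.005 K/deg, which accumulates to roughly the quoted ~0.1 K across a ~20-degree TWP+ domain:

```python
import numpy as np

rng = np.random.default_rng(2)
lon = rng.uniform(100.0, 200.0, 2000)        # 100E eastward across the disk
slope_true = 0.005                           # K per degree longitude (assumed)
# Synthetic MTSAT-minus-analysis differences with a small east-west gradient.
diff = -0.1 + slope_true * (lon - 150.0) + rng.normal(0.0, 0.4, lon.size)

# Least-squares line: diff = b0 + b1 * lon
b1, b0 = np.polyfit(lon, diff, 1)
print("slope: %.4f K/deg lon" % b1)
print("change across a 20-deg TWP+ domain: %.2f K" % (b1 * 20))
```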
MTSAT-1R Calibration

Calibration issues
−Difference from CRTM, 30S – 30N
How fit for purpose is MTSAT?

Tropical Warm Pool Experiment
−Assessment of satellite sea surface temperature products
−Characterisation of observed diurnal warm-layer events
−Assessment of diurnal warming models

[Maps: MTSAT-1R, RAMSSA, and (MTSAT-1R – RAMSSA) difference]
How fit for purpose is MTSAT?

[Map: (MTSAT-1R – RAMSSA) difference over the Tropical Warm Pool Experiment domain]
Observations
−Acceptable standard deviation ( K), small cold bias ( K)
−Variability is greater than the uncertainty
−Diurnal variability, < 0.5 K, may be hidden by noise/calibration
−Diurnal warming, > 1 K, can be observed
−The area of interest is usually small compared to the spatial scales of calibration issues
−Algorithm differences (day/night) can be problematic

[Figure: Tropical Warm Pool Experiment; region 1 K colder than RAMSSA]
Summary

MTSAT-1R SST
−JAMI (MTSAT-1R) provides hourly observations of SST
−Acceptable standard deviation ( K), small cold bias ( K)
−Temporal and spatial variability due to calibration: not perfect; be aware of limitations
−MTSAT-1R SST can be used for diurnal warming studies
−Diurnal warming can be detected using analysis fields; false positives occur where the analysis is cloud affected
−Has been used to test simplified models and parameterizations for diurnal warming (see Castro et al.)

Future Developments
−Calibration: discussions this week
−New software / physical retrieval
−Still need methods to improve cloud screening
Statistics: TWP+ Region

Note that the moored and drifting buoys show a similar bias signal, but the moored buoys appear to be cold by ~0.12 K compared with the drifter statistics. I will have to double-check this against co-located drifters.
Statistics: TWP+ Region

[Table: bias and standard deviation by local hour; values not recovered]

Reasonably consistent cold bias: K. Reasonably consistent standard deviation: 0.45 K (night), 0.7 K (day).
Statistics: TWP+ Region

Use moored buoys for verification:
−Day: Bias: , St. Dev: 0.801, Robust St. Dev: (local time 08–17; sun zenith < 80; sensor zenith < ; points)
−Night: Bias: , St. Dev: 0.423, Robust St. Dev: (local time 19–06; sun zenith > 100; sensor zenith < ; points)
−Reasonably consistent standard deviation: 0.45 K (night), 0.7 K (day)

All the moored statistics look similar, so we can group them together for an hour-by-hour analysis. The bias moves around with local time. If we use the daytime algorithm at night, we get a similar level of bias (though positive), but the standard deviation increases to ~1 K; it would, however, provide more consistency between day and night. I will provide both methods in the future.
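The hour-by-hour analysis described above is just a group-by on local hour. A minimal sketch with synthetic residuals (the local-time windows match the slides; the bias and noise values are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
local_hour = rng.integers(0, 24, n)
is_day = (local_hour >= 8) & (local_hour <= 17)      # daytime window 08-17
# Synthetic (MTSAT - buoy) residuals: cold bias, noisier by day than by night.
resid = np.where(is_day,
                 rng.normal(-0.15, 0.70, n),
                 rng.normal(-0.25, 0.45, n))

# Hour-by-hour bias and standard deviation, pooling all moored buoys.
for h in range(24):
    r = resid[local_hour == h]
    if r.size:
        print("hour %02d  N=%4d  bias=%+.2f  sd=%.2f" % (h, r.size, r.mean(), r.std()))
```

With real matchups, the same loop over local hour exposes how the bias moves through the day, which the pooled day/night statistics hide.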