Sea Surface Temperature, Forward Modeling, Bias Correction (and Cloud Detection…)


1 Sea Surface Temperature, Forward Modeling, Bias Correction (and Cloud Detection…)
Andy Harris, CICS, University of Maryland
Rationale; radiative transfer methods; physical bias corrections; Bayesian cloud detection

2 Some concepts
SST is traditionally retrieved using regression-based equations trained with in situ data. This can produce biases in remote regions with sparse or no in situ data, and it yields the least confidence exactly where satellite data have the most impact. New physically based retrieval methodologies instead rely on accurate calibration and characterization.
SST is a key environmental data record, and also comparatively easy to validate. We therefore pursue physically based methodologies to:
- Improve SST retrieval capability
- Develop techniques to identify and quantify instrumental calibration and characterization errors post-launch
- Feed back results to improve forward modeling for the CRTM
"Substantial investment" means it costs a lot of money if we spend a couple of years trying to fix problems post-launch. The main concept is to develop tools that will identify and quantify various instrument (and RTM) errors post-launch.

3 Observed and RT-modeled SST biases
The fixed viewing geometry of GOES emphasizes that a single "global" linear retrieval equation is regionally sub-optimal. The bias pattern for GOES-West is similar to that predicted by radiative transfer.
This is an example of RT's ability to predict regional biases: regionally sub-optimal retrievals in geostationary SSTs (the GOES-East/West boundary). Note that the top-right plot includes both GOES-East and GOES-West, whereas the simulation is only for GOES-West, so the eastern half of the plot must be ignored when comparing. The difference with respect to TMI (microwave) increases when looking toward the warm pool (east and west), reflecting increased atmospheric moisture in conjunction with scan angle, and decreases when looking poleward, where the atmosphere is drier. This behavior is predicted by radiative transfer modeling (second figure). It is biases like these that we need to remove prior to assimilation, hence the desirability of using radiative transfer rather than direct regression against in situ SST measurements. An RTM is a good way of predicting and removing such biases on a global basis prior to assimilation.

4 Impact of restricted training data
Panels (January and July): regional bias in retrieved SST with restricted matchups; bias change due to restricted matchups; regional bias in retrieved SST with all matchups.
Based on simulations for a 37° zenith angle (deemed a suitable average), using ERA profiles (including air-sea temperature differences). Profiles were obtained on a regular lat/lon grid, alternately at 0Z and 12Z, for four days (1st, 7th, 15th, 21st), giving about 350 in total for a month. The "restricted" dataset ignores all profiles south of 25°S. NLSST retrieval algorithms were generated for both the full and restricted profile sets and then applied back to the full set. Note that there are significant biases from "truth", but with only ~350 profiles these cannot be considered fully representative. More importantly, there is a difference in regional bias due to the restriction in training data (e.g. representative of in situ coverage in the 1980s) which will gradually disappear as the buoy network becomes more extensive in the 1990s, leading to an erroneous temperature trend. The pattern of regional bias change is concentrated in the high latitudes (where early signals of climate change are anticipated) and varies seasonally; this may cause problems for "climate fingerprinting" change detection and attribution techniques.
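The restricted-vs-full training experiment can be sketched end-to-end on synthetic matchups. Everything below is invented for illustration (the SST/vapor/BT relations and all coefficients are made up); only the NLSST basis of 1, T11, (T11-T12)*SSTguess, and (T11-T12)*(sec(theta)-1) follows the standard form named in the slide.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "matchups", purely illustrative: true SST, a water-vapor proxy,
# and 11/12 um BTs whose atmospheric attenuation is mildly nonlinear in vapor.
n = 2000
lat = rng.uniform(-60.0, 60.0, n)
sst = 28.0 - 0.006 * lat**2 + rng.normal(0.0, 0.5, n)        # warm tropics, cool poles
wv = np.clip(5.0 + 0.4 * sst + rng.normal(0.0, 2.0, n), 0.5, None)
t11 = sst - 0.05 * wv - 0.0015 * wv**2 + rng.normal(0.0, 0.1, n)
t12 = sst - 0.08 * wv - 0.0025 * wv**2 + rng.normal(0.0, 0.1, n)
guess = sst + rng.normal(0.0, 1.0, n)                        # first-guess SST field
sec_term = 1.0 / np.cos(np.deg2rad(37.0)) - 1.0              # fixed 37 deg zenith angle

def design(sel):
    """NLSST design matrix for the matchup subset 'sel'."""
    d = t11[sel] - t12[sel]
    return np.column_stack([np.ones(sel.sum()), t11[sel], d * guess[sel], d * sec_term])

def nlsst_fit(sel):
    """Least-squares NLSST coefficients from the matchup subset 'sel'."""
    coef, *_ = np.linalg.lstsq(design(sel), sst[sel], rcond=None)
    return coef

everywhere = np.ones(n, dtype=bool)
coef_full = nlsst_fit(everywhere)
coef_restr = nlsst_fit(lat > -25.0)      # mimic sparse 1980s coverage: drop south of 25S

south = lat < -40.0
for name, coef in (("full", coef_full), ("restricted", coef_restr)):
    bias = (design(everywhere) @ coef - sst)[south].mean()
    print("Southern Ocean bias, %s training: %+.3f K" % (name, bias))
```

Applying both coefficient sets back to the full set, as in the slide, exposes how the regional bias at excluded latitudes depends on the training domain.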

5 RTM improvements: GOES-9 Case Study
Unusually large scatter and a warm bias at low atmospheric corrections may be due to diurnal warming. Nighttime retrievals also show a small trend versus atmospheric correction; an updated RT model removes most of the trend. Perhaps note that the GOES-9 Imager is an old sensor, so these results are quite good for something launched a decade ago. Note that the nighttime two-channel retrieval has quite a good S.D. (~0.6 K) and a bias of around -0.2 K (the same as the triple-window algorithm), so the increased daytime scatter and bias are due to diurnal warming rather than actual errors in retrieved SST. **NOTE**: there isn't a text bullet to accompany the last plot on this slide! Round off this slide by saying that the diurnal warming issue must be tackled, for a number of reasons; this leads into the next slide. Application of daytime coefficients to nighttime data gives a small negative bias, as expected.

6 Diurnal Cycle using ERA-40 fluxes
Panels: January 2001 and July 2001. This was modeled using the simplified parameterization of Gentemann et al. (2003). We need to be able to model surface effects both to avoid misinterpreting retrieval error (especially with geostationary sensors, which do not vary their geographic sampling) and to provide a conversion of radiometric skin temperature to bulk SST prior to assimilation into mixed-layer models. There is a prominent seasonal cycle; the effect must be taken into account to ensure bias-free SSTs in important climatic regions.
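The Gentemann et al. (2003) parameterization itself is not reproduced here; the sketch below only mimics the qualitative structure such a scheme has (warming driven by surface insolation, exponentially damped by wind-driven mixing), with invented placeholder constants.

```python
import numpy as np

def diurnal_warming(q_sw, wind):
    """Illustrative empirical diurnal-warming model (hypothetical coefficients,
    not the published Gentemann et al. values).
    q_sw : daytime-mean surface solar flux (W m-2)
    wind : mean 10 m wind speed (m s-1)
    Returns an estimated skin-bulk warming in K, clipped at zero."""
    a, b, c = 0.004, 0.4, 0.25   # placeholder tuning constants
    dT = (a * q_sw - b) * np.exp(-c * wind)
    return np.clip(dT, 0.0, None)

# Calm sunny conditions warm strongly; wind mixing suppresses the signal.
print(diurnal_warming(250.0, 1.0))
print(diurnal_warming(250.0, 10.0))
```

A model of this shape is what lets the ERA-40 flux fields (insolation, wind) be converted into the mapped seasonal warming patterns shown on the slide.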

7 Impact of spectral response error on RT modeling
The impact is greater at high water vapor loadings and at higher scan angles. While the top plot shows a dependence on temperature, the bottom plot is the key to identifying SRF error rather than calibration error. What follows are some examples of work already done on AVHRR. This plot shows the impact of an unaccounted-for -5 cm-1 error in the 12 µm spectral response. Note how the error (top plot, up to ~0.5 K) is amplified by the retrieval equation (middle plot). Blue circles show the result when the correct spectral response is used, for comparison. The bottom plot shows how the effect is most strongly correlated with atmospheric deficit (hence spectral response function characterization) rather than just temperature (as in the top plot); the top plot alone could be misinterpreted as the result of a calibration error.
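Why an SRF shift couples to water vapor can be sketched with a toy band-BT calculation: shift a 12 µm (~833 cm-1) response by -5 cm-1 and see how the band brightness temperature changes as vapor loading increases. The Planck constants are standard; the Gaussian response shape and the linear-in-wavenumber absorption spectrum are invented for illustration (real AVHRR filters and water-vapor spectra are not this simple).

```python
import numpy as np

C1 = 1.191042e-5   # mW / (m^2 sr cm^-4), first radiation constant
C2 = 1.4387752     # K cm, second radiation constant

nu = np.arange(760.0, 910.0, 0.1)   # wavenumber grid around 12 um (~833 cm^-1)

def planck(nu_, t):
    """Planck radiance at wavenumber nu_ (cm^-1) and temperature t (K)."""
    return C1 * nu_**3 / np.expm1(C2 * nu_ / t)

def inv_planck(nu_c, rad):
    """Brightness temperature from radiance at wavenumber nu_c."""
    return C2 * nu_c / np.log1p(C1 * nu_c**3 / rad)

def srf(center, width=20.0):
    # assumed Gaussian response, not a real AVHRR filter shape
    return np.exp(-0.5 * ((nu - center) / width) ** 2)

def toa_radiance(t_sfc, t_air, wv):
    # toy single-layer atmosphere: water-vapor absorption grows toward
    # lower wavenumber across the window (made-up coefficients)
    k = 0.02 + 0.001 * (900.0 - nu)
    tau = np.exp(-k * wv)
    return tau * planck(nu, t_sfc) + (1.0 - tau) * planck(nu, t_air)

def band_bt(center, t_sfc, t_air, wv):
    """Band-averaged BT: convolve radiance with the SRF, invert at band centre."""
    w = srf(center)
    rad = np.sum(w * toa_radiance(t_sfc, t_air, wv)) / np.sum(w)
    return inv_planck(center, rad)

for wv in (1.0, 3.0, 5.0):
    nominal = band_bt(833.0, 300.0, 280.0, wv)
    shifted = band_bt(828.0, 300.0, 280.0, wv)   # SRF centre moved -5 cm^-1
    print("wv=%.0f  BT(shifted) - BT(nominal) = %+.3f K" % (wv, shifted - nominal))
```

Because absorption in this toy spectrum strengthens toward lower wavenumbers, the shifted band sees more attenuation, and the BT discrepancy grows with vapor loading, which is the moisture-correlated signature the slide uses to separate SRF error from calibration error.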

8 Geographic impact of SRF shifts
Panels: regional biases in NLSST retrievals; NLSST retrieval difference (true - error); regional biases if the 12 µm SRF is shifted by -5 cm-1.

9 Results of perturbing the NOAA 11 & 12 µm spectral response functions
Panels: daytime split-window; nighttime split-window. These are quite complicated plots. Note firstly that there isn't much difference between the top and bottom plots (this is a good thing). Next, the aim is to show how optimal results are obtained for a certain shift in the 11 and 12 µm spectral response functions. In this case, "optimal" is determined by bias and gradient rather than S.D. A shift of approximately -5 wavenumbers for both the 11 and 12 µm response functions produces close to optimal results (bias ~-0.2, gradient of fit ~1.0). These results assume correct instrument calibration, which is not guaranteed at this stage. However, the consistency between daytime and nighttime results gives significant confidence, because both the cloud detection and the range of atmospheric corrections differ between the two validation datasets.

10 Impact of adjusted spectral response
In practice, the split-window retrieval will be replaced by a more sophisticated retrieval method. The triple-window retrieval uses the adjusted filters determined by the analysis of the 11 and 12 µm data. The plot shows SST retrieval errors for split-window and triple-window: the top row is pre-correction and the bottom row is post-adjustment of the spectral response functions. The plots show SST(buoy) - T11 vs SST(algorithm) - T11, i.e. "true" vs "predicted" atmospheric correction. I find this much more revealing than just SST(buoy) vs SST(algorithm), which is always highly correlated. Note that the triple-window retrieval is improved but not optimized: the main retrieval weight is on the 3.7 µm channel, so improvements to the 11 and 12 µm channels are not sufficient to completely correct the residual errors (including calibration, which is assumed correct in this example).

11 Bayesian Cloud Mask: Along-Track Scanning Radiometer
Panels: 11 µm BT image; manually screened "truth"; Bayesian Pclear > 0.99; RAL threshold-based mask.

12 Some results…

Mask               PP    HR    FAR   TSS
P-clear < 0.9      91.7  95.5  18.3  77.1
P-clear < 0.99     91.4  97.2  23.7  73.5
P-clear < 0.999    90.1  98.6  32.3  66.3
Standard RAL mask  88.4  96.3  32.2  64.1
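The TSS column is consistent with TSS = HR - FAR (e.g. 97.2 - 23.7 = 73.5), i.e. the Hanssen-Kuipers true skill score. A minimal sketch of these scores from a pixel-level contingency table follows; treating PP as the overall percent correctly classified is an assumption, and the counts are invented to reproduce the first row's HR and FAR only.

```python
def mask_scores(hits, misses, false_alarms, correct_rejects):
    """Clear-sky mask skill from a pixel contingency table.
    hits            = clear pixels correctly flagged clear
    misses          = clear pixels flagged cloudy
    false_alarms    = cloudy pixels flagged clear
    correct_rejects = cloudy pixels flagged cloudy"""
    total = hits + misses + false_alarms + correct_rejects
    pp = 100.0 * (hits + correct_rejects) / total                  # percent correct (assumed meaning of PP)
    hr = 100.0 * hits / (hits + misses)                            # hit rate
    far = 100.0 * false_alarms / (false_alarms + correct_rejects)  # false-alarm rate
    tss = hr - far                                                 # true (Hanssen-Kuipers) skill score
    return pp, hr, far, tss

# Invented counts chosen so HR = 95.5 and FAR = 18.3, as in the first table row.
pp, hr, far, tss = mask_scores(955, 45, 183, 817)
print("PP=%.1f HR=%.1f FAR=%.1f TSS=%.1f" % (pp, hr, far, tss))
```

With these counts TSS comes out as 95.5 - 18.3 = 77.2 (the table's 77.1 presumably reflects rounding of the unrounded HR and FAR); PP will not match the table because the true clear/cloudy proportions of the validation set are not given.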

13

14 Bayesian Cloud Detection
Increasing probability of clear sky: bias and scatter improve with increased Pclear. N.B. "50% probability" is actually 80%, and "100%" is actually ">99%". Note the improvement in scatter as pixels with lower probability of clear sky are progressively excluded from the validation analysis. There is a residual error (~-0.2 K) in GOES-12 nighttime SSTs (perhaps RTM or calibration). There is a significant decrease in coverage as Pclear → 1.
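The per-pixel Pclear that the preceding slides threshold on comes from Bayes' theorem applied to the observation's departure from the RTM-simulated clear-sky value. The sketch below shows the structure with a single observable and Gaussian class-conditional likelihoods; all distribution parameters and the prior are illustrative assumptions, not values from the ATSR or GOES schemes.

```python
import math

def p_clear(y, mu_clear, sigma_clear, mu_cloud, sigma_cloud, prior_clear=0.7):
    """Posterior probability that a pixel is clear, given one observation y
    (e.g. 11 um BT minus the RTM-simulated clear-sky BT).
    Gaussian likelihoods with hypothetical parameters."""
    def gauss(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
    num = gauss(y, mu_clear, sigma_clear) * prior_clear
    den = num + gauss(y, mu_cloud, sigma_cloud) * (1.0 - prior_clear)
    return num / den

# Departure near zero -> consistent with clear sky; a large cold departure
# (cloud contamination) -> low clear-sky probability.
print(p_clear(0.1, 0.0, 0.5, -8.0, 6.0))
print(p_clear(-6.0, 0.0, 0.5, -8.0, 6.0))
```

Thresholding this posterior (e.g. keeping only Pclear > 0.99) is what trades coverage against bias and scatter in the validation plots, and the posterior itself is the quantitative per-pixel error information wanted for assimilation.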

15 Summary
SST is a key environmental data record, and also comparatively easy to validate, making it a powerful tool for diagnosing sensor performance. We pursue physically based methodologies to provide:
- Improved SST retrieval capability (including diurnal effects)
- Techniques to identify instrumental calibration and characterization errors post-launch (including historical sensors)
- Feedback of results to improve forward modeling
A "cost-effective" use of sparse in situ data (early-to-mid 1980s) is as an independent sampling of the retrieval envelope: do observed biases match modeled ones? Bayesian cloud detection is a promising method for assigning quantitative errors to individual pixels. HES spectra mean that it is not necessary to colocate ABI FOVs with NCEP analyses (though cross-comparison is still useful). There is also GIFTS, but it is not carried on a platform with an imager comparable to the ABI. MSG is the closest current geostationary imager in terms of spectral channels, and AIRS & MODIS are a good LEO test case. Quantitative error estimates for individual pixels are what is needed for optimal assimilation of observations.

