1
© Crown copyright Met Office. IQuOD Workshop 2, 5th June 2014. Chris Atkinson, Simon Good and Nick Rayner. Assigning Uncertainties to Individual Observations: Experience from Building HadIOD
Presentation Overview
- What is HadIOD? (HadIOD – the Hadley Centre Integrated Ocean Database)
- The HadIOD observation error model
- Allocating uncertainties to sub-surface profile observations
- How well do we know sub-surface observation uncertainties?
- What next?
- Conclusions
What is HadIOD?
HadIOD is the Met Office Hadley Centre Integrated Ocean Database. It was created as part of the EU-FP7 project ERA-CLIM (European Reanalysis of Global Climate Observations), one aim of which was to prepare the observations for future fully coupled ocean-atmosphere reanalyses. It is a relational database of ocean temperature and salinity observations covering the period 1900-2010. The database is 'integrated' in that it brings together the surface and sub-surface components of the ocean observing system, which have traditionally been treated separately for climate monitoring purposes but are both needed for coupled reanalysis. ERA-CLIM ended in 2013, but development of HadIOD.1.0.0.0 will continue as part of the follow-on project ERA-CLIM2 (2014-2016).
What is HadIOD?
At present, HadIOD brings together surface temperature observations from ICOADS release 2.5 and sub-surface temperature and salinity profile observations from EN4. An observation is a measurement of a single geophysical variable at some time, position and depth. Currently there are ~1.2 billion observations in HadIOD.
Figure from Atkinson et al., submitted to J. Geophys. Res.
What is HadIOD?
HadIOD comprises basic observation information such as:
- platform ID, position, time, depth, platform & instrument type, observed temperature & salinity, provenance information, unique ID
Each observation also receives quality flags:
- ICOADS surface obs receive Met Office Hadley Centre basic QC flags
- EN4 sub-surface obs preserve EN4 QC flags
- a duplicate flag marks duplicates introduced by merging ICOADS and EN4
For assimilation, reanalyses want unbiased observations with an estimate of random measurement uncertainty, so observations in HadIOD are also allocated bias corrections (where available) and uncertainties.
The HadIOD observation error model
The error model used for assigning bias corrections and uncertainties to individual observations is:
observation value = true value + platform-type bias + platform-specific bias + random measurement error
This is the error model used by HadSST3, the latest Met Office Hadley Centre dataset of bias-corrected, gridded in situ SST anomalies. HadSST3 has a sophisticated treatment of observation error.
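The error model above can be sketched in code. This is a minimal illustration of the decomposition, not HadIOD's actual schema; all field and class names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ObservationError:
    """Illustrative decomposition following the error model:
    observation = truth + platform-type bias + platform-specific bias + random error.
    Names and structure are hypothetical, not HadIOD's database schema."""
    observed: float             # reported value, e.g. temperature in deg C
    type_bias: float = 0.0      # estimated platform-type bias (e.g. an XBT warm bias)
    platform_bias: float = 0.0  # estimated bias for this particular ship or float
    random_sigma: float = 0.0   # 1-sigma random measurement uncertainty

    def corrected(self) -> float:
        # Subtracting the estimated biases approximates the true value;
        # the random error is unknowable and is only characterised by random_sigma.
        return self.observed - self.type_bias - self.platform_bias
```

For example, an observation of 15.3°C with an estimated platform-type bias of +0.2°C and a platform-specific bias of -0.05°C would yield a corrected value of 15.15°C.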
The HadIOD observation error model
The platform-type bias applies at the level of a platform type, e.g. for measurements made by XBTs or ship buckets. The platform-specific bias applies at the individual platform level, e.g. a bias in the measurements from a particular ship or float. Bias corrections are (where possible) assigned to observations; these can be added to the observed value to correct for observational biases.
Figure: an example of drifter SST discrepancies (pink) relative to the satellite-based analysis OSTIA (blue) and ATSR (green). Atkinson et al., J. Geophys. Res., 2013.
Figure: global average temperature anomaly for 0-220 m. Each time series has had a different set of XBT bias adjustments applied except the black line. The grey area is an estimate of the 95% confidence range of the unadjusted data due to uncertainties caused by the limited sampling. All time series are relative to a 1971-2000 climatology based on the data with the Levitus et al. (2009) bias adjustments applied. Courtesy S. Good.
The HadIOD observation error model
The error model used for assigning bias corrections and uncertainties to individual observations is:
observation value = true value + platform-type bias + platform-specific bias + random measurement error
Bias corrections are (where possible) assigned an associated bias correction uncertainty, which represents the uncertainty in our knowledge of the true correction value. Observations are also assigned a random measurement uncertainty, which is the uncertainty due to an unknowable random error. Reanalyses do not presently make use of correlated uncertainties, so a combined (correction & random) uncertainty is provided for this user.
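The combined (correction & random) uncertainty can be formed in quadrature, assuming the error terms are independent. A minimal sketch (not HadIOD's actual code):

```python
import math

def combined_uncertainty(random_sigma, correction_sigmas):
    """Combine a random measurement uncertainty with one or more
    bias-correction uncertainties in quadrature, assuming independence.
    Illustrative only; function name and interface are hypothetical."""
    return math.sqrt(random_sigma**2 + sum(s**2 for s in correction_sigmas))
```

For instance, a 0.15°C random uncertainty combined with a 0.10°C correction uncertainty gives sqrt(0.15² + 0.10²) ≈ 0.18°C.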
The HadIOD observation error model
The error model used for assigning bias corrections and uncertainties to individual observations is:
observation value = true value + platform-type bias + platform-specific bias + random measurement error
A further source of uncertainty is the 'structural uncertainty', which arises from the choices made during dataset creation. An example would be the different XBT bias correction schemes that are available: for each XBT observation, multiple realisations of the platform-type bias correction could be assigned (not yet done in HadIOD). These could be applied in turn to explore the structural uncertainty when bias correcting XBT observations.
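Exploring structural uncertainty by applying candidate correction schemes in turn could look like the following sketch. The scheme names and offsets are invented for illustration; real XBT schemes vary with depth, year and probe type.

```python
# Hypothetical correction schemes: name -> correction to ADD to the observed
# value. Depth-independent constants here purely for illustration.
XBT_SCHEMES = {
    "scheme_A": -0.10,
    "scheme_B": -0.15,
    "scheme_C": -0.05,
}

def realisations(observed_temp):
    """Apply each candidate bias correction in turn, producing an ensemble of
    plausible corrected values whose spread reflects structural uncertainty."""
    return {name: observed_temp + corr for name, corr in XBT_SCHEMES.items()}
```

The spread of the resulting ensemble (here 0.10°C) is one simple measure of the structural uncertainty attached to the choice of correction scheme.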
Allocating uncertainties to sub-surface profile observations
For sub-surface profile observations in HadIOD, the uncertainties assigned to observations are largely based on descriptions of measurement accuracy found in WOD 2013 documentation and the recent review by Abraham et al. [2014] of global ocean temperature observations. These sources generally provide manufacturer product specification accuracies sorted by WOD probe type (e.g. CTD, MBT, profiling float) and sensor type (e.g. YSI 46016 thermistor, Seabird-41 CTD).
EN4 (the source of sub-surface observations in HadIOD) combines WOD, GTSPP, Argo GDAC and ASBO observations. To assign uncertainties in HadIOD, non-WOD 'probe-type' metadata are first coerced to a WOD probe type, and literature-derived uncertainties are then applied at (mostly) probe-type level. Literature accuracies are treated as random measurement uncertainties. One set of XBT/MBT bias corrections is also applied (an updated set of the Gouretski and Reseghetti [2010] corrections).
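The coerce-then-look-up step can be sketched as two small tables. The mappings and uncertainty values below are invented examples, not the actual HadIOD or WOD tables.

```python
# Hypothetical mapping from non-WOD source metadata to a WOD probe type.
TO_WOD_PROBE = {
    "ARGO_FLOAT": "PFL",  # e.g. Argo GDAC metadata -> WOD profiling float
    "GTSPP_XBT": "XBT",
}

# Hypothetical probe-type -> 1-sigma uncertainty table (deg C).
PROBE_UNCERTAINTY_C = {
    "XBT": 0.15,
    "CTD": 0.01,
    "PFL": 0.002,
    "MBT": 0.2,
}
DEFAULT_UNCERTAINTY_C = 0.5  # fallback for unknown probe types

def assign_uncertainty(probe_type):
    """Coerce to a WOD probe type if needed, then look up the
    literature-derived random measurement uncertainty."""
    wod_type = TO_WOD_PROBE.get(probe_type, probe_type)
    return PROBE_UNCERTAINTY_C.get(wod_type, DEFAULT_UNCERTAINTY_C)
```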
Allocating uncertainties to sub-surface profile observations
Figures: surface and sub-surface examples. From Atkinson et al., submitted to J. Geophys. Res.
How well do we know sub-surface observation uncertainties?
What do literature-sourced accuracies refer to? Is this a random measurement error uncertainty, an uncertainty due to systematic measurement errors that is correlated somehow amongst observations (i.e. a bias), or both? For example, XBTs:
- Typical literature-sourced XBT accuracy is ~0.15°C; currently HadIOD treats this as an estimate of random measurement uncertainty
- However, this seems, at least in part, to refer to uncertainties that are correlated amongst particular types of XBT (e.g. T-4, T-6, etc.)
- This uncertainty of 0.15°C needs decomposing into its constituents (due to random and systematic effects)
- When an XBT bias correction is provided, which intends to remove systematic error, an associated correction uncertainty should be given
- For XBTs, the uncertainty in any depth correction also needs quantifying
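If the quoted 0.15°C mixes independent random and systematic parts, the random part left after removing an estimate of the systematic part follows from quadrature: total² = random² + systematic². A sketch with illustrative numbers only:

```python
import math

def residual_random_sigma(total_sigma, systematic_sigma):
    """Decompose a quoted total accuracy into its random remainder after
    removing an independent systematic component (quadrature).
    Function name and the example figures are illustrative assumptions."""
    if systematic_sigma > total_sigma:
        raise ValueError("systematic part cannot exceed the total")
    return math.sqrt(total_sigma**2 - systematic_sigma**2)
```

For example, if 0.10°C of the quoted 0.15°C were correlated within an XBT type, the residual random uncertainty would be sqrt(0.15² - 0.10²) ≈ 0.11°C.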
How well do we know sub-surface observation uncertainties?
Sensor accuracies taken from the literature tend to be overly optimistic relative to assessments of measurement error uncertainty in quality-controlled data recovered from deployed instruments, for example:
- Drifting buoy (sea surface temperature): specified accuracy 0.05°C (1σ accuracy recommended by WMO for operational drifting buoys, DBCP website documentation) vs. assessed uncertainty ~0.2-0.7°C / 0.25°C (Kennedy et al. [2013], Table 2, review of in situ SST uncertainty / WMO guide to typical drifting buoy uncertainty, DBCP website documentation)
- GTMBA moored buoy (sea surface temperature): specified accuracy 0.02°C (next-generation ATLAS mooring sensor specification, PMEL TAO website) vs. assessed uncertainty 0.12°C (Kennedy et al. [2011] assessment of tropical mooring measurement error uncertainty using AATSR)
- Animal-mounted CTD (sub-surface temperature): specified accuracy 0.005°C (southern elephant seal Sea Mammal Research Unit CTD-SRDL, WOD13 documentation) vs. assessed uncertainty 0.05°C (Roquet et al. [2013] estimated accuracy of post-processed CTD-SRDL (built after 2007) seal measurements in the MEOP-CTD database)
- Argo CTD (salinity): specified accuracy 0.005 psu (Seabird-41 CTD for ALACE float specifications, WOD13 documentation) vs. assessed uncertainty ~0.01 psu? (Thadathil et al. [2012] suggest biases up to ~0.02 psu in delayed-mode Argo vs. CTD)
What next?
How can we refine our uncertainty estimates?
- By further collating community expertise and knowledge
- Difficult to do unilaterally!
Figure: annual drift of SALD (representing salinity drift in float CTDs) for the four selected floats, with linear fits. Data points are plotted only for the years that have CTD matchups. From Thadathil et al. [2012], J. Atmos. Oceanic Technol.
What next?
How can we refine our uncertainty estimates?
- By considering finer metadata subdivisions (e.g. Argo sensor type)
- By considering less prevalent observation types if important at some time or place (e.g. bottle data, profiling drifting buoys)
Figure from Atkinson et al., submitted to J. Geophys. Res. Excerpt from the WOD13 instrument type code table: http://data.nodc.noaa.gov/woa/WOD13/CODES/TXT/v_5_instrument.txt
What next?
How can we refine our uncertainty estimates?
- By inter-comparing different observation types (e.g. XBT ↔ CTD, surface ↔ near-surface profile obs, satellite ↔ near-surface profile obs)
- By using feedback information from (re)analysis?
Figure: distribution of Argo float and XBT temperatures (shallowest ob in depth range 4-6 m) minus near-coincident drifting buoy temperatures, 2000-2009.
Figure: global temperature anomaly time series relative to the 2001-2010 average, calculated for the sea surface (red) and the near-surface (0-20 m) layer (blue). Time series are obtained by area-weighted averaging of 5x5-degree box anomalies. Inset maps show the spatial distribution and number of observations/profiles for selected time periods. From Gouretski et al. [2012], Geophys. Res. Lett., 39:L19606.
What next?
Quantifying and refining uncertainties in sub-surface observations will require significant effort by the ocean observation community. Efforts should probably be prioritised based on user requirements, but who are the users and what might their needs be?
What next?
1. Ocean model data assimilation (e.g. coupled and ocean reanalyses, monthly-to-decadal forecasting) → the ERA-CLIM2 project may help
Requirements? Unbiased observations with an estimate of random measurement uncertainty.
Priorities?
- Multiple realisations of sub-surface data that explore the uncertainties in existing bias correction schemes (structural and perhaps non-structural), and the creation of new bias correction schemes where needed
- Refined estimates of random uncertainty, provided this is comparable in magnitude to the representativity error
What next?
2. Climate monitoring
Requirements? Multi-decadal time series of key climate variables, with bias adjustments as appropriate and carefully quantified uncertainties.
Priorities?
- Multiple realisations of sub-surface data that explore the uncertainties in existing bias correction schemes (structural and perhaps non-structural), and the creation of new bias correction schemes where needed
- Decomposed and refined estimates of uncertainty terms, including correlated uncertainties
What next?
2. Climate monitoring
Figure: global average temperature anomaly for 0-220 m, each time series with a different set of XBT bias adjustments applied (as shown earlier). Courtesy S. Good.
Figure: time series of estimated uncertainties arising from various sources in the global annual sea surface temperature area average (from Figure 8 of Kennedy [2013], Rev. Geophys.).
What next?
3. Satellite validation
Requirements? High-quality near-surface temperature measurements suitable as reference data for validating satellite SST retrievals and analyses.
Priorities? Observations made in the near-surface ocean (< 10 m) with small, well-characterised uncertainties and significant coverage in time and space over the satellite era, e.g. Argo, CTD.
Figure: positions of Argo array floats that have delivered data in the last 30 days (taken from http://www.argo.ucsd.edu/index.html, 27/05/14).
Summary
- As part of creating the Met Office Hadley Centre Integrated Ocean Database (HadIOD), and with a reanalysis motivation in mind, we have attempted to allocate bias adjustments and uncertainties to sub-surface profile observations. This was done largely at platform-type level.
- In HadIOD, uncertainty is decomposed into random measurement uncertainty and bias correction uncertainties. For sub-surface profile observations, our knowledge of these uncertainties is perhaps more limited than for surface-only observations.
- Structural uncertainties can be explored by attaching multiple realisations of bias corrections and uncertainties to the observations.
- Refining our understanding of uncertainties will likely require effort from across the observations community, and should probably be driven by understanding user needs.
chris.atkinson@metoffice.gov.uk
Assigning Uncertainties to Individual Observations: Experience from Building HadIOD