Assessment of Antarctic sea ice thickness in the ORA-IP


Assessment of Antarctic sea ice thickness in the ORA-IP
Contribution to the Evaluation of Ocean Syntheses action in the Polar Regions
François Massonnet
Barcelona, June 2016

Antarctic sea ice thickness at a glance
- Among all essential climate variables, Antarctic sea ice thickness (AA-SIT) is probably the one with the poorest observational coverage.
- AA-SIT results from a subtle balance between large atmospheric and oceanic heat fluxes and is almost entirely seasonal.
- The Antarctic pack is essentially divergent and subject to very powerful negative feedbacks; the ice is thin (~1 m on average).
- Recent satellite technology has allowed SIT retrievals, but large uncertainties remain due to the substantial snow load.
- Antarctic sea ice is probably more deformed, and therefore locally thicker, than previously thought (Williams et al., Nat. Geosci., 2015).
- It is important to assess to what extent current reanalyses simulate Antarctic sea ice thickness realistically. If they do, these reanalyses can help to estimate the mass balance of sea ice (and perhaps more) in the Southern Hemisphere.

ASPeCt thickness: heterogeneous and subject to a thin bias
ASPeCt: “Antarctic Sea Ice Processes and Climate”
A unique data set: 1981-2005, all sectors of Antarctica; 81 voyages + 2 helicopter flights, reaching nearly everywhere; over 23,000 individual measurements of sea ice thickness.
Level ice thickness, weighted for open water, is considered here (equal to sea ice volume divided by the area of the sampling region, including open water, excluding deformed ice).
The data are subject to large uncertainties (and error statistics are not provided):
- Representativity error: data were collected during localized expeditions.
- Systematic bias towards thin ice (ice-breaker-based measurements).
- Measurement error: “The thickness is estimated on the level parts of floes when they are broken and turned sideways along the hull of the ship” (Worby et al., JGR, 2008). In addition, the snow/ice interface is difficult to locate visually from a ship.
A careful evaluation process therefore has to be proposed.
More info on ASPeCt: Worby et al., JGR, 2008

Rebinning ASPeCt data
All 23,391 daily ASPeCt measurements (1981-2005) were first binned into 1°x1° grid boxes, for each month of each year. The mean, min, max, and number of data points for each bin were retained.
The number of daily observations per bin follows a power law: three or fewer measurements are available in about 50% of the cases, meaning that any model-observation comparison must account for possible sampling issues.
By comparison, the ORA-IP reanalyses provide the monthly-mean sea ice thickness over 1°x1° boxes.
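The rebinning step above can be sketched as follows. This is an illustrative implementation, not the authors' code: the record layout (year, month, lat, lon, thickness) and all column names are assumptions.

```python
# Sketch of binning point measurements into 1°x1° monthly grid boxes,
# keeping the mean, min, max and count of thickness per bin.
# Column names are hypothetical.
import numpy as np
import pandas as pd

def rebin_aspect(df: pd.DataFrame) -> pd.DataFrame:
    """Bin daily measurements into 1°x1° boxes per (year, month)."""
    out = df.copy()
    # 1°x1° bin index: floor of latitude/longitude in degrees
    out["lat_bin"] = np.floor(out["lat"]).astype(int)
    out["lon_bin"] = np.floor(out["lon"]).astype(int)
    grouped = out.groupby(["year", "month", "lat_bin", "lon_bin"])["thickness"]
    return grouped.agg(["mean", "min", "max", "count"]).reset_index()

# Minimal example: three measurements fall in one box, one in another
obs = pd.DataFrame({
    "year":      [1995, 1995, 1995, 1995],
    "month":     [7, 7, 7, 7],
    "lat":       [-65.2, -65.7, -65.4, -70.1],
    "lon":       [10.3, 10.8, 10.1, 30.5],
    "thickness": [0.8, 1.2, 1.0, 0.5],
})
binned = rebin_aspect(obs)  # two bins; first has count 3, mean 1.0
```

The retained per-bin min, max and count are exactly what the compatibility test described later needs.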

Are the ORA-IP compatible with the ASPeCt data?
For each grid box and each month, we want to verify whether the collected ASPeCt samples are statistically compatible with the mean provided by a given ORA-IP reanalysis.
When only one, two or three observations are available (~50% of the cases), no evaluation is conducted: the sample size is just too small, and nothing robust can be concluded since nothing is known about the variance of the SIT distribution.
When four or more observations are available, the ORA-IP is deemed compatible if its mean value lies within the range of the ASPeCt data. Assuming a symmetric PDF, the probability of a Type-I error is 1/2^(4-1) = 12.5%.
Note: the test has limitations! [Figure: PDF of true thickness with the ORA-IP mean value and five ASPeCt measurements, illustrating two cases in which the test is positive even though the situations differ.] Or, stated differently: a negative test indicates a mismatch, but a positive test does not necessarily indicate a match.
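The 12.5% figure follows from the symmetry assumption: for a distribution symmetric about its mean, the mean equals the median, so the true mean falls outside the sample range only if all n samples land on the same side of it. A short derivation:

```latex
% Type-I error of the range test with n samples from a PDF
% symmetric about its mean (so mean = median):
\[
P(\text{Type I})
  = P(\text{all } n \text{ samples above the mean})
  + P(\text{all } n \text{ samples below the mean})
  = 2\left(\tfrac{1}{2}\right)^{n}
  = \left(\tfrac{1}{2}\right)^{n-1}.
\]
% For n = 4:  (1/2)^3 = 1/8 = 12.5%.
\]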

Are the ORA-IP compatible with the ASPeCt data?
For each month of each year, and each 1°x1° grid box:
- If three or fewer observations are available during that month and at that location, no assessment is conducted.
- Otherwise, « 1 » is assigned to that grid box for that month if the reanalyzed SIT is within the ASPeCt range (« 0 » otherwise).
- The numbers of « succeed » and « fail » are accumulated.
[Figure (example, July 1995): for each 1°x1° box, the reanalysis value either succeeds (within the ASPeCt range), fails (outside the range), or is undefined (not enough ASPeCt data).]
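The per-box decision rule and the resulting score can be sketched in a few lines. This is a minimal illustration of the procedure described above, with invented thickness values; function names and inputs are assumptions.

```python
# Per-box compatibility test: each grid box/month supplies the ASPeCt
# min, max and count, plus the reanalyzed monthly-mean SIT.
def assess_box(aspect_min, aspect_max, aspect_count, reanalysis_sit):
    """Return 1 (success), 0 (fail) or None (not enough observations)."""
    if aspect_count < 4:  # one, two or three obs: no assessment
        return None
    return 1 if aspect_min <= reanalysis_sit <= aspect_max else 0

def compatibility_index(results):
    """# successes / (# successes + # fails), ignoring undefined boxes."""
    scored = [r for r in results if r is not None]
    return sum(scored) / len(scored) if scored else float("nan")

# Toy example mirroring the slides: 3 successes, 4 fails, 1 undefined
outcomes = [
    assess_box(0.5, 1.5, 5, 1.0),  # success: within [0.5, 1.5]
    assess_box(0.5, 1.5, 6, 0.9),  # success
    assess_box(0.5, 1.5, 4, 1.4),  # success
    assess_box(0.5, 1.5, 4, 2.0),  # fail: above range
    assess_box(0.5, 1.5, 5, 0.1),  # fail: below range
    assess_box(0.5, 1.5, 7, 1.8),  # fail
    assess_box(0.5, 1.5, 4, 0.2),  # fail
    assess_box(0.5, 1.5, 2, 1.0),  # undefined: only 2 obs
]
ci = compatibility_index(outcomes)  # 3 / (3 + 4) ~ 0.43
```

Undefined boxes are excluded from the index rather than counted as failures, matching the rule that no assessment is conducted for small samples.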

Compatibility index
Compatibility index = # successes / (# successes + # fails)
Here: CI = 3 / (3 + 4) ≈ 0.43

Ensemble of reanalyses considered
Data were downloaded from the ORA-IP FTP: ftp://ftp.icdc.zmaw.de/ora_ip/ (thanks to those who organized this server and the datasets!).
The evaluation is conducted over 1993-2005, the longest common period between the reanalyses and the ASPeCt data.

Label      Institution     Compatibility index   Mean abs. error (cm)
GloSea5    UK Met Office   0.44                  13
GECCO2     U. Hamburg      0.28                  22
ECDA       GFDL            0.48
GLORYSv1   MERCATOR        0.42
GLORYSv3   MERCATOR        0.43
C-GLORS    CMCC            0.49
ECCO       NASA                                  12

Given the heterogeneity of the ASPeCt data set, it is difficult to make absolute statements. Nevertheless, none of the reanalyses simulates a SIT value lying within the ASPeCt range more than 50% of the time. The ORA-IP products cannot be considered consistent with this dataset (Type-I probability of error: 12.5%).

Thank you! For further information please contact francois.massonnet@bsc.es