The use of multi-model ensembles in climate projections: on uncertainty characterization and quantification. Claudia Tebaldi, Climate Central.


The use of multi-model ensembles in climate projections: on uncertainty characterization and quantification. Claudia Tebaldi, Climate Central; currently visiting the Department of Global Ecology, Carnegie Institution, Stanford. Seminar at JPL, 02/19/08. With Reto Knutti (ETH, Zurich) and Richard L. Smith (UNC, Chapel Hill).

Outline
- Multi-model ensembles and future climate change projections
- What are the main uncertainties in future projections, even given a specific scenario?
- What MMEs can characterize and what they cannot
- Main issues and challenges in analyzing MMEs
- The value of MMEs
- An example of statistical analysis of MME projections: a joint model for temperature and precipitation change
- Summary and promising future directions

This unprecedented collection of recent model output is officially known as the "WCRP CMIP3 multi-model dataset." It is meant to serve IPCC's Working Group 1, which focuses on the physical climate system -- atmosphere, land surface, ocean and sea ice -- and the choice of variables archived at the PCMDI reflects this focus. A more comprehensive set of output for a given model may be available from the modeling center that produced it. With the consent of participating climate modelling groups, the WGCM has declared the CMIP3 multi-model dataset open and free for non-commercial purposes. After registering and agreeing to the "terms of use," anyone can now obtain model output via the ESG data portal, ftp, or the OPeNDAP server. As of January 2007, over 35 terabytes of data were in the archive and over 337 terabytes of data had been downloaded by the more than 1200 registered users. Over 250 journal articles, based at least in part on the dataset, have been published or have been accepted for peer-reviewed publication.

The way each model discretizes its domain, the processes it chooses to represent explicitly, and those it needs to parameterize (and the values chosen for those parameters*) make the difference. Multi-model ensembles get at this structural uncertainty. *Perturbed-physics ensembles deal with the parameter uncertainty within a single model.

About 20 state-of-the-art GCMs. When it comes to future projections, each says something different: sometimes only in absolute value, sometimes in sign too! Agreement becomes harder to reach the finer the spatial and temporal scale.

Shading is one inter-model standard deviation. At global scales it is mostly a matter of absolute values.

Stippling means 90% or more of the models agree in the sign of change. At regional scales it may be a matter of sign!
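The stippling criterion above can be sketched in a few lines. This is a minimal illustration with a made-up toy ensemble (the 10 "models" and 3 grid boxes are hypothetical, not CMIP3 data):

```python
import numpy as np

def sign_agreement(changes):
    """Fraction of ensemble members agreeing with the sign of the
    ensemble-mean change at each grid box.
    changes: array of shape (n_models, n_gridboxes)."""
    mean_sign = np.sign(changes.mean(axis=0))          # sign of the multi-model mean
    return (np.sign(changes) == mean_sign).mean(axis=0)

# Toy ensemble: 10 hypothetical models, 3 grid boxes
rng = np.random.default_rng(0)
changes = np.array([1.0, -0.2, 0.5]) + 0.3 * rng.standard_normal((10, 3))
frac = sign_agreement(changes)
stippled = frac >= 0.9    # grid boxes where 90% or more of the models agree
```

With a strong common signal (the first grid box), agreement is essentially unanimous; where the signal is small relative to inter-model spread, the sign agreement drops and no stippling would be drawn.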

IPCC AR4 WG1 Chapters 10 and 11, global and regional future projections: most of the projected changes are shown as multi-model means, and the inter-model standard deviation/range is often used as a measure of the uncertainty. What are we looking at? Which uncertainties are characterized, and which are left out? Why means? Why standard deviations?

What kind of sample is the CMIP3 ensemble? A collection of best guesses; arguably mutually dependent (models share components); not random, but not systematic either.

MME mean is often better than a single model
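The claim can be demonstrated with synthetic data: when model errors are partly independent, averaging cancels some of them, so the multi-model mean typically has lower RMSE than individual models. Everything below (the "truth", the 15 "models", the noise levels) is illustrative, not real model output:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0, 2 * np.pi, 100))    # stand-in "observed" field
# 15 synthetic "models": truth plus an individual offset and independent noise
models = truth + rng.normal(0.0, 0.2, (15, 1)) + 0.4 * rng.standard_normal((15, 100))

def rmse(x):
    return np.sqrt(np.mean((x - truth) ** 2))

individual_rmse = np.array([rmse(m) for m in models])
mme_mean_rmse = rmse(models.mean(axis=0))
# The ensemble-mean RMSE comes out well below the average individual RMSE
```

The cancellation argument only works for the independent part of the errors; the next slides show why it breaks down when errors are shared.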

Models are not independent. Histogram of correlation coefficients of all possible pairs of model biases (simulated minus observed mean temperature).
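The diagnostic behind that histogram is straightforward to compute. A minimal sketch, using fabricated bias fields in which the "models" share a common structural error component (all numbers are hypothetical):

```python
import numpy as np
from itertools import combinations

def pairwise_bias_correlations(simulated, observed):
    """Correlation of the bias patterns for every model pair.
    simulated: (n_models, n_gridboxes); observed: (n_gridboxes,)."""
    biases = simulated - observed
    return np.array([np.corrcoef(biases[i], biases[j])[0, 1]
                     for i, j in combinations(range(len(biases)), 2)])

# Toy example: 8 models whose errors share a common component
rng = np.random.default_rng(2)
obs = rng.standard_normal(200)
common = rng.standard_normal(200)                  # shared structural error
sims = obs + 0.7 * common + 0.5 * rng.standard_normal((8, 200))
corrs = pairwise_bias_correlations(sims, obs)
# The histogram of `corrs` sits well above zero: the models are not independent
```

If the biases were independent, the histogram would center on zero; a distribution shifted toward positive correlations is the signature of shared model components.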

Models are not distributed around the truth. Less than half of the temperature error vanishes for an average of an infinite number of models of the same quality. (Figure: error of the average of N models and of the best N models, against a 1/sqrt(N) reference line.)
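The mechanism is easy to reproduce: if every model carries the same common bias on top of independent noise, the error of an N-model average levels off at that bias instead of decaying like 1/sqrt(N). A toy demonstration with made-up numbers (common bias 0.5, noise s.d. 0.8):

```python
import numpy as np

rng = np.random.default_rng(3)
truth = np.zeros(500)
common_bias = 0.5                                  # error shared by all models
# 50 synthetic models: truth + common bias + independent noise
models = truth + common_bias + 0.8 * rng.standard_normal((50, 500))

def rmse_of_average(n):
    return np.sqrt(np.mean((models[:n].mean(axis=0) - truth) ** 2))

errors = [rmse_of_average(n) for n in (1, 5, 50)]
# The error shrinks at first but levels off near the common bias (0.5),
# instead of following the 1/sqrt(N) decay expected for independent errors.
```

Only the independent part of the error averages out; the shared part is a floor that no amount of ensemble averaging removes.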

Models do not sample a wide range of uncertainty. (IPCC AR4 WG1 Ch. 10: the climate sensitivity of the CMIP3 models is the green curve.)

Ensemble means and ensemble ranges are easy to interpret, but they are not justified by the nature of the sample. Likely, the uncertainty is larger than what is represented by the ensemble of best guesses. The models in the ensemble have systematic and common errors.

Not all models are created equal. Can we use model performance in replicating observed climate to weight a model more or less? It seems like a good idea, except: How do we account for model tuning? Is there a correlation between past and future performance? What metrics of performance should we use?
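To make the idea concrete, here is one naive weighting scheme of the kind the slide is questioning: weights that decay exponentially with each model's RMSE against observations. This is a sketch, not an endorsed method; the decay scale `tau`, the data, and the three "models" are all hypothetical, and the slide's caveats (tuning, metric choice, past-vs-future skill) apply in full:

```python
import numpy as np

def skill_weights(simulated, observed, tau=1.0):
    """One simple (and debatable) choice: weights decaying exponentially
    with each model's RMSE against observations. `tau` is a free
    parameter controlling how sharply poor performers are down-weighted."""
    rmse = np.sqrt(np.mean((simulated - observed) ** 2, axis=1))
    w = np.exp(-rmse / tau)
    return w / w.sum()

# Toy setup: three models with small, medium, and large errors
rng = np.random.default_rng(4)
obs = rng.standard_normal(100)
sims = obs + np.array([[0.1], [0.5], [2.0]]) * rng.standard_normal((3, 100))
w = skill_weights(sims, obs)
# The low-error model gets the largest weight; the weights sum to one.
```

Any such scheme implicitly assumes that skill on the chosen metric is predictive of skill on future projections, which is exactly what the following slides call into question.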

Model tuning

Is performance correlated to future projections?

Is model performance on the mean state indicative of the ability to simulate (future) trends? (Scatter plots: ability to simulate the observed pattern of mean climate vs. ability to simulate the observed pattern of the warming trend; R = 0.27 and R = -0.21.)

- Models continue to improve on present-day climatology, but uncertainty in projections is not decreasing.
- We may be looking at the wrong thing, i.e. climatology provides no strong constraint on projections.
- We cannot verify our projections; we can only test models indirectly.

Even after all these caveats, there is value in multi-model ensembles. They provide the only means of exploring and quantifying structural uncertainties, besides initial-condition, boundary-condition, and parametric uncertainties. The ensemble average has better skill than single models' output, especially when considering more than one diagnostic. There is no way around GCMs when we are concerned with regional projections.

So, caveats notwithstanding, a plug for statistical modeling of climate model output. Given these rich datasets and the concurrent development of single-model sensitivity analyses (perturbed-physics experiments); given all the work being done in validating model output; given the emerging awareness in the modeling communities of the value of experimental design: it is hard to keep your hands off this problem as a statistician (especially a Bayesian statistician). Maybe the trick is not to fool ourselves by blurring the line between the real system and the modeled one.

Probabilistic projections of joint temperature and precipitation change. Main assumptions:
- The true climate signal time series (both temperature and precipitation) is a piecewise linear trend with an elbow at 2000, to account for the possibility that future trends will differ from current trends.
- Superimposed on the piecewise linear trends is bivariate Gaussian noise with a full covariance matrix, which introduces correlation between temperature and precipitation.
- Observed decadal averages provide a good estimate of the current series, their correlation, and their uncertainty.
- GCMs may have a systematic additive bias, assumed constant along the length of the simulation.
- After the bias in each GCM simulation is identified, there remains variability around the true climate signal that is model specific.
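The assumed data-generating process can be simulated directly, which is a useful sanity check before fitting. The sketch below generates one synthetic GCM series under these assumptions; every numeric value (slopes, covariance entries, biases) is illustrative, not taken from the actual analysis:

```python
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1980, 2041)
elbow = 2000

def piecewise(years, intercept, slope_pre, slope_post):
    """Piecewise linear trend with an elbow at year 2000."""
    t = years - elbow
    return intercept + np.where(t < 0, slope_pre * t, slope_post * t)

# Illustrative "true" signals for temperature (degC) and precipitation (mm/yr)
temp_signal = piecewise(years, 14.0, 0.01, 0.03)
prec_signal = piecewise(years, 1000.0, -0.5, -1.5)

# Bivariate Gaussian noise with a full covariance matrix couples T and P
cov = np.array([[0.04, -0.3],
                [-0.3, 25.0]])
noise = rng.multivariate_normal([0.0, 0.0], cov, size=len(years))

bias = np.array([0.8, -20.0])       # constant additive bias of one synthetic GCM
gcm_temp = temp_signal + bias[0] + noise[:, 0]
gcm_prec = prec_signal + bias[1] + noise[:, 1]
```

The off-diagonal covariance term is what makes this a joint model: temperature and precipitation anomalies are drawn together, not independently.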

We use uninformative priors on all the unknown parameters in the statistical model. We derive a joint posterior distribution for all of the parameters, and a posterior predictive distribution for a new model's projection.

This analysis:
- can quantify systematic biases and models' reliability;
- can provide a probabilistic "best guess" for the climate signal as a weighted mean between observations and the models' central tendency;
- can provide a predictive distribution for a "new model".
It is an informed synthesis of the data. You can accept it as such if you agree with its assumptions (the likelihood of the data, the priors on the unknown parameters). What does it tell us about the real world?
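The "weighted mean between observations and models" has a simple closed form in the normal case: each source contributes in proportion to its precision (inverse variance). A minimal sketch with hypothetical trend values and precisions (none of these numbers come from the actual analysis):

```python
import numpy as np

def precision_weighted_mean(values, precisions):
    """Posterior mean of a normal location parameter given independent
    normal sources: each source is weighted by its precision."""
    values, precisions = np.asarray(values), np.asarray(precisions)
    return np.sum(precisions * values) / np.sum(precisions)

obs_trend = 0.18                     # hypothetical observed trend
model_trends = [0.25, 0.30, 0.22]    # hypothetical model trends
lam = [50.0, 10.0, 8.0, 12.0]        # precisions: observations first, then models
best_guess = precision_weighted_mean([obs_trend] + model_trends, lam)
# best_guess lies between the observations and the models' central tendency
```

In the full Bayesian analysis the precisions (the models' "reliabilities") are themselves unknown parameters estimated from the data, rather than fixed as they are here.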

Conclusions. Combining multi-model ensembles can help quantify uncertainties in future climate projections, exploring and comparing the models' structural characteristics. The non-systematic nature of the sampling, the in-breeding among the population of models, the complexities of choosing and drawing inference from diagnostics, and the impossibility of verification present unique challenges in the formulation of a rigorous statistical approach to quantifying these uncertainties.

Conclusions. Things are about to become even more interesting, with the introduction of ever more complex models and the consequent increasing heterogeneity of these ensembles, and with the increasing demands on computing resources and the consequent trade-offs of resolution vs. scenarios vs. ensemble size. The way forward: coordinated experiments, MMEs plus PPEs, process-based evaluation translated into metrics, representation of model dependencies, and quantification of common biases. All along: transparency and two-way communication between scientists and users, stakeholders, and policy makers.