Isaac Newton Workshop on Probabilistic Climate Prediction, University of Exeter, 20-23 Sep 2010. Professor David B. Stephenson, Exeter Climate Systems, Mathematics Research Institute.

Presentation transcript:

1 Isaac Newton Workshop on Probabilistic Climate Prediction, University of Exeter, 20-23 Sep 2010. Professor David B. Stephenson, Exeter Climate Systems, Mathematics Research Institute

2 Aims of the workshop No universal agreement exists on what constitutes a reliable and robust framework for inferring and evaluating predictions of real-world climate. This workshop aims: - to debate the strengths and weaknesses of existing frameworks for inferring and evaluating predictions of real-world climate; - to identify and formulate pressing and potentially solvable problems in the mathematics of probabilistic climate prediction. Workshop style: interactive, constructive, stimulating. Overview talks on the main issues and existing methods will set the scene; these will be followed by small thematic breakout working groups. Working groups will then reconvene to share ideas.

3 How should we use multi-model simulations...

4... to make inference about future observables? [Figure: observed series with projections and uncertainty ranges around the projections under different scenarios]

5 Forecast Assimilation [Schematic: Data Assimilation contrasted with Forecast Assimilation] Stephenson, D.B., Coelho, C.A.S., Balmaseda, M. and Doblas-Reyes, F.J. (2005): Forecast Assimilation: A unified framework for the combination of multi-model weather and climate predictions, Tellus A, 57(3).
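The core idea behind forecast assimilation is that a forecast can be combined with prior (climatological) information in the same way data assimilation combines observations with a model background. The sketch below is not the paper's full multivariate regression framework; it is a minimal univariate illustration, assuming a Gaussian climatological prior and treating the (bias-corrected) multi-model forecast as a noisy measurement of the predictand, with all numbers invented.

```python
import numpy as np

def assimilate_forecast(prior_mean, prior_var, fcst_mean, fcst_err_var):
    """Kalman-style update: combine a climatological prior N(prior_mean, prior_var)
    with a forecast treated as a noisy measurement of the truth."""
    gain = prior_var / (prior_var + fcst_err_var)          # weight given to the forecast
    post_mean = prior_mean + gain * (fcst_mean - prior_mean)
    post_var = (1.0 - gain) * prior_var                    # uncertainty always shrinks
    return post_mean, post_var

# Illustrative (assumed) numbers: climatology N(15, 4), forecast 17 with error variance 1
post_mean, post_var = assimilate_forecast(prior_mean=15.0, prior_var=4.0,
                                          fcst_mean=17.0, fcst_err_var=1.0)
```

The posterior mean lands between climatology and the forecast, weighted by their precisions, and the posterior variance is smaller than either input variance; a sharp forecast pulls the answer strongly towards itself.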

6 The Multi-Model Ensemble A “fruit bowl of opportunity” {X_1, X_2, ..., X_m} Note: not a random sample from one homogeneous population (and it does not include all possible fruit!)

7 What does reality look like? An inconvenient truth: the actual true climate Y (inferred from observations Z) could not have been drawn out of my fruit bowl! How can we infer its properties from the fruit in the fruit bowl?

8 Smoothies (multi-model means) A smoothie is a weighted average of fruits; it is not an item of real fruit! (Important information has been lost by averaging.) The choice of weights for making smoothies is non-unique. ⇒ We require modelling frameworks for obtaining samples of real fruit from the posterior distribution p(Y|X) (not smoothies E(X) or E(X|Y)).
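The distinction between a "smoothie" E(X) and samples from p(Y|X) can be made concrete with a toy model. Under a simple (assumed, not from the talk) exchangeable normal model in which each ensemble member satisfies X_i = Y + eps_i with eps_i ~ N(0, s2), and Y has a vague Gaussian climatological prior, the posterior for Y is normal and can be sampled directly. All data and variances below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([16.2, 17.1, 15.8, 16.9, 16.4])  # hypothetical ensemble values (deg C)
s2 = 1.0                 # assumed member error variance about Y
mu0, tau2 = 15.0, 9.0    # vague climatological prior: Y ~ N(mu0, tau2)

# Conjugate normal-normal update for Y given the ensemble
m = len(X)
post_var = 1.0 / (1.0 / tau2 + m / s2)
post_mean = post_var * (mu0 / tau2 + X.sum() / s2)

# Samples from p(Y|X): "real fruit", carrying the full uncertainty,
# rather than the single smoothie value X.mean()
samples = rng.normal(post_mean, np.sqrt(post_var), size=10_000)
```

The point of the samples is that downstream impact calculations can be run on each draw, propagating uncertainty, whereas the ensemble mean discards it.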

9 Should we use everything in the fruit bowl? Should we select subsets? How should we weight the fruits? “All fruit are wrong, but some are tasty” - Granny Smith

10 Homogeneous samples How should we relate Y to X? Are the {X_i} independent draws from a distribution centred on Y? Are the {X_i} second-order exchangeable with each other and with Y? How best to model the model discrepancy Y − X_i?
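Why the "independent draws centred on Y" assumption matters can be shown with a toy simulation (all numbers invented): if the models share a common discrepancy, the multi-model mean does not converge to the truth Y no matter how many models are added.

```python
import numpy as np

rng = np.random.default_rng(0)
Y = 16.0        # hypothetical true climate value
m = 10_000      # a very large "ensemble", to expose the limiting behaviour
eps = rng.normal(0.0, 1.0, size=m)

X_centred = Y + eps         # members are independent draws centred on Y
X_shared = Y + 2.0 + eps    # members share a common discrepancy of +2

err_centred = abs(X_centred.mean() - Y)  # shrinks towards 0 as m grows
err_shared = abs(X_shared.mean() - Y)    # stuck near 2 regardless of m
```

Under the truth-centred assumption the ensemble-mean error vanishes with ensemble size; with a shared discrepancy it converges to the discrepancy itself, which is why modelling Y − X_i explicitly matters.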

11 Breakout themes
Theme A: Frameworks for quantifying uncertainty (Chair: Jonty Rougier in LT1). What frameworks are available for quantifying uncertainty in climate predictions (e.g. Bayesian probability, maximum likelihood, interval probabilities, random/fuzzy sets, etc.) and what are their respective strengths and limitations?
Theme B: Calibration of climate predictions (Chair: David Stephenson in room D). What grounds do we have for believing that our predictions are well calibrated, and how should we go about calibrating outputs from climate models (e.g. bias correction of extremes)?
Theme C: Evaluation of climate predictions (Chair: Chris Ferro in room E). How should we go about evaluating probabilistic climate predictions? How can we best determine the skill and reliability of probabilistic climate predictions at various space and time scales?
Theme D: Model processes and inadequacies (Chair: Mat Collins in room F). How should we represent and quantify known limitations in the physical processes simulated by climate models, e.g. the ability to simulate long-lasting blocking events, the correct intensity of extreme storm events, etc.?
Theme E: Problem area to be decided (Chair: Richard Chandler in Margaret Room). There are many other pressing and interesting questions in probabilistic climate prediction that require statistical research.
Each theme should have a rapporteur who will report back to the main sessions and record ideas on the wiki page (user=newton pass=apple). Participants can move around themes.

12 Theme B: Calibration strategies John Ho, Mat Collins, Simon Brown, Chris Ferro
How should we infer the distribution of Y' from the distributions of Y, X and X'?
1. No calibration: assume Y' and X' have identical distributions (i.e. no model biases!), F_{Y'} = F_{X'}.
2. Bias correction: assume Y' = B(X'), where B(.) = F_Y^{-1}(F_X(.)).
3. Change factor: assume Y' = C(Y), where C(.) = F_{X'}^{-1}(F_X(.)).
4. Other: e.g. adjust parameters in parametric fits.
[Schematic: present X, Y; future X', with Y' = ? obtained via Y' = B(X') or Y' = C(Y)]
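Strategies 2 and 3 on this slide can be sketched with empirical CDFs (quantile mapping). Bias correction maps a future model value through F_X and then F_Y^{-1}; the change factor maps an observed value through F_X and then F_{X'}^{-1}. The data below are synthetic stand-ins for Y, X and X'.

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.normal(16.0, 2.0, 1000)    # synthetic observations
X = rng.normal(18.0, 2.5, 1000)    # synthetic present-day model output (warm-biased)
Xf = rng.normal(21.0, 3.0, 1000)   # synthetic future model output

def ecdf(sample, v):
    """Empirical CDF of `sample` evaluated at value(s) v."""
    return np.searchsorted(np.sort(sample), v, side="right") / len(sample)

def quantile_map(v, from_sample, to_sample):
    """Map v through F_from then the (empirical) inverse of F_to."""
    p = np.clip(ecdf(from_sample, v), 1e-6, 1 - 1e-6)  # avoid degenerate tails
    return np.quantile(to_sample, p)

Yf_bias_corrected = quantile_map(Xf, X, Y)   # Y' = B(X') = F_Y^{-1}(F_X(X'))
Yf_change_factor = quantile_map(Y, X, Xf)    # Y' = C(Y) = F_{X'}^{-1}(F_X(Y))
```

Both calibrated distributions are warmer than the observations, as the model change implies, but they generally do not coincide, which is exactly the point of the comparison on the following slides.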

13 Example: daily summer temperatures in London Daily mean air temperatures: Y = observations from the E-OBS gridded dataset (Haylock et al., 2008); X = HadRM3 standard run (25 km resolution) forced by HadCM3, SRES A1B scenario. n = 30 × 120 = 3600 days. Black line = sample mean; red line = 99th percentile.

14 Probability density functions Black line = pdf of the observed data; blue line = pdf of the present-day climate model data; red line = pdf of the future climate model data.

15 Linear calibration (constant shape) ⇒ The two approaches give different future mean temperatures!
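A linear (constant-shape) calibration adjusts only location and scale, leaving the shape of the distribution unchanged. The sketch below, on synthetic data, mirrors the slide's point: bias-correcting the future model output towards the observations, and applying the model's change factor to the observations, need not give the same future mean.

```python
import numpy as np

rng = np.random.default_rng(7)
Y = rng.normal(16.0, 2.0, 5000)    # synthetic observations
X = rng.normal(18.0, 2.5, 5000)    # synthetic present-day model output
Xf = rng.normal(21.0, 3.0, 5000)   # synthetic future model output

def linear_map(v, frm, to):
    """Shift and scale v so the mean and std of `frm` map onto those of `to`."""
    return to.mean() + to.std() * (v - frm.mean()) / frm.std()

Yf_bc = linear_map(Xf, X, Y)   # bias correction: calibrate future model output
Yf_cf = linear_map(Y, X, Xf)   # change factor: apply model change to observations
```

With these synthetic distributions the bias-corrected future mean is approximately 16 + 2 × (21 − 18)/2.5 ≈ 18.4, while the change-factor mean is approximately 21 + 3 × (16 − 18)/2.5 ≈ 18.6: same ingredients, different answers.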

16 Change in 10-summer level [Figure panels: No calibration (T_g' − T_o); Bias correction; Change factor] ⇒ Substantial differences between the different estimates!

17

18 Breakout themes (slide 11 repeated)

19 Ideas for Theme E? Theme E: Emergent problem area to be decided (Chair: Richard Chandler). There are many other pressing and interesting questions in probabilistic climate prediction that require statistical research. Some such questions include:
- What is the purpose of introducing stochastic components into climate simulators?
- How can we determine what data (e.g. model simulations, observed data, paleoclimate data) are required to inform the development of improved climate projections?
- How should we best account for shared sources of uncertainty across a multi-model ensemble?
- What formal role is there for simpler simulators?
- Do we believe the ergodic assumptions that are implicitly made in climate science?
- How best to visualise probability forecasts?
- Others???

20 Tuesday 21 September: Overview talks and small group brainstorming
09:30-10:15 “Outstanding problems in probabilistic prediction of climate” David Stephenson
10:15-11:00 “Model inadequacies and physical processes” Mat Collins
11:00-11:30 Morning coffee/tea
11:30-13:00 Small group breakout sessions on the 5 main themes
13:00-14:00 Photo and buffet lunch
14:00-14:45 “Methodologies for probabilistic uncertainty assessment” Richard Chandler
14:45-15:30 “Probabilistic methodology used for UKCIP” David Sexton
15:30-16:00 Afternoon tea/coffee
16:00-16:15 "Probabilistic use of climate catastrophe multi-models" Gero Michel
16:15-17:30 Small group breakout sessions
17:30-19:00 Drinks reception in Holland Hall bar (kindly sponsored by Willis)
19:00- Participants go for dinner at restaurants in Exeter

Wednesday 22 September: Overview talks and small group brainstorming
08:00-09:00 Breakfast in Holland Hall (for those staying there)
Talks and discussions in the Queen’s building:
09:30-10:15 “Non-probabilistic frameworks” Arthur Dempster
10:15-11:00 “Probabilistic frameworks” Jonty Rougier
11:00-11:30 Morning coffee/tea
11:30-13:00 Small group breakout sessions on the 5 main themes
13:00-14:00 Buffet lunch
14:00-15:30 Small group breakout sessions
15:30-16:00 Afternoon tea/coffee
16:00-17:30 Small group breakout sessions
17:30-19:00 Drinks reception in Holland Hall bar (kindly sponsored by Willis)
19:00- Participants go for dinner at restaurants in Exeter

Thursday 23 September: Plenary summary and departure
08:00-09:00 Breakfast in Holland Hall (for those staying there)
Summary discussions in the Queen’s building:
09:30-10:30 Summaries of breakout sessions (rapporteurs)
10:30-11:00 Final discussion and future plans
11:00-11:30 Morning coffee/tea