Using a perturbed physics ensemble to make probabilistic climate projections for the UK
David Sexton, Met Office Hadley Centre
Isaac Newton workshop, Exeter, September 20th 2010
© Crown copyright Met Office
UKCP09: A 5-dimensional problem – this is what users requested
–Three different emission scenarios
–Seven different timeframes
–25km grid, 16 admin regions, 23 river-basins and 9 marine regions
–Variables and months
–Uncertainty, including information from models other than HadCM3
Why we cannot be certain…
Internal variability (initial condition uncertainty)
Modelling uncertainty
–Parametric uncertainty (land/atmosphere and ocean perturbed physics ensembles)
–Structural uncertainty (multimodel ensembles)
–Systematic errors common to all current climate models
Forcing uncertainty
–Different emission scenarios
–Carbon cycle (perturbed physics ensembles)
–Aerosol forcing (perturbed physics ensembles)
Production of UKCP09 predictions (flowchart): the equilibrium PPE, observations and other models feed an equilibrium PDF; the Simple Climate Model, tuned to 4 time-dependent Earth System PPEs (atmos, ocean, carbon, aerosol), turns this into a time-dependent PDF; the 25km regional climate model provides the 25km PDF delivered in UKCP09.
Stage 1: Uncertainty in equilibrium response – the equilibrium PDF step of the production flowchart
Perturbed physics ensemble
There are plenty of different variants of the climate model (i.e. different values for the model input parameters) that are as good as, if not better than, the standard tuned version
But their response can differ from the standard version
Cast the net wide: explore parameter space with a view to finding good-quality pockets of parameter space, and see what that implies for uncertainty
Perturbed physics ensemble 280 equilibrium runs, 31 parameters Parameters varied within ranges elicited from experts
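A minimal sketch of what "parameters varied within ranges elicited from experts" can look like in practice. The parameter names, ranges and the Latin-hypercube-style design below are illustrative assumptions only, not the actual UKCP09 experimental design.

```python
import numpy as np

# Hypothetical parameter names and elicited ranges, for illustration only;
# the real ensemble perturbed 31 parameters within expert-elicited ranges.
ranges = {
    "entrainment_coeff": (0.6, 9.0),
    "ice_fall_speed": (0.5, 2.0),
    "rhcrit": (0.65, 0.9),
}

def latin_hypercube(n_runs, ranges, seed=None):
    """Draw n_runs parameter combinations, stratified within each elicited range."""
    rng = np.random.default_rng(seed)
    design = {}
    for name, (lo, hi) in ranges.items():
        # One value per stratum of [0, 1), shuffled so strata pair up randomly
        u = (np.arange(n_runs) + rng.random(n_runs)) / n_runs
        rng.shuffle(u)
        design[name] = lo + u * (hi - lo)
    return design

design = latin_hypercube(280, ranges, seed=1)  # 280 runs, as in the ensemble
```

The stratified draw simply spreads each parameter evenly across its elicited range; any space-filling design would serve the same purpose of casting the net wide.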
Probabilistic prediction of equilibrium response to double CO2
Stage 2: Time Scaling (Glen Harris and Penny Boorman) – the Simple Climate Model and time-dependent Earth System PPEs step of the production flowchart
Ensembles for other Earth System components
Use ocean, sulphur cycle and carbon cycle PPEs and multimodel ensembles to tune different configurations of the Simple Climate Model
17 members of the Atmosphere Perturbed Physics Ensemble were repeated with full coupling between the atmosphere and a dynamic ocean
Making time-dependent PDFs
–Sample a point in atmosphere parameter space
–Emulate the equilibrium response in climate sensitivity and the prediction variables, and calculate weights
–Sample ocean, aerosol and carbon cycle configurations of the Simple Climate Model
–Time-scale the prediction variables
–Adjust the weight according to how well the model variant reproduces large-scale temperature trends over the 20th century
–And repeat the sampling…
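A minimal sketch of this sampling loop. All the function arguments below (the emulator, the Simple Climate Model configuration sampler, the time-scaling step and the 20th-century trend weighting) are placeholders standing in for the UKCP09 components, not their real interfaces.

```python
import numpy as np

def sample_time_dependent_pdf(n_samples, sample_atmos_params, emulate_equilibrium,
                              sample_scm_config, time_scale, trend_weight, seed=None):
    """Monte Carlo loop producing weighted time-dependent projections.

    Each iteration follows the steps on the slide: sample a point in atmosphere
    parameter space, emulate its equilibrium response and weight, sample an
    ocean/aerosol/carbon-cycle Simple Climate Model configuration, time-scale
    the prediction variables, and adjust the weight by the fit to observed
    20th-century large-scale temperature trends.
    """
    rng = np.random.default_rng(seed)
    projections, weights = [], []
    for _ in range(n_samples):
        x = sample_atmos_params(rng)                    # point in atmosphere parameter space
        sens, equil_vars, w = emulate_equilibrium(x)    # emulated response plus likelihood weight
        scm = sample_scm_config(rng)                    # ocean, aerosol, carbon-cycle configuration
        series = time_scale(equil_vars, sens, scm)      # time-scale the prediction variables
        w *= trend_weight(series)                       # downweight poor 20th-century trends
        projections.append(series)
        weights.append(w)
    weights = np.asarray(weights)
    return np.asarray(projections), weights / weights.sum()
```

The weighted sample (projections plus normalised weights) is then what time-dependent PDFs, such as the plume on the next slide, can be estimated from.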
Plume for GCM grid box over Wales
Stage 3: Downscaling (Kate Brown) – the 25km regional climate model step of the production flowchart
Dynamical downscaling
For 11 of the 17 fully coupled ocean–atmosphere runs (the atmosphere PPE members rerun with a dynamic ocean), use 6-hourly boundary conditions to drive the 25km regional climate model
Adding information at the 25km scale
High-resolution regional climate model projections are used to account for the local effects of coastlines, mountains and other regional influences. They add skilful detail to the large-scale projections from global climate models, but also inherit errors from them.
Stage 1: Uncertainty in equilibrium response – back to the equilibrium PDF step of the production flowchart, now in more detail
Bayesian prediction – Goldstein and Rougier 2004
The aim is to construct the joint probability distribution p(X, m_h, m_f, y, o, d) of all uncertain objects in the problem: input parameters (X), historical and future model output (m_h, m_f), true climate (y_h, y_f), observations (o), model imperfections (d)
Bayes Linear assumption, so all objects are represented in terms of means and covariances
Best-input assumption (Goldstein and Rougier 2004)
The model is not perfect, so there are processes in the real system but not in our model that could alter the model response by an uncertain amount
We assume that one choice of values for our model's parameters, x*, is better than all others
True climate = model output at the best choice of parameter values x* + discrepancy d (d = 0 for a perfect model)
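Written out in the notation of the previous slide, and only as a sketch of the standard form of the framework (the additivity and the independence of the discrepancy from the model output are the usual Goldstein and Rougier assumptions, not quoted from this talk):

```latex
% Best-input assumption: true climate is the model run at its best input x*
% plus a discrepancy d, with d = 0 for a perfect model.
y_h = m_h(x^*) + d_h, \qquad y_f = m_f(x^*) + d_f
% Treating d as uncorrelated with the model output is what makes the
% discrepancy inflate the prediction PDFs (see the later discrepancy slide):
\operatorname{Var}(y_f) = \operatorname{Var}\!\big(m_f(x^*)\big) + \operatorname{Var}(d_f)
```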
Weighting different model variants
Each combination of parameter values has a prior probability of being x*, so build an emulator
Use observations to weight towards higher-quality parts of parameter space
No verification or hindcasting is possible, so we are limited to this use of the observations
(Figure: emulated distributions for 10 different samples of combinations of parameter values)
Large scale patterns of climate variations
The first of six eigenvectors of observed climate used in weighting. A small subset of climate variables is shown
Constraining parameters
Second way to constrain predictions with observations
Some uncertainty about the future is related to uncertainty about the past, and is removed when values for the real world are specified
Specifying the discrepancy
The method does not capture systematic errors that are common to all state-of-the-art climate models
The largest discrepancy impact for UK temperature changes: Scotland in March
An example of a large shift in the PDF due to the mean discrepancy, indicating a bias in HadCM3 relative to other models
Discrepancy term: snow albedo feedback in Scotland in March
Black crosses: perturbed physics ensembles, slab models; red asterisks: multimodel slab ensemble; black vertical lines: observations (different data sets)
But what if…
(Same figure: black crosses: perturbed physics ensembles, slab models; red asterisks: multimodel slab ensemble; black vertical lines: observations from different data sets)
Should this be captured by the adjustment term, or should the discrepancy be a function of x?
Making PDFs for the real world
–Start with the prior, which comes from model output
–Weighting by large-scale metrics, plus adjustment. But what about the local scale, like the control March Scotland temperature? ENSEMBLES runs show similar behaviour for May Sweden temperature, so maybe a new metric is needed to capture this behaviour
–Discrepancy – a direct link between model and real world
–Downscaling – statistical or dynamical
(Schematic: prior, emulated likelihood, adjustment and discrepancy combining into the real-world PDF)
PDFs for the real world (ii)
Quantile matching
–Piani et al (WATCH project, submitted): a transfer function is applied to model data in the baseline period so that the CDF of transfer(model data) equals the observed CDF
–Li et al (2010): correct each future percentile by removing the model bias at that percentile in the baseline period
But what if you cross a threshold?
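A minimal sketch of the Li et al (2010)-style correction described above, using empirical quantiles; the implementation is an illustration under that reading of the slide, not the authors' code.

```python
import numpy as np

def quantile_bias_correct(model_future, model_baseline, obs_baseline):
    """Adjust future model data by the baseline model-minus-obs bias
    evaluated at each value's percentile (Li et al 2010-style sketch)."""
    model_future = np.asarray(model_future, dtype=float)
    # Percentile of each future value within the future model distribution
    ranks = np.argsort(np.argsort(model_future))
    p = (ranks + 0.5) / model_future.size
    # Baseline model and observed values at those percentiles
    model_q = np.quantile(model_baseline, p)
    obs_q = np.quantile(obs_baseline, p)
    # Remove the baseline bias, percentile by percentile
    return model_future - (model_q - obs_q)
```

The threshold question on the next slides is exactly where this breaks down: if the model or the real world crosses a physical threshold between the baseline and the future, the baseline bias at a given percentile no longer describes the future bias.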
Crossing a threshold (Clark et al GRL 2010)
PDFs for the real world (iii)
–Model soil is too dry, so the tail of daily temperatures is longer than observed. This tail does not change much under climate change because the soil is still dry
–Observed soil is not too dry, but if climate change dries the soil below the threshold there will be a big increase in the upper tail of daily temperatures
–Li et al would remove a large baseline bias in the upper tail from the future model CDF (whose tail is similar to the baseline model CDF)
–Another perspective: under climate change, model and real world will both be dry, as they are in the same "soil regime", and the future bias < baseline bias
–The same applies to March Scotland temperature, though there it is the model that crosses the threshold
PDFs for the real world (iv)
Build physics into the bias correction
–Buser et al (2009) use interannual variability
Seasonal forecasting
–Clark and Deque – use analogues to calibrate the bias correction in a seasonal forecast
–Ward and Navarra – SVD of the joint vector of forecasts/observations over several forecasts to pick out which leading-order model patterns correspond to which leading-order observed patterns
Summary
IPCC Working Group II scientists use "multiple lines of evidence" to help users make adaptation decisions
UKCP09 is a transparent synthesis of climate model data from the Met Office and outside, plus observations
Statistics provides us with a nice way to frame the problem, generate an algorithm, make sure we are not missing any terms, and it gives us a language to discuss the problem
A real challenge, though, is to develop the statistics to better represent complex behaviour that we understand physically, e.g. crossing a "threshold"
Any questions?
Weighting
Dots indicate: 280 values from the perturbed physics ensemble, 12 values from the multimodel ensemble, and the observed value
Using 6 metrics reduces the risk of rewarding models for the wrong reasons, e.g. fortuitous compensation of errors
Second eigenvector of observed climate. A small subset of climate variables is shown
Third eigenvector of observed climate. A small subset of climate variables is shown
Comparing models with observations
Use the Bayesian framework of Goldstein and Rougier (2004): "posterior PDF = prior PDF × likelihood"
Use six "large-scale" metrics to define the likelihood
The skill of a model variant is the likelihood of the model data given some observations, with covariance V = obs uncertainty + emulator error + discrepancy
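In symbols, and only as a sketch of the usual Gaussian form such a weighting takes (the slide gives the ingredients but not the formula), the weight of a parameter combination x could be written:

```latex
% Posterior over parameter space: prior times likelihood of the observations.
p(x \mid o) \propto p(x)\, L(o \mid x)
% Gaussian likelihood comparing the observed metrics o with the emulated
% historical metrics \hat{m}_h(x); V collects the three error terms on the slide.
L(o \mid x) \propto \exp\!\left[ -\tfrac{1}{2}
  \big(o - \hat{m}_h(x)\big)^{\!\top} V^{-1} \big(o - \hat{m}_h(x)\big) \right],
\qquad V = V_{\mathrm{obs}} + V_{\mathrm{em}} + V_{\mathrm{disc}}
```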
Testing robustness
Projections inevitably depend on expert assumptions and choices
However, sensitivities to some key choices can be tested
Changes for Wales, 2080s
Reducing different sources of uncertainty?
Uncertainties in winter precipitation changes for the 2080s, at a 25km box in SE England
New information, methods and experimental design can reduce uncertainty, so projections will change in future and decision makers need to consider this
Discrepancy term: snow albedo feedback in Scotland in March
Black crosses: perturbed physics ensembles, slab models; red asterisks: multimodel slab ensemble; black vertical lines: observations (different data sets)
Adjusting future temperatures
Consider the surface energy balance
Comparison of methods: raw QUMP data (+), the UKCP09 method, and the new energy-balance-based method
Interannual results vs. 30-year mean results
Predictions are for 30-year means, so they should not be compared to annual climate anomalies
Summer % rainfall change: a) interannual, over SE England, from the 17 runs; b) time-dependent percentiles of the 30-year mean at DEFRA
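A tiny illustration of the distinction, on synthetic data: individual annual anomalies scatter far more widely than the 30-year means that the UKCP09 percentiles describe. The numbers and the simple running mean below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2100)
# Synthetic summer rainfall anomalies (%): a modest drying trend plus large interannual noise
annual = -0.15 * (years - years[0]) + rng.normal(0.0, 20.0, years.size)

# 30-year running mean: the kind of quantity the time-dependent percentiles describe
window = 30
running_mean = np.convolve(annual, np.ones(window) / window, mode="valid")

# The year-to-year spread is much larger than the spread among 30-year means,
# so a single year's anomaly should not be read against the 30-year-mean percentiles.
print(f"std of annual anomalies: {annual.std():.1f} %")
print(f"std of 30-year means:    {running_mean.std():.1f} %")
```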
Effect of historical discrepancy on weighting: discrepancy included vs excluded. Estimated from a sample size of 50,000
Discrepancy – a schematic of what it does Avoids contradictions from subsequent analyses when some observations have been allowed to constrain the problem too strongly.
UKCP09 aerosol forcing uncertainty
(Figure, two panels on different scales: Fig. 2.20 of the IPCC Fourth Assessment Report, total aerosol forcing in 2005; and a sample of UKCP09 A1B-GHG forcing Q (W m⁻², aerosol + solar + volcanic + ozone))
Aerosol forcing is found to be inversely proportional to climate sensitivity and this, along with perturbations to the sulphur cycle, implies a distribution of aerosol forcing uncertainty in UKCP09
Model imperfections in Bayesian prediction (Goldstein and Rougier 2004)
–Define the discrepancy as a measure of the extent to which model imperfections could affect the response. Assumes there exists a best choice of parameter values
–The discrepancy is a variance and it measures how informative the climate model is. A perfect model has zero discrepancy
–Discrepancy inflates the PDFs of the prediction variables
–Discrepancy makes it more difficult to discern a good-quality model from a poor-quality one, and so avoids over-confidence in weighting out poor parts of parameter space
–But how to specify it?
Bayesian prediction – Goldstein and Rougier
The aim is to construct the joint probability distribution p(X, m_h, m_f, y, o, d) of all uncertain objects in the problem: input parameters (X), historical and future model output (m_h, m_f), true climate (y_h, y_f), observations (o), model imperfections (d)
Probability here is a measure of how strongly a given value of climate change is supported by the evidence (model projections, observations, expert judgements informed by understanding)
Constraining predictions
Weighting is particularly effective if there exists a strong relationship between a historical climate variable and a parameter AND between that parameter and a future climate variable. So weighting can still have a different effect on different prediction variables
We use 6 eigenvectors. Weights come from the likelihood function. Comparison of weight distributions varying the dimensionality of the likelihood function, for Monte Carlo samples of 1 million points