Slide 1: Quantifying and communicating the robustness of estimates of uncertainty in climate predictions: Implications for uncertainty language in the IPCC Fifth Assessment
Myles Allen, Department of Physics, University of Oxford
Slide 2: How uncertain are our estimates of uncertainty?
Is this a real question? Shouldn’t a true estimate of uncertainty allow for our confidence in the tools used to produce it? No. Compare these two statements:
1. There is a 50/50 chance this roll of the dice will come up 4, 5 or 6 (aleatoric uncertainty).
2. Based on the reputation of this casino, there is a 50/50 chance they are using loaded dice (epistemic uncertainty).
On a “betting odds” or “degree of belief” interpretation, 50/50 means the same thing in both, so there is no need to distinguish uncertainty in outcome from uncertainty in tools… but only so long as you plan to make a single forecast (see the sketch below).
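A minimal Monte Carlo sketch of why repeated forecasts expose the difference. The loaded-dice win probabilities (0.1 or 0.9) are invented for illustration; both cases imply the same 50/50 odds on any single roll:

```python
import numpy as np

rng = np.random.default_rng(0)
n_worlds, n_rolls = 10_000, 100

# Aleatoric case: fair dice, so every roll independently shows 4-6 with p = 0.5.
hits_aleatoric = rng.binomial(n_rolls, 0.5, size=n_worlds) / n_rolls

# Epistemic case: 50/50 the dice are loaded. To keep the single-roll odds at
# 50/50, assume (illustratively) loading pushes the win probability to 0.9 or
# 0.1 with equal chance; within one world, every roll then shares the same bias.
p_world = rng.choice([0.1, 0.9], size=n_worlds)
hits_epistemic = rng.binomial(n_rolls, p_world) / n_rolls

# A single forecast cannot tell the two apart: both imply P(win) = 0.5...
print(f"mean hit rate: {hits_aleatoric.mean():.3f} vs {hits_epistemic.mean():.3f}")
# ...but over 100 forecasts the hit rate behaves very differently
# (~0.05 spread for the fair die, ~0.4 for the maybe-loaded one).
print(f"spread (std):  {hits_aleatoric.std():.3f} vs {hits_epistemic.std():.3f}")
```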
Slide 3: Communicating uncertainty in AR4, part 1: Evidence and degree of consensus
Slide 4: Communicating uncertainty in AR4, part 2: Judgement as to the correctness of a statement
Slide 5: Communicating uncertainty in AR4, part 3: Probabilistic assessment of a defined outcome
Slide 6: Some examples from the IPCC 4th Assessment (thanks to Mastrandrea and Schneider)
AR4 WG III SPM: “With current… policies and … practices, global GHG emissions will continue to grow over the next few decades (high agreement, much evidence).”
AR4 WG II SPM: “Coasts are projected to be exposed to increasing risks, including coastal erosion, due to climate change and sea-level rise (very high confidence).”
AR4 WG I SPM: “It is very likely that hot extremes, heat waves and heavy precipitation events will continue to become more frequent.”
And most ambitiously…
AR4 WG II SPM: “Approximately 20-30% of plant and animal species assessed so far are likely to be at increased risk of extinction if increases in global average temperature exceed 1.5-2.5°C (medium confidence).”
Slide 7: The problem
Likelihood assessments were “based on quantitative analysis or an elicitation of expert views.” So what is the difference between an expert judgement made with “very high confidence” and a statement that something is “very likely” based on expert views? Both mean 9-in-10 odds.
The workaround:
– WG I used likelihood language.
– WG II used confidence language.
– WG III used evidence/agreement language.
…which rather defeated the object of the exercise.
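For reference (taken from the AR4 uncertainty guidance notes, not from the slide itself), here are the two scales side by side; laid out as plain data, the collision is easy to see:

```python
# AR4 likelihood scale: calibrated probability of occurrence.
LIKELIHOOD = {
    "virtually certain":      "> 99%",
    "extremely likely":       "> 95%",
    "very likely":            "> 90%",
    "likely":                 "> 66%",
    "more likely than not":   "> 50%",
    "about as likely as not": "33% to 66%",
    "unlikely":               "< 33%",
    "very unlikely":          "< 10%",
    "extremely unlikely":     "< 5%",
    "exceptionally unlikely": "< 1%",
}

# AR4 confidence scale: chance of a statement being correct.
CONFIDENCE = {
    "very high confidence": "at least 9 in 10",
    "high confidence":      "about 8 in 10",
    "medium confidence":    "about 5 in 10",
    "low confidence":       "about 2 in 10",
    "very low confidence":  "less than 1 in 10",
}

# The overlap the slide complains about: "very likely" (> 90%) and
# "very high confidence" (9 in 10) quantify exactly the same odds.
```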
Slide 8: Do we need multiple scales?
Compare these two statements:
1. Warming over 1990-2100 is very unlikely to be <1°C.
2. Sea level rise over 1990-2100 is very unlikely to be >2m.
Whatever the numbers, we currently have less confidence in the robustness of our max-SLR statement than in our minimum-warming statement. Specifically, we are less confident that another author team, confronted with the same evidence, would have arrived at the same estimate of the odds (a toy illustration follows below). We need to distinguish robust uncertainties in decadal temperature forecasts from less robust uncertainties in centennial ice-sheet forecasts.
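One way to make that operational definition concrete: treat robustness as the spread of the odds that independent author teams would assign. A toy sketch; the team-by-team probabilities below are invented purely for illustration, not elicited from anyone:

```python
import numpy as np

# Hypothetical odds that ten similarly-qualified author teams might assign to
# each statement (invented numbers for illustration only).
p_warming_below_1C = np.array([0.04, 0.05, 0.05, 0.06, 0.05,
                               0.04, 0.06, 0.05, 0.05, 0.05])
p_slr_above_2m     = np.array([0.01, 0.03, 0.05, 0.10, 0.02,
                               0.15, 0.05, 0.20, 0.08, 0.02])

for label, p in [("P(warming < 1°C)", p_warming_below_1C),
                 ("P(SLR > 2 m)    ", p_slr_above_2m)]:
    # Both medians say "very unlikely", but the inter-team spread differs by
    # an order of magnitude: the first estimate is robust, the second is not.
    print(f"{label} median = {np.median(p):.2f}, inter-team std = {p.std():.2f}")
```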
Slide 9: A robust example: using observations to constrain global temperature forecasts
Detection and attribution – the poor man’s ensemble:
– Take a single model and small initial-condition ensembles.
– Scale the responses to different forcings up and down (including negative values) to generate a pseudo-ensemble.
– Weight by goodness-of-fit to observations, scaled by the model-data discrepancy expected from internal variability.
– Repeat with other models and look for consistency of results (models are treated as interchangeable, not independent).
Apply the best-fit scaling factors to any output of the model that is likely to scale with the past response to external forcing: an “extrapolating fractional errors” approach, including extrapolating uncertainties in fractional errors. A minimal numerical sketch follows below.
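A minimal numerical sketch of this pseudo-ensemble recipe. Everything here (response shapes, noise level, ensemble size, the raw 1.5 K forecast) is invented for illustration, and the importance-weighted Monte Carlo is a simple stand-in for the regression-based fingerprinting the slide describes; only the structure follows the recipe above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ingredients: one model's simulated responses to GHG and aerosol forcing,
# and observations = true scalings of those responses + internal variability.
n_time = 50
x_ghg = np.linspace(0.0, 1.0, n_time)     # greenhouse warming response (K)
x_aer = -np.linspace(0.0, 0.4, n_time)    # aerosol cooling response (K)
sigma = 0.1                               # internal-variability std, assumed known
y_obs = 0.9 * x_ghg + 1.2 * x_aer + rng.normal(0.0, sigma, n_time)

# Pseudo-ensemble: scale each response up and down (negative values allowed),
# then weight every member by its goodness-of-fit to the observations, judged
# against the discrepancy expected from internal variability.
betas = rng.uniform(-1.0, 3.0, size=(20_000, 2))
resid = y_obs - (betas[:, :1] * x_ghg + betas[:, 1:] * x_aer)
log_w = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
weights = np.exp(log_w - log_w.max())
weights /= weights.sum()

# Apply the constrained GHG scaling factor to the model's raw forecast warming
# ("extrapolating fractional errors"): here, a made-up 1.5 K forecast.
members = betas[:, 0] * 1.5
order = np.argsort(members)
cdf = np.cumsum(weights[order])
lo, hi = members[order][np.searchsorted(cdf, [0.05, 0.95])]
print(f"5-95% observationally constrained warming: {lo:.2f} to {hi:.2f} K")
```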
Slide 10: Schematic use of detection and attribution to constrain a climate forecast
[Figure: schematic, after Weaver and Zwiers (2000)]
Slide 11: The first observationally-constrained probabilistic climate forecast?
[Figure: global temperature response to greenhouse gases and aerosols. Solid: climate model simulation; dashed: recalibrated prediction using data to August 1996 (Allen et al, 2000)]
Slide 12: The article in question
Slide 13: There was a time when people took 14-year climate forecasts seriously
Slide 14: The first observationally-constrained probabilistic climate forecast?
[Figure: as slide 11, with the 14-year forecast horizon marked]
Slide 15: The first observationally-constrained probabilistic climate forecast?
[Figure: as slide 11, recalibrated prediction using HadISST & CRUTEM data to August 1996 (Allen et al, 2000), with forecast verification over 01/01/00 to 31/12/09]
Slide 16: Forecast anthropogenic warming relative to the 1986-96 mean (no initial-condition uncertainty)
Slide 17: Another application of detection and attribution to decadal forecasting (Lee et al, 2005)
Slide 18: And the verification (Lee et al, 2005)
Slide 19: Quantifying and communicating the robustness of estimates of uncertainty
Intuitively, we expect observationally-constrained decadal forecasts of large-scale anthropogenic warming to be relatively robust. Why?
– Estimates are limited by irreducible uncertainties, such as climate variability on the time-scales of the anthropogenic signal.
– Contrasting methods give similar results: the ranges in Lee et al (Bayesian) are similar to those in Allen et al (frequentist).
– Expert judgement plays only a second-order role.
The IPCC must also give more subjective (but honest) estimates of uncertainty on harder questions. Uncertainty language should allow for this distinction.
Slide 20: A proposal – from one who wasn’t at the meeting
Use the confidence scale to express confidence in expert judgements. Still calibrated: e.g. “very high confidence” implies 9 out of 10 similarly-qualified groups of experts would agree, so confidence = “hypothetical consensus”. There is then no need for the standard evidence/consensus language.
Use the likelihood scale to express the probability that a fact is true or an event will happen: likelihoods are informed by, not expressions of, expert judgement.
The default is that all statements are high confidence (8 out of 10 would agree), whether they are likelihoods or not: only use “very high” or “medium” confidence qualifiers to distinguish exceptional cases.
Slide 21: I’m certainly not the first one to argue for this… (Manning, 2006)
Slide 22: …but I hope to be the last.
[Figure: statements plotted against two axes, Likelihood/Probability vs Confidence/Consensus]
Slide 23: What do you think the IPCC should do? (To remind you: the proposal, repeated verbatim from slide 20.)