Climate Predictability Tool (CPT)


1 Climate Predictability Tool (CPT)
Climate Predictability Tool (CPT) is an easy-to-use Windows-based software package for making downscaled seasonal climate forecasts. It builds statistical models to forecast a variable of interest, and then provides hindcast skill results using several verification measures. Here we describe some of these measures.

2 Verification of climate forecasts
How is forecast skill or accuracy measured? Several measures are commonly used.

3 Heidke Skill Score (for deterministic categorical forecasts)
Probability forecasts for 3 tercile-based categories can be scored as if they were simple deterministic forecasts of the category having the highest probability. The Heidke score is
$$\mathrm{Heidke} = 100 \times \frac{H - E}{T - E},$$
where H = number of correct categorical forecasts, T = total number of forecasts, and E = number expected correct by chance (T/3 for terciles). Example: Suppose for OND 1997, rainfall forecasts are made for 15 stations in southern Brazil, and each forecast is defined by the tercile-based category having the highest probability. Suppose for all 15 stations "above" is forecast with the highest probability, and that observations were above normal for 12 stations and near normal for 3 stations. Then the Heidke score is 100 x (12 - 15/3) / (15 - 15/3) = 100 x 7/10 = 70. Note that the probabilities given in the forecasts did not matter, only which category had the highest probability. This is a big weakness.
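To make the computation concrete, here is a minimal Python sketch (illustrative only, not CPT code) that implements the Heidke score and reproduces the worked example above:

```python
# Heidke score = 100 * (H - E) / (T - E): H = hits, T = number of
# forecasts, E = T / n_categories = hits expected by chance.
import numpy as np

def heidke_score(forecast_cat, observed_cat, n_categories=3):
    forecast_cat = np.asarray(forecast_cat)
    observed_cat = np.asarray(observed_cat)
    T = len(forecast_cat)
    H = np.sum(forecast_cat == observed_cat)
    E = T / n_categories
    return 100.0 * (H - E) / (T - E)

# The slide's example: "above" (category 2) forecast at all 15 stations;
# observations above normal at 12 stations, near normal at 3.
fcst = [2] * 15
obs = [2] * 12 + [1] * 3
print(heidke_score(fcst, obs))  # 70.0
```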

4 Correlation: measuring the strength of the linear relationship between two variables
--For diagnosis and understanding
--For predictions of something in the future (e.g. climate from SST patterns, or health situations from current rainfall observations)

5 Correlation between two variables
Pearson product-moment correlation. Correlation is a systematic relationship between x and y: when one goes up, the other tends to go up also, or may tend to go down. It requires corresponding pairs of cases of x, y.
--"Perfect" positive correlation is +1; "perfect" negative correlation is -1; no correlation (x and y completely unrelated) is 0. Correlation can be anywhere between -1 and +1.
--A relationship between x and y may or may not be causal; if not, x and y may be under the control of some third variable.
--Correlation can be estimated visually by looking at a scatterplot of dots on an x vs. y graph.

6 Covariance and correlation between two variables
Standardize each variable by its own mean and standard deviation: $z_x = (x - \bar{x})/s_x$ and $z_y = (y - \bar{y})/s_y$, where $s_x = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})^2}$. The Pearson product-moment correlation is then defined by
$$r = \frac{1}{N}\sum_{i=1}^{N} z_{x,i}\, z_{y,i} = \frac{\mathrm{cov}(x, y)}{s_x\, s_y}.$$
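As a concrete illustration, here is a minimal Python sketch (not CPT code) computing r as the mean product of standardized anomalies, cross-checked against NumPy's built-in:

```python
# Pearson correlation as the mean product of standardized anomalies.
import numpy as np

def pearson_correlation(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    zx = (x - x.mean()) / x.std()   # standardized anomalies (ddof=0)
    zy = (y - y.mean()) / y.std()
    return np.mean(zx * zy)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.4, 3.8, 5.1])
print(pearson_correlation(x, y))   # ~0.99
print(np.corrcoef(x, y)[0, 1])     # same value, as a cross-check
```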

7 [Scatterplot of Y vs. X: a fairly tight cloud of points rising from lower left to upper right; correlation = 0.8]

8 [Scatterplot of Y vs. X: a looser cloud of points with a weaker upward tendency; correlation = 0.55]

9 [Scatterplot of Y vs. X: points clustered at low values of both variables, with a single point far away in the upper right; correlation = 0.87 (due to one outlier in upper right)]
If domination by one case is not desired, one can use the Spearman rank correlation (the correlation among ranks instead of actual values).

10 [Scatterplot of Y vs. X: nine points (o) lying exactly on a line (cor = 1) interleaved with nine points (x) showing no relationship (cor = 0)]
Correlation for all 18 points = 0.71; correlation squared = 0.5. When points having a perfect correlation are mixed with an equal number of points having no correlation, and the two sets have the same mean and variance for X and Y, the correlation is 0.71. The correlation squared (the amount of variance predicted, or accounted for) is 0.5.

11 [Scatterplot of Y vs. X: points tracing a U-shaped (curved) pattern]
Correlation = 0, but there is a strong nonlinear relationship. The Pearson correlation only detects linear relationships.

12 Simple Linear Regression
The use of linear correlation for prediction: Simple Linear Regression ("simple" implies just one predictor; with more than one, it is called Multiple Linear Regression). A regression line is determined to fit the points on the x vs. y scatterplot, so that if given a value of x, a "best prediction" can be made for y.
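A minimal Python sketch of the least-squares fit (the data values are invented for illustration; this is not CPT code):

```python
# Simple linear regression: slope b = cov(x, y) / var(x),
# intercept a = ybar - b * xbar.
import numpy as np

def fit_simple_regression(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    b = np.mean((x - x.mean()) * (y - y.mean())) / np.var(x)
    a = y.mean() - b * x.mean()
    return a, b

x = np.array([26.0, 27.5, 25.8, 28.1, 26.9])  # e.g. an SST predictor
y = np.array([310., 415., 290., 470., 380.])  # e.g. seasonal rainfall (mm)
a, b = fit_simple_regression(x, y)
print(f"best prediction for x = 27.0: {a + b * 27.0:.1f}")  # ~384
```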

13 Spearman rank correlation
Rank correlation is the Pearson correlation between the ranks of X vs. the ranks of Y, treating the ranks as numbers. Rank correlation measures the strength of the monotonic relationship between two variables. Rank correlation defuses outliers by not honoring the original intervals between adjacent ranks: adjacent ranks simply differ by 1. A simpler formula for the rank correlation for small samples: if the difference in rank for a given case is D, then
$$\rho = 1 - \frac{6\sum D^2}{n(n^2 - 1)}.$$
If the ranks are identical for all cases, all D are zero and the correlation is 1. An example of the use of this formula is given on the next slide, after the code sketch below.
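A minimal Python sketch of the small-sample formula, assuming no tied values (names and data are illustrative, not from CPT):

```python
# Spearman rank correlation: rho = 1 - 6 * sum(D^2) / (n * (n^2 - 1)).
import numpy as np

def ranks(values):
    """Rank 1 = smallest value (assumes no ties)."""
    values = np.asarray(values)
    r = np.empty(len(values), dtype=float)
    r[np.argsort(values)] = np.arange(1, len(values) + 1)
    return r

def spearman(x, y):
    d = ranks(x) - ranks(y)
    n = len(d)
    return 1.0 - 6.0 * np.sum(d**2) / (n * (n**2 - 1))

x = [7, 9, 21, 189]    # the outlier 189 simply gets the top rank
y = [3, 8, 11, 40]
print(spearman(x, y))  # 1.0: the relationship is perfectly monotonic
```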

14 Spearman rank correlation
Rank correlation is simply the correlation between the ranks of X vs. the ranks of Y, treating the ranks as numbers. When there are outliers, or when the X and/or Y data are very much non-normal, the Spearman rank correlation should be computed in addition to the standard correlation. Example of conversion to ranks for X or for Y: the numbers 7, 9, 21, 189 receive the corresponding ranks 1, 2, 3, 4 (ranks can equivalently be assigned in the reverse order). Note in this example that the difference between 189 and 21 is treated as the same as that between 9 and 7.

15 Root-mean-square skill score (RMSSS) for continuous deterministic forecasts
RMSSS is defined as
$$\mathrm{RMSSS} = \left(1 - \frac{\mathrm{RMSE}_f}{\mathrm{RMSE}_s}\right) \times 100\%,$$
where RMSE_f = root mean square error of the forecasts, and RMSE_s = root mean square error of the standard used as the no-skill baseline. Both persistence and climatology can be used as the baseline. Persistence, for a given parameter, is the persisted anomaly from the period immediately prior to the long-range forecast (LRF) period being verified. For example, for seasonal forecasts, persistence is the seasonal anomaly from the season prior to the season being verified. Climatology is equivalent to persisting an anomaly of zero.

16 The root mean square error of the forecasts is
$$\mathrm{RMSE}_f = \sqrt{\frac{\sum_{i=1}^{N} W_i\,(f_i - O_i)^2}{\sum_{i=1}^{N} W_i}},$$
where i stands for a particular location (grid point or station); f_i = forecast anomaly at location i; O_i = observed or analyzed anomaly at location i; W_i = weight at grid point i when verification is done on a grid, set by W_i = cos(latitude); and N = total number of grid points or stations where verification is carried out. RMSSS is given as a percentage, while the RMS scores for f and for s are given in the same units as the verified parameter.
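A minimal Python sketch of the cos(latitude)-weighted RMSE and the RMSSS against a climatology baseline (the station values are invented; this is not CPT code):

```python
# Weighted RMSE and RMSSS; climatology = persisting a zero anomaly.
import numpy as np

def weighted_rmse(fcst, obs, lat_deg):
    w = np.cos(np.deg2rad(lat_deg))
    return np.sqrt(np.sum(w * (fcst - obs)**2) / np.sum(w))

def rmsss(fcst, obs, lat_deg):
    rmse_f = weighted_rmse(fcst, obs, lat_deg)
    rmse_s = weighted_rmse(np.zeros_like(obs), obs, lat_deg)  # climatology
    return (1.0 - rmse_f / rmse_s) * 100.0

lat = np.array([0.0, 10.0, 20.0, 30.0])   # station latitudes (degrees)
obs = np.array([1.0, -0.5, 0.8, -1.2])    # observed anomalies
fcst = np.array([0.7, -0.2, 0.5, -0.9])   # forecast anomalies
print(rmsss(fcst, obs, lat))              # > 0: beats climatology
```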

17 The RMS error, and hence the RMSSS, is governed by three main factors:
(1) the mean bias,
(2) the conditional bias, and
(3) the correlation between forecast and obs.
The mean and conditional biases inflate the RMS error, while the correlation reduces it. It is easy to correct for (1) using a hindcast history, and this will improve the score. In some cases (2) can also be removed, or at least decreased, and this will improve the RMS and the RMSSS further. Improving (1) and (2) does not improve (3), and it is most difficult to increase (3). If the tool is a dynamical model, a spatial MOS correction can increase (3) and help improve the RMS and RMSSS. Murphy (1988), Mon. Wea. Rev.

18 Verification of Probabilistic Categorical Forecasts: The Ranked Probability Skill Score (RPSS)
Epstein (1969), J. Appl. Meteor.
RPSS measures the cumulative squared error between the categorical forecast probabilities and the observed categorical probabilities, relative to a reference (or standard baseline) forecast. The observed categorical probabilities are 100% in the observed category and 0% in all other categories:
$$\mathrm{RPS} = \sum_{k=1}^{N_{cat}} \left( \sum_{j=1}^{k} P_{f,j} - \sum_{j=1}^{k} P_{o,j} \right)^{2},$$
where N_cat = 3 for tercile forecasts, P_{f,j} is the forecast probability for category j, and P_{o,j} is the observed probability for category j. The inner cumulative sums imply that the summation is done for cat 1, then cat 1 and 2, then cat 1 and 2 and 3.

19 The higher the RPS, the poorer the forecast. RPS = 0 means that a probability of 100% was given to the category that was observed. The RPSS compares the RPS of the forecast with the RPS of a reference forecast, such as one that gives climatological probabilities:
$$\mathrm{RPSS} = 1 - \frac{\mathrm{RPS}}{\mathrm{RPS}_{ref}}.$$
RPSS > 0 when the RPS for the actual forecast is smaller than the RPS for the reference forecast.
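A minimal Python sketch of RPS and RPSS for tercile forecasts (not CPT code). It reproduces the first worked calculation on the next slide, whose cumulative probabilities 0.20 and 0.50 correspond to a 20/30/50 forecast verified in the "above" category:

```python
# RPS: squared differences of cumulative forecast and observed
# probabilities; RPSS = 1 - RPS / RPS(reference).
import numpy as np

def rps(fcst_probs, obs_cat):
    """fcst_probs: (below, normal, above), summing to 1.
    obs_cat: index of the observed category (0, 1, or 2)."""
    obs_probs = np.zeros(len(fcst_probs))
    obs_probs[obs_cat] = 1.0
    # Cumulate: cat 1, then cats 1-2, then cats 1-3.
    return np.sum((np.cumsum(fcst_probs) - np.cumsum(obs_probs))**2)

def rpss(fcst_probs, obs_cat, ref_probs=(1/3, 1/3, 1/3)):
    return 1.0 - rps(fcst_probs, obs_cat) / rps(np.array(ref_probs), obs_cat)

print(round(rps([0.20, 0.30, 0.50], 2), 2))   # 0.29
print(round(rpss([0.20, 0.30, 0.50], 2), 2))  # 0.48
```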

20 Suppose that tercile forecast probabilities were issued for 15 stations in Colombia for JFM 1998 and verified against the observed categories. Sample RPS calculations (squared differences of cumulative below/normal/above probabilities):
RPS = (0 - .20)² + (0 - .50)² + (1 - 1)² = .04 + .25 + 0 = .29
RPS = (0 - .25)² + (0 - .60)² + (1 - 1)² = .06 + .36 + 0 = .42
RPS = (0 - .20)² + (0 - .55)² + (1 - 1)² = .04 + .30 + 0 = .34
RPS = (0 - .25)² + (1 - .60)² + (1 - 1)² = .06 + .16 + 0 = .22
RPS = (0 - .15)² + (0 - .45)² + (1 - 1)² = .02 + .20 + 0 = .22
Finding the RPS for the reference (climatological baseline) forecasts:
for the 1st forecast, RPS(clim) = (0 - .33)² + (0 - .67)² + (1 - 1)² = .556
for the 7th forecast, RPS(clim) = (0 - .33)² + (1 - .67)² + (1 - 1)² = .222
For any forecast whose observation is "below" or "above", RPS(clim) = .556.

21 RPS and RPSS for the 15 forecasts, where RPSS = 1 - RPS/RPS(clim):
RPS = .29, RPSS = 1 - (.29/.556) = .48
RPS = .42, RPSS = 1 - (.42/.556) = .24
RPS = .42, RPSS = 1 - (.42/.556) = .24
RPS = .34, RPSS = 1 - (.34/.556) = .39
RPS = .22, RPSS = 1 - (.22/.556) = .60
RPS = .42, RPSS = 1 - (.42/.556) = .24
RPS = .22, RPSS = 1 - (.22/.222) = .01
RPS = .42, RPSS = 1 - (.42/.556) = .24
RPS = .34, RPSS = 1 - (.34/.556) = .39
RPS = .42, RPSS = 1 - (.42/.556) = .24
RPS = .22, RPSS = 1 - (.22/.222) = .01
RPS = .22, RPSS = 1 - (.22/.222) = .01
RPS = .22, RPSS = 1 - (.22/.556) = .60
RPS = .42, RPSS = 1 - (.42/.556) = .24
RPS = .42, RPSS = 1 - (.42/.556) = .24
Finding the RPS for the reference (climatological baseline) forecasts:
when obs = "below", RPS(clim) = (0 - .33)² + (0 - .67)² + (1 - 1)² = .556
when obs = "normal", RPS(clim) = (0 - .33)² + (1 - .67)² + (1 - 1)² = .222
when obs = "above", RPS(clim) = (0 - .33)² + (0 - .67)² + (1 - 1)² = .556

22 RPSS for various tercile probability forecasts, when the observation is "above".
[Table of forecast tercile probabilities and the resulting RPSS values.]
Note: issuing over-confident forecasts causes a high penalty when they are incorrect. Skill comes out best for the "true" probabilities; "true" forecast probabilities have good reliability.

23 The likelihood score
The likelihood score is the nth root of the product of the probabilities given for the event that was later observed:
$$L = \left( \prod_{i=1}^{n} p_i \right)^{1/n},$$
where p_i is the probability that forecast i assigned to the category that occurred. The likelihood score disregards what probabilities were forecast for the categories that did not occur. For example, using terciles, suppose 5 forecasts were given and the likelihood score (n = 5) came out to 0.40. This score can then be scaled such that the climatological probability (0.333 for terciles) becomes 0% and 1 becomes 100%. A score of 0.40 would translate to (0.40 - 0.333) / (1 - 0.333) = 10.0%.
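A minimal Python sketch of the likelihood score and its rescaling (the five probabilities are invented for illustration; not CPT code):

```python
# Likelihood score: geometric mean (nth root of the product) of the
# probabilities assigned to the categories that were actually observed.
import numpy as np

def likelihood_score(p_observed):
    p = np.asarray(p_observed, dtype=float)
    return np.exp(np.mean(np.log(p)))  # numerically safer than prod**(1/n)

def scaled_likelihood(p_observed, p_clim=1/3):
    """Rescale so that climatology maps to 0% and a perfect 1 to 100%."""
    L = likelihood_score(p_observed)
    return 100.0 * (L - p_clim) / (1.0 - p_clim)

# Probabilities given to whichever tercile category was later observed:
p_obs = [0.50, 0.45, 0.30, 0.40, 0.40]
print(round(likelihood_score(p_obs), 2))   # ~0.40
print(round(scaled_likelihood(p_obs), 1))  # ~10.6 (%)
```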

24 Relative Operating Characteristics (ROC) for Probabilistic Forecasts
Mason, I. (1982), Australian Met. Magazine
The contingency table that ROC verification is based on:

              | Observation: Yes | Observation: No
Forecast: Yes | O1 (hit)         | NO1 (false alarm)
Forecast: No  | O2 (miss)        | NO2 (correct rejection)

Hit Rate = O1 / (O1 + O2)
False Alarm Rate = NO1 / (NO1 + NO2)

The hit rate and false alarm rate are determined for descending categories of forecast probability. For high forecast probabilities, we hope the hit rate will be high and the false alarm rate low; for low forecast probabilities, we hope the hit rate will be low and the false alarm rate high. For in-between probabilities, we expect some of each.
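A minimal Python sketch of the hit rate and false alarm rate accumulated over descending probability thresholds, as used to draw a ROC curve (the data are invented; not CPT code):

```python
# ROC points: at each threshold, treat "prob >= threshold" as a yes
# forecast and tally hits and false alarms against the observations.
import numpy as np

def roc_points(fcst_prob, event_occurred, thresholds):
    fcst_prob = np.asarray(fcst_prob)
    event = np.asarray(event_occurred, dtype=bool)
    points = []
    for t in sorted(thresholds, reverse=True):
        yes = fcst_prob >= t
        hit_rate = np.sum(yes & event) / max(np.sum(event), 1)
        far = np.sum(yes & ~event) / max(np.sum(~event), 1)
        points.append((far, hit_rate))  # (x, y) on the ROC diagram
    return points

probs = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.2]
occurred = [1, 1, 0, 1, 0, 1, 0, 0]
for far, hr in roc_points(probs, occurred, [0.2, 0.4, 0.6, 0.8]):
    print(f"FAR={far:.2f}  HitRate={hr:.2f}")
```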

25 Example from Mason and Graham (2002), QJRMS, for eastern Africa: OND simulations (observed SST forcing) using the ECHAM3 AGCM.
[ROC diagram: curves farther toward the upper left show greater skill; the diagonal marks no skill, and curves below it indicate negative skill.]
The curves are cumulative from left to right. For example, the "20%" point really means "100% + 90% + 80% + ... + 20%".

26 RELIABILITY ANALYSIS
The benefits of the Bayesian multi-model ensemble are evidenced in an analysis of reliability (the correspondence between forecast probability and the relative observed frequency of occurrence). Simple pooling (assignment of equal weights to all AGCMs) gives more reliability than the individual AGCMs, but the Bayesian method produces even more reliability. Note that flattish lines show model overconfidence, while the 45° line shows perfect reliability.
[Reliability diagrams for Above-Normal and Below-Normal JAS precipitation, 30S-30N: forecast probability vs. observed relative frequency for individual AGCMs, the pooled (3-model) ensemble, and the Bayesian combination. From Goddard et al. 2003, EGS-AGU-EGU Joint Assembly, Nice, France, April 2003.]
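A minimal Python sketch of the computation behind a reliability diagram: bin the issued probabilities and compare each bin's mean forecast probability with the observed relative frequency (synthetic data, reliable by construction; not CPT code):

```python
import numpy as np

def reliability_curve(fcst_prob, event_occurred, n_bins=5):
    fcst_prob = np.asarray(fcst_prob, dtype=float)
    event = np.asarray(event_occurred, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    curve = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (fcst_prob >= lo) & (fcst_prob < hi)
        if in_bin.any():
            # Perfect reliability: these two numbers match (45-degree line).
            curve.append((fcst_prob[in_bin].mean(), event[in_bin].mean()))
    return curve

rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 500)        # issued probabilities
occ = rng.uniform(0, 1, 500) < p  # a perfectly reliable forecast system
for f, o in reliability_curve(p, occ):
    print(f"forecast {f:.2f} -> observed frequency {o:.2f}")
```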

27 RELIABILITY DIAGRAM
[Schematic reliability diagram showing the 45° line of perfect reliability.]

28 Use of multiple verification scores is encouraged.
Different skill scores emphasize different aspects of skill. It is usually a good idea to use more than one score and to determine more than one aspect of skill. A reliability plot is also informative. Hit scores (such as Heidke) are increasingly being recognized as poor measures of probabilistic skill, since the probabilities are ignored (except for identifying which category has the highest probability). ROC, likelihood, and RPSS are encouraged.

29 Tercile probabilities for various correlation skills and predictor signal strengths (in SDs). Assumes a Gaussian probability distribution. Forecast (F) signal = (Predictor Signal) x (Correlation Skill). Each cell shows the below / normal / above probabilities in percent; the F signal for a cell is the product of its row and column headings.

Correl.   Signal 0.0   Signal +0.5   Signal +1.0   Signal +1.5   Signal +2.0
skill
0.00      33/33/33     33/33/33      33/33/33      33/33/33      33/33/33
0.20      33/34/33     29/34/37      26/33/41      23/33/45      20/31/49
0.30      33/35/33     27/34/38      22/33/45      17/31/51      14/29/57
0.40      32/36/32     25/35/40      18/33/49      13/30/57       9/25/65
0.50      31/38/31     22/37/42      14/33/53       9/27/64       5/21/74
0.60      30/41/30     18/38/44      10/32/58       5/23/72       2/15/83
0.70      27/45/27     13/41/46       6/30/65       2/17/81       1/8/91
0.80      24/53/24      8/44/48       2/25/73      0*/10/90      0**/3/97

(* a small nonzero value, rounded to 0; ** 0.04)
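A minimal Python sketch that reproduces the table's cells under its stated Gaussian assumption. The residual spread sqrt(1 - r²) used for the forecast distribution is an inference consistent with the tabulated values, not something stated explicitly on the slide:

```python
# Tercile probabilities from a Gaussian forecast distribution with
# mean = correlation * predictor signal (the "F signal") and spread
# sqrt(1 - correlation^2); tercile boundaries at +/-0.431 SD.
import numpy as np
from scipy.stats import norm

def tercile_probs(correl_skill, predictor_signal):
    lower, upper = norm.ppf(1/3), norm.ppf(2/3)  # -0.431, +0.431
    mu = correl_skill * predictor_signal          # the F signal
    sigma = np.sqrt(1.0 - correl_skill**2)        # residual spread
    below = norm.cdf(lower, mu, sigma)
    above = 1.0 - norm.cdf(upper, mu, sigma)
    return np.round(100 * np.array([below, 1 - below - above, above]))

print(tercile_probs(0.50, 1.0))  # ~[14. 33. 53.], matching the table
print(tercile_probs(0.80, 2.0))  # ~[ 0.  3. 97.]
```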

