Backtesting of Stochastic Mortality Models
Kevin Dowd (CRIS, NUBS)
Andrew J. G. Cairns (Heriot-Watt)
David Blake (Pensions Institute, Cass Business School)
Guy D. Coughlan (JPMorgan)
David Epstein (JPMorgan)
Marwa Khalaf-Allah (JPMorgan)
October 2008
Plan for talk
– Background
– Backtesting framework
– Backtests
  – Contracting horizon
  – Expanding horizon
  – Rolling window
– Conclusions
Background
– Stochastic mortality models
– Limited data => model risk
– Ongoing study: 8 models
– Part of a suite of four papers:
  – Model fitting
  – Forecasting
  – Goodness of fit
  – Backtesting
Background: Backtesting
– To set out a comprehensive framework to backtest the forecast performance of mortality models
– Evaluation of forecasts against out-of-sample outcomes
– 6 models out of the original 8 backtested
Models considered
– Model M1 = Lee-Carter, no cohort effect
– Model M2 = Renshaw-Haberman (2006) cohort-effect generalisation of M1
– Model M3 = age-period-cohort model
– Model M5 = CBD two-factor model, Cairns et al. (2006), no cohort effect
– Models M6 and M7 = cohort-effect generalisations of CBD
(The standard M1 and M5 formulations are sketched below.)
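For reference, a sketch of the standard formulations of the two base models named above, in the notation used in the comparison papers listed at the end of the deck; the cohort-effect generalisations (M2, M3, M6, M7) add further age/cohort terms to these structures:

```latex
% Illustrative (standard) formulations; m(t,x) is the death rate and
% q(t,x) the death probability for age x in year t.
\begin{align}
\text{M1 (Lee-Carter):}\quad \log m(t,x) &= \beta_x^{(1)} + \beta_x^{(2)}\,\kappa_t^{(2)} \\[4pt]
\text{M5 (CBD):}\quad \operatorname{logit} q(t,x) &= \kappa_t^{(1)} + \kappa_t^{(2)}\,(x - \bar{x})
\end{align}
```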
6 models backtested
Motivation for present study
– A model might:
  – give a good fit to past data, and
  – generate density forecasts that appear plausible ex ante
– and still produce poor forecasts
– Hence, it is essential to test the performance of models against subsequently realised outcomes
  – This is what backtesting is about
Backtesting framework
Choose:
– Metric of interest, e.g. mortality rates, survival rates, life expectancy, annuity prices, etc. (conversions sketched below)
– Historical look-back window used to estimate model parameters
– Forecast horizon or look-forward window for forecasts
Implement:
– Tests of how well forecasts subsequently performed
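A minimal sketch of how a forecast path of death rates might be converted into the other metrics listed above; the function names and the constant-force approximation p = exp(-m) are illustrative assumptions, not the papers' code:

```python
import numpy as np

def survival_curve(m):
    """Survival probabilities from a path of age-specific death rates m[0], m[1], ...
    using the constant-force-of-mortality approximation p = exp(-m)."""
    p = np.exp(-np.asarray(m))           # one-year survival probabilities
    return np.cumprod(p)                  # t-year survival probabilities

def annuity_value(m, rate=0.04):
    """Expected present value of 1 per year, paid at the end of each year survived."""
    tp = survival_curve(m)
    v = (1.0 + rate) ** -np.arange(1, len(tp) + 1)   # discount factors
    return float(np.sum(v * tp))

# Example: a flat 2% death rate over 25 years
print(annuity_value([0.02] * 25))
```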
Backtesting framework
– We choose to focus mainly on the mortality rate as the metric
– We choose a fixed 10-year look-back window
  – This seems to be emerging as the standard amongst practitioners
– We examine a range of backtests (the basic loop is sketched below):
  – Over contracting horizons
  – Over expanding horizons
  – Over rolling fixed-length horizons
  – Future mortality density tests
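A minimal sketch of the overall backtest loop under these choices; fit_model and forecast_rate are hypothetical helpers standing in for whichever of the six models is being tested, and realised is an assumed table of observed rates:

```python
LOOKBACK = 10  # fixed 10-year look-back window, as in the study

def backtest(realised, fit_model, forecast_rate, start_years, target_year, age=65):
    """Compare model forecasts of the death rate at `age` in `target_year`
    against the realised value, refitting on successive look-back windows.

    realised      : dict mapping (year, age) -> observed death rate
    fit_model     : callable(data_years) -> fitted model object (assumed)
    forecast_rate : callable(model, target_year, age) -> (median, lower, upper) (assumed)
    """
    results = []
    for start in start_years:
        window = range(start, start + LOOKBACK)        # e.g. 1971-1980
        model = fit_model(window)
        median, lo, hi = forecast_rate(model, target_year, age)
        actual = realised[(target_year, age)]
        results.append({
            "window": (start, start + LOOKBACK - 1),
            "median": median,
            "interval": (lo, hi),
            "actual": actual,
            "below_lower": actual < lo,
            "above_upper": actual > hi,
        })
    return results
```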
Backtesting framework
– We consider forecasts both with and without parameter uncertainty
– Parameter-certain (PC) case: treats parameter estimates as if they were known values
– Parameter-uncertain (PU) case: allows for uncertainty in the parameters governing the period and cohort effects (illustrated in the sketch below)
– Results indicate it is very important to allow for parameter uncertainty
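To illustrate the PC/PU distinction, a minimal sketch of simulating a period index as a random walk with drift, with and without uncertainty in the estimated drift; the normal approximation to the drift's sampling error is an illustrative assumption, not the papers' estimation procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_kappa(kappa_last, drift_hat, sigma_hat, n_obs, horizon,
                   n_sims=10_000, parameter_uncertainty=True):
    """Simulate a period index kappa_t = kappa_{t-1} + drift + sigma * eps_t.

    PC case: use the point estimates (drift_hat, sigma_hat) as if known.
    PU case: also redraw the drift in each simulation from its approximate
             sampling distribution, drift ~ N(drift_hat, sigma_hat / sqrt(n_obs)).
    """
    if parameter_uncertainty:
        drift = rng.normal(drift_hat, sigma_hat / np.sqrt(n_obs), size=n_sims)
    else:
        drift = np.full(n_sims, drift_hat)
    shocks = rng.normal(0.0, sigma_hat, size=(n_sims, horizon))
    return kappa_last + np.cumsum(drift[:, None] + shocks, axis=1)

# Prediction intervals widen noticeably once drift uncertainty is included
pc = simulate_kappa(-20.0, -0.8, 0.6, n_obs=10, horizon=26, parameter_uncertainty=False)
pu = simulate_kappa(-20.0, -0.8, 0.6, n_obs=10, horizon=26, parameter_uncertainty=True)
print(np.percentile(pc[:, -1], [2.5, 97.5]), np.percentile(pu[:, -1], [2.5, 97.5]))
```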
Contracting horizon
– Fixed forecasting date: 2006
– Forecast 1: data from 1971-1980
– Forecast 2: data from 1972-1981
– …
– Forecast 26: data from 1996-2005 (the full set of windows is enumerated below)
– 6 models
– England & Wales males, ages 60-89
– With and without parameter uncertainty
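The 26 estimation windows implied by this design can be enumerated directly; a small sketch (window endpoints only, no model fitting), which also shows why the horizon "contracts" from 26 years down to 1 year:

```python
TARGET_YEAR = 2006
windows = [(start, start + 9) for start in range(1971, 1997)]  # 26 ten-year windows
assert windows[0] == (1971, 1980) and windows[-1] == (1996, 2005)
# Forecast horizon contracts from 2006 - 1980 = 26 years down to 2006 - 2005 = 1 year
horizons = [TARGET_YEAR - end for _, end in windows]
print(len(windows), horizons[0], horizons[-1])  # 26 26 1
```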
Contracting horizon: age 65
Contracting horizon: age 75
Contracting horizon: age 85
Conclusions so far
– Big difference between PC and PU forecasts
– PU prediction intervals usually considerably wider than PC ones
– M2B sometimes unstable
Expanding horizons
– Data from 1971-1980
– Forecasts to:
  – 1981
  – 1982
  – …
  – 2006
Prediction intervals from 1980: age 65
Prediction intervals from 1980: age 75
Prediction intervals from 1980: age 85
Expanding horizon conclusions
– PC models: too many lower exceedances
– PU models: lower exceedances much closer to expectations
  – Especially for M1, M7 and M3B
  – Suggests that PU forecasts are more plausible than PC ones
– Caution: 1 highly-correlated sample path!
– Negligible differences between PC and PU median predictions
– Very few upper exceedances
Expanding horizon conclusions
– Too few upper exceedances, and too many median and lower exceedances => some bias, especially for PC forecasts (a tallying sketch follows below)
– Bias especially pronounced for PC forecasts
– Evidence of upward bias less clear-cut for PU forecasts
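A minimal sketch of how exceedance counts like those above could be tallied against a prediction interval; the array names are illustrative, and the expected counts follow directly from the interval's nominal coverage:

```python
import numpy as np

def exceedance_summary(actual, lower, upper, median, coverage=0.90):
    """Count how often realised rates fall below/above the prediction interval
    and below the median, and compare with what nominal coverage implies."""
    actual, lower, upper, median = map(np.asarray, (actual, lower, upper, median))
    n = len(actual)
    tail = (1.0 - coverage) / 2.0
    return {
        "lower exceedances": (int(np.sum(actual < lower)), n * tail),
        "upper exceedances": (int(np.sum(actual > upper)), n * tail),
        "below median":      (int(np.sum(actual < median)), n * 0.5),
    }  # each entry: (observed count, expected count)
```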
Rolling fixed horizon forecasts
– From now on, work with PU forecasts only
– Assume illustrative horizon = 15 years
– Data from 1971-1980 – forecast to 1995
– Data from 1972-1981 – forecast to 1996
– …
– Data from 1982-1991 – forecast to 2006 (window/target pairs enumerated below)
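As with the contracting-horizon design, the rolling windows can be enumerated directly; a small sketch of the 12 window/target pairs implied above:

```python
HORIZON = 15  # fixed look-forward horizon in years
pairs = [((start, start + 9), start + 9 + HORIZON) for start in range(1971, 1983)]
assert pairs[0] == ((1971, 1980), 1995) and pairs[-1] == ((1982, 1991), 2006)
print(len(pairs))  # 12 rolling 10-year windows, each forecasting 15 years ahead
```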
Model M1
Model M2B
Model M3B
Model M5
Model M6
Model M7
Tentative conclusions so far
– Rolling horizon charts broadly consistent with earlier results
– Some evidence of upward bias, but not consistent across models or always especially compelling
– M2B again shows instability
Overall conclusions
– Study outlines a framework for backtesting forecasts of mortality models
– As regards individual models and this dataset:
  – M1, M3B, M5 and M7 perform well most of the time and there is little between them
  – M2B unstable
  – Of the Lee-Carter family of models, hard to choose between M1 and M3B
  – Of the CBD family, M7 seems to perform best
Two other points stand out
– In many but not all cases, and depending also on the model, there is evidence of an upward bias in forecasts
  – This is very pronounced for PC forecasts
  – This bias is less pronounced for PU forecasts
  – PU forecasts are more plausible than the PC forecasts
– Very important: take account of parameter uncertainty regardless of the model one uses
References
– Cairns et al. (2007) "A quantitative comparison of stochastic mortality models using data from England & Wales and the United States." Pensions Institute Discussion Paper PI-0701, March.
– Cairns et al. (2008) "The plausibility of mortality density forecasts: An analysis of six stochastic mortality models." Pensions Institute Discussion Paper PI-0801, April.
– Dowd et al. (2008a) "Evaluating the goodness of fit of stochastic mortality models." Pensions Institute Discussion Paper PI-0802, September.
– Dowd et al. (2008b) "Backtesting stochastic mortality models: An ex-post evaluation of multi-year-ahead density forecasts." Pensions Institute Discussion Paper PI-0803, September.
– These papers are also available at www.lifemetrics.com