1 GI ROC Effectiveness of Reserving Methods working party GIRO workshop 23-26 September 2008

2 Workshop agenda
 More about our testing methodology
 Recap of issues raised in plenary
 Other discussion points

3 More about testing methodology

4 Separate testing streams (1)
 Effectiveness of the "pure" method
 Value added by actuarial judgement
 Value added by understanding the business
 Testing of mechanical operation of methods
 Testing by individual actuaries with limited background info
 Testing by individual actuaries with detailed background info

5 Separate testing streams (2)
Mechanical testing
 Macro-based
 Focus on pseudo-data
 Multiple year-ends
 Many methods and variations
 Test effectiveness of each method in isolation
Manual testing
 Individual actuaries
 Focus on real data
 Multiple year-ends
 Core methods
 Test individual methods and overall selected results
 2 subgroups: limited/detailed background info

6 Core reserving methods
 Chain ladder (PCL, ICL)
 Bornhuetter-Ferguson (PBF, IBF)
 ACPC-based methods (APC, AIC, PPCI, PPCF)
 Case estimate-based methods (PCE)
 Operational time (OpTime)
 Probabilistic trend family (eg ICRFS)
 Other (user-selected) methods…
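As an illustration of the first two core methods, the sketch below shows a paid chain ladder and a paid Bornhuetter-Ferguson projection on a small made-up triangle; it is not the working party's implementation, and the premiums and prior loss ratio are invented example values.

```python
# Illustrative sketch only (not the working party's code): a paid chain ladder (PCL)
# and a paid Bornhuetter-Ferguson (PBF) projection. The triangle, premiums and
# initial expected ultimate loss ratio (IEULR) are made-up example values.
import numpy as np

# Cumulative paid triangle: rows = origin years, columns = development periods;
# np.nan marks cells not yet observed.
triangle = np.array([
    [1000.0, 1800.0, 2200.0, 2300.0],
    [1100.0, 2000.0, 2450.0, np.nan],
    [1200.0, 2100.0, np.nan, np.nan],
    [1300.0, np.nan, np.nan, np.nan],
])
n = triangle.shape[0]

# Volume-weighted (all-years) development factors.
factors = []
for j in range(n - 1):
    col, nxt = triangle[:, j], triangle[:, j + 1]
    mask = ~np.isnan(nxt)
    factors.append(nxt[mask].sum() / col[mask].sum())

# Chain ladder: project the latest diagonal by the product of the remaining factors.
latest_col = [n - 1 - i for i in range(n)]                 # latest observed period per origin year
latest = np.array([triangle[i, latest_col[i]] for i in range(n)])
cl_ultimate = np.array([latest[i] * np.prod(factors[latest_col[i]:]) for i in range(n)])

# Bornhuetter-Ferguson: latest paid plus the undeveloped share of the prior expectation.
premium = np.array([3000.0, 3100.0, 3200.0, 3300.0])       # illustrative premiums
ieulr = 0.80                                               # illustrative prior loss ratio
pct_developed = np.array([1.0 / np.prod(factors[latest_col[i]:]) for i in range(n)])
bf_ultimate = latest + premium * ieulr * (1.0 - pct_developed)

print("Chain ladder ultimates:", np.round(cl_ultimate))
print("BF ultimates:          ", np.round(bf_ultimate))
```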

7 Variations on standard methods
Development factor selection basis
 volume-weighted
 time-weighted
 unweighted
 ex high/low
 last 3/5/all years
 tail factor extrapolation techniques
BF IEULR selection basis
 Cape Cod method
 use of rating index
 average of last few years
 use of benchmark ULRs
BF exposure measure
 premium
 vehicle-years (motor)
 wage-roll (EL)
 ultimate claim count
Other variations
 inflation-adjusted chain ladder
 incurred equivalents for ACPC techniques
 alternative parameterisation of standard techniques
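To make the development factor selection bases concrete, the following sketch (illustrative only, not the working party's testing code) computes volume-weighted, unweighted and "ex high/low" factors for one development period, optionally restricted to the last n origin years; the example columns are made up.

```python
# Illustrative sketch of some of the development factor selection bases listed above
# (volume-weighted, unweighted, "ex high/low", last n years); data are made up.
import numpy as np

def dev_factor(col, nxt, basis="volume", last_n=None):
    """Development factor for one period from two adjacent triangle columns."""
    mask = ~np.isnan(col) & ~np.isnan(nxt)
    prior, later = col[mask], nxt[mask]
    if last_n is not None:                    # restrict to the most recent n origin years
        prior, later = prior[-last_n:], later[-last_n:]
    ratios = later / prior
    if basis == "volume":                     # volume-weighted average
        return later.sum() / prior.sum()
    if basis == "unweighted":                 # simple average of individual link ratios
        return ratios.mean()
    if basis == "ex_high_low":                # drop the highest and lowest ratio
        trimmed = np.sort(ratios)[1:-1]
        return trimmed.mean() if trimmed.size else ratios.mean()
    raise ValueError(f"unknown basis: {basis}")

# Example: first development period of a small illustrative triangle.
col0 = np.array([1000.0, 1100.0, 1200.0, 1300.0])
col1 = np.array([1800.0, 2000.0, 2100.0, np.nan])
for basis in ("volume", "unweighted", "ex_high_low"):
    print(basis, round(dev_factor(col0, col1, basis=basis), 3))
print("volume, last 2 years:", round(dev_factor(col0, col1, basis="volume", last_n=2), 3))
```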

8 Recap

9-12 (slides without transcript text)

13 Discussion points…

14 Testing exercise – how was it for you?
 Realistic vs artificial
 Peer review (or absence thereof)
 Mix of methods tested
 Diagnostics provided
 Understanding the business (or lack thereof)
 Use of judgement vs reversion to mechanical
 Do it again?

15 “Pseudo-data”
 Algorithm developed by CAS Loss Simulation Model WP
 Parameters required for, inter alia:
  Exposure
  Probability of loss/claim event
  Reporting, inter-valuation & settlement delays
  Changes in case estimates
  Payment amounts compared to case estimates
  Trends & seasonal effects
  Correlations & clustering
 Trying to reflect the way the real world operates
 Pseudo-data can be used to isolate responses of methods to specific issues, by use of “control” datasets with defined variations
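For readers less familiar with simulated claims data, the sketch below shows the flavour of such a generation process in heavily reduced form: claim counts from exposure and frequency, reporting and settlement delays, severities and an inflation trend, rolled up into a censored paid triangle. It is not the CAS Loss Simulation Model, and every parameter value is an illustrative assumption.

```python
# A much-reduced sketch of pseudo-data generation; it is NOT the CAS Loss Simulation
# Model. Every parameter value below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(42)

origin_years = 5
exposure = 1000                     # exposure units per origin year
claim_freq = 0.05                   # expected claims per exposure unit
mean_report_delay = 0.5             # years from occurrence to reporting
mean_settle_delay = 1.5             # further years from reporting to payment
sev_mu, sev_sigma = 7.0, 1.0        # lognormal severity parameters
inflation = 0.03                    # calendar-year claims inflation trend

# Simulate individual claims: count, delays, severity, inflation to payment date.
claims = []
for oy in range(origin_years):
    n_claims = rng.poisson(exposure * claim_freq)
    report = rng.exponential(mean_report_delay, n_claims)
    settle = report + rng.exponential(mean_settle_delay, n_claims)
    paid = rng.lognormal(sev_mu, sev_sigma, n_claims) * (1 + inflation) ** (oy + settle)
    claims += [(oy, int(s), p) for s, p in zip(settle, paid)]

# Roll transactions up into a cumulative paid triangle, censored at the latest diagonal.
# (Payments falling beyond the triangle's development horizon are ignored in this sketch.)
triangle = np.zeros((origin_years, origin_years))
for oy, dev, amount in claims:
    if dev < origin_years:
        triangle[oy, dev:] += amount            # a payment counts in all later periods
for oy in range(origin_years):
    triangle[oy, origin_years - oy:] = np.nan   # censor cells not yet observable

print(np.round(triangle))
```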

16 Mechanical testing
 Projection of data based on pre-defined algorithms
 Variation of 4 key components:
  Development factor averaging
  Tail factor extrapolation
  IEULR selection basis
  Claims inflation
 Major uncertainty = tail factors
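One natural way to organise this stream is to enumerate every combination of the four components as a grid of method variants to run against each pseudo-dataset; the option names in the sketch below are illustrative placeholders rather than the working party's actual configuration.

```python
# Illustrative sketch of how the mechanical testing stream could enumerate method
# variants from the four components above; the option names are placeholders,
# not the working party's actual configuration.
from itertools import product

averaging_bases   = ["volume_all", "volume_last5", "unweighted", "ex_high_low"]
tail_methods      = ["no_tail", "exponential_decay_fit", "benchmark_tail"]
ieulr_bases       = ["cape_cod", "average_last_years", "benchmark_ulr"]
inflation_options = ["unadjusted", "inflation_adjusted"]

variants = list(product(averaging_bases, tail_methods, ieulr_bases, inflation_options))
print(f"{len(variants)} method variants to run against each pseudo-dataset")

def run_variant(triangle, averaging, tail, ieulr, inflation):
    """Placeholder: project the triangle with this combination of options and return
    the estimated ultimate, to be compared against the known simulated outcome."""
    raise NotImplementedError

# for combo in variants:
#     estimate = run_variant(triangle, *combo)
```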

17 CAS Reserving Methods Research
 Tail Factors Working Party
  Review of existing literature and identification of methods more commonly used in the market
  Explanation of methods, advantages/disadvantages of each, and where they are used
  Testing exercise using real data
 BF Initial Expected Losses Working Party
  Research into the methods used in the industry
  Comparison of methods relying purely on the claims development triangle versus those using other (external) sources of information
  Strengths and weaknesses of each method, and when each is appropriate to use
 Both working parties have yet to publish formal reports of their work

18 A philosophical question
 What do we mean by an “effective” method?
 Which is more “effective”? (see the sketch after this slide)
  A method that frequently differs widely from the eventual outcome but, on average over many trials, comes very close to it; or
  A method that varies less around the eventual outcome but, on average over many trials, is not as close to it; or
  A method that gives a good answer at an early stage of development, but whose accuracy doesn't improve over time
 Different methods may be more effective in different circumstances
 Development of a “method reliability index” versus graphical analysis of estimates
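One simple way to quantify this trade-off is to report a method's bias (average error over many trials) and its spread (root mean squared error) separately. The sketch below contrasts a noisy-but-unbiased method with a stable-but-slightly-biased one using made-up numbers; an actual "method reliability index" could weight these components differently.

```python
# Illustrative sketch of one way to make the trade-off concrete: separate a method's
# bias (average error over many trials) from its spread (RMSE). The two "methods"
# and all numbers below are made up.
import numpy as np

def effectiveness_summary(estimates, actual):
    errors = np.asarray(estimates) - np.asarray(actual)
    bias = errors.mean()                      # close to zero on average?
    rmse = np.sqrt((errors ** 2).mean())      # how far off is a typical trial?
    return bias, rmse

rng = np.random.default_rng(0)
actual = np.full(500, 100.0)                      # true outcome in each simulated trial
method_a = actual + rng.normal(0.0, 20.0, 500)    # unbiased on average, widely scattered
method_b = actual + rng.normal(-5.0, 5.0, 500)    # consistently a little low, but stable

for name, est in (("A", method_a), ("B", method_b)):
    bias, rmse = effectiveness_summary(est, actual)
    print(f"Method {name}: bias {bias:+.1f}, RMSE {rmse:.1f}")
```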

19 Some surprising conclusions…
 We found it difficult to spot any clear difference in results between:
  qualified actuaries & students
  experienced & inexperienced actuaries
  different territories
  judgemental projections & mechanical projections

20 GI ROC Effectiveness of Reserving Methods working party GIRO workshop 23-26 September 2008

