
When can accident years be regarded as development years? Glen Barnett, Ben Zehnwirth, Eugene Dubossarsky Speaker: Dr Glen Barnett Senior Research Statistician, Insureware

Outline of talk
- What do we mean by "the chain ladder"?
- The basic "transpose-invariance" result
- Demonstrating the result (outline of a simple proof)
- What does the result tell us?
  i) Structure: accident years vs development years
  ii) Number of parameters
  iii) Cross-classification structure and ordering
- What lessons are there for other ratio methods?

What do we mean by "the chain ladder"? In its standard form:
- a loss development technique for cumulative paid and incurred arrays
- it uses volume-weighted average ratios (where "volume" is the previous column)
- it gives the factor b_j = (sum of column j) / (sum of column j-1), where each sum is over the observations present in both columns.
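As a concrete illustration of the volume-weighted factors (a minimal sketch in Python/NumPy; the figures are hypothetical, not from the talk):

```python
import numpy as np

# hypothetical cumulative triangle: rows are accident years,
# columns are development years, np.nan marks unobserved cells
cum = np.array([[100., 150., 175.],
                [110., 165., np.nan],
                [120., np.nan, np.nan]])

s = cum.shape[0]
factors = []
for j in range(1, s):
    # sum only over the rows observed in BOTH column j-1 and column j
    both = ~np.isnan(cum[:, j - 1]) & ~np.isnan(cum[:, j])
    factors.append(cum[both, j].sum() / cum[both, j - 1].sum())

# b_1 = (150 + 165) / (100 + 110) = 1.5
# b_2 = 175 / 150
```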

Chain ladder for incremental arrays
You can think of the chain ladder as a way to forecast incremental arrays as well. Take an incremental array (say, incremental paid) and:
1. cumulate across
2. compute ratios
3. forecast
4. difference back to incrementals

The basic result
Produce two tables:
1) forecast an incremental array with the chain ladder (using the four steps)
2) transpose the array (interchange accident and development years), forecast that with the chain ladder, then transpose back.
Tables 1 and 2 are identical.

Another way to think about it
1) forecast an incremental array with the chain ladder
2) take the same incremental array and apply the four steps down the columns instead:
1. cumulate down
2. compute ratios running down
3. forecast down
4. difference (up) back to incrementals
Tables 1 and 2 are identical.
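Both the four-step recipe and the transpose result can be checked numerically. A minimal sketch in Python/NumPy with hypothetical figures (the function and triangle are illustrative, not from the talk):

```python
import numpy as np

def chain_ladder_incremental(tri):
    """Forecast the missing cells of an incremental triangle with the
    chain ladder's four steps: cumulate across, compute volume-weighted
    ratios, forecast, difference back. np.nan marks unobserved cells."""
    tri = np.asarray(tri, dtype=float)
    s = tri.shape[0]
    obs = ~np.isnan(tri)

    # step 1: cumulate across each accident year
    cum = np.cumsum(np.where(obs, tri, 0.0), axis=1)
    cum[~obs] = np.nan

    # steps 2 and 3: ratio from originally observed rows, then fill
    # the missing cumulatives column by column, left to right
    for j in range(1, s):
        both = obs[:, j - 1] & obs[:, j]
        b = cum[both, j].sum() / cum[both, j - 1].sum()
        miss = np.isnan(cum[:, j])
        cum[miss, j] = cum[miss, j - 1] * b

    # step 4: difference back to incrementals
    return np.diff(np.column_stack([np.zeros(s), cum]), axis=1)

tri = np.array([[100., 50., 25.],
                [110., 55., np.nan],
                [120., np.nan, np.nan]])

direct = chain_ladder_incremental(tri)
# transpose, forecast, transpose back; the triangle's nan pattern is
# symmetric, so the same function applies to the transposed array
via_transpose = chain_ladder_incremental(tri.T).T

assert np.allclose(direct, via_transpose)  # tables 1 and 2 are identical
```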

Demonstrating the result
(i) Show that an incremental forecast from the chain ladder is the same as taking the sums of the incrementals in each of three regions of the triangle (A, B, C) and computing p̂_ij = B·C/A **
(ii) Note that this calculation is the same for a transposed array.
** Replace any future values in the shaded regions with their chain-ladder forecasts.

Step (i)(a): first, show it for the next diagonal.
Chain-ladder ratio = (A+B)/A; previous cumulative = C
⇒ forecast cumulative = (A+B)·C/A
⇒ forecast incremental = (A+B)·C/A − C = B·C/A

Step (i)(b): for later diagonals.
Note that the forecast values already "follow the ratio", so adding them in to A and B leaves B/A unchanged. Also, each new forecast is based on the previous cumulative forecast, so C also contains the earlier incremental forecasts (replace any future values in the shaded regions with their chain-ladder forecasts).
⇒ Every future forecast incremental = B·C/A

One advantage of this fact (aside)
- It is easy in Excel to forecast incrementals directly: a single formula can be pasted into each forecast cell.
- Ratios can be computed from the last forecast row.

Step (ii): plainly B·C/A = C·B/A, so the calculation is the same for the transposed array (B and C merely interchange their roles): p̂_ij = B·C/A in the original array becomes p̂_ij = C·B/A in the transposed one.
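The B·C/A form can be checked against the ordinary chain-ladder calculation for a next-diagonal cell (hypothetical figures; the region sums follow the slide's definitions):

```python
import numpy as np

inc = np.array([[100., 50., 25.],
                [110., 55., np.nan],
                [120., np.nan, np.nan]])

# forecast the next-diagonal cell (i, j) = (1, 2)
i, j = 1, 2
A = inc[:i, :j].sum()   # rectangle above and to the left: 150
B = inc[:i, j].sum()    # column-j incrementals above row i: 25
C = inc[i, :j].sum()    # row i's previous cumulative: 165

# chain-ladder route: ratio (A+B)/A applied to C, minus C
cl_forecast = C * (A + B) / A - C
assert np.isclose(cl_forecast, B * C / A)   # both give 27.5

# and B*C/A = C*B/A trivially, so the transposed array gives the same
```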

What does the result tell us?
We call this property "transpose invariance" (more strictly, "transpose-forecast commutativity").
i) Structure: accident years vs development years. The chain ladder does not differentiate between the accident- and development-year directions: it treats them identically.

Of course we know that development years are quite different from accident years!
[Plots: the raw data, and the data adjusted for trend in the other direction]

What does this result tell us?
ii) It also tells us that there are in fact parameters in both the accident and development directions. (This has been a source of argument, e.g. Mack vs Renshaw & Verrall.)
- We are aware of the parameter corresponding to the column effect (the "B" effect): it is the "ratio".
- But we usually condition on (i.e. ignore) the row effect (the "C" effect). It is a degree of freedom that the model has to fit the data, i.e. a parameter.

What does this result tell us?
⇒ There are parameters in both the accident and development directions: an s × s triangle has 2s − 1 parameters for the mean. (Overparameterisation!)

iii) Cross-classification structure (two-way-ANOVA-like): no account taken of ordering.
In a cross-classification structure you can interchange the row labels (or the column labels) with no impact on the fit, but we know that order matters!
[Plots: one can easily tell which of these has had its labels scrambled]

iii) Cross-classification structure (ctd): in fact there is abundant information in nearby accident years, e.g. in the same development year. If you left out a point, how would you guess what it was? Observations at the same delay are very informative.

iii) Cross-classification structure (ctd): information in different development years. Nearby delays are also informative (smooth trends); one could leave out a whole development year and still guess where it was.

iii) Cross-classification structure (ctd): there is information in nearby accident and development years… we need to use more of that information!

Are there lessons for other ratio methods?
Mack (1993) and Murphy (1994) write a model that includes several ratio methods as weighted regression; the chain ladder uses one particular set of weights. Conditionally on the previous cumulative, a number of other methods can be written as a weighted chain ladder:

Are there lessons for other ratio methods?
Example: the "average development factor". Fix the development year (i.e. hold it constant) at j. Let y_i be the cumulative in accident year i, and let x_i be the previous cumulative. Then
b = (1/n) Σ_i (y_i / x_i) = Σ_i (w_i y_i) / Σ_i (w_i x_i), where w_i = 1/x_i.
Hence the average development factor is a weighted chain ladder.
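A quick numerical confirmation of this identity (randomly generated, hypothetical cumulatives): since w_i·x_i = 1, the weighted sums reduce to (1/n)·Σ(y_i/x_i).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(50., 150., size=6)        # previous cumulatives x_i
y = x * rng.uniform(1.1, 1.6, size=6)     # current cumulatives y_i

b_avg = np.mean(y / x)                    # simple average development factor

w = 1.0 / x                               # weights w_i = 1/x_i
b_wtd = (w * y).sum() / (w * x).sum()     # weighted chain-ladder form

assert np.isclose(b_avg, b_wtd)           # same factor either way
```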

Are there lessons for other ratio methods?
For many methods a weighted version of this result still holds: a (weighted) two-way cross-classification structure. Many of the problems carry over!

Are there lessons for other ratio methods?
- There are still parameters in both directions: 2s − 1 parameters for the mean.

Are there lessons for other ratio methods?
- They still ignore the location information in nearby accident and development years.

s  s triangle: ratio methods use 2s–1 parameters for mean How many parameters needed to describe this:

- The shape of the curve can be described with, say, 2–3 parameters.
- A stable accident-year level can be described with 1. (Many arrays are similarly simple.)
For this triangle, ratio methods use 20 parameters, and waste them: ratios don't predict the next increment for that array.

Effects of overparameterisation
- fitting noise rather than signal
- high parameter uncertainty
- unstable forecasts (a small change in the data gives a large change in the prediction, i.e. noise is projected and amplified into the future)

Conclusions
"Transpose invariance" is an important feature:
- it has serious implications for the chain ladder;
- many of the lessons apply more generally.

Lessons
- Be aware of the structure of loss data: don't ignore what you know.
- Be aware of the specific structure in a triangle: does the model succinctly describe the main features in the data? (diagnostics, model validation, parsimony)