
1 Loss Triangle Philosophy Leigh J. Halliwell, FCAS, MAAA Consulting Actuary CARe Seminar Boston, MA May 19, 2008

2 Models Versus Methods
“We will debate whether we should be giving guidance to actuaries about picking development factors, or whether we should be counseling them to drop chain-ladder methods in favor of some other technique.” (from the session description)
“Inevitable failures [of actuarial judgment] will only … discredit the profession. Actuaries must not presume to judge what they cannot scientifically model.” (Halliwell, 2007, footnote 5)
One reasonable model with good diagnostics is better than (i.e., more informative than) any number of methods.

3 Linear (Regression) Models
“Regression toward the mean” was coined by Sir Francis Galton (1822–1911).
The real problem: finding the Best Linear Unbiased Estimator (BLUE) of vector y2, with vector y1 observed.
y = Xβ + e. X is the design (regressor) matrix. β is unknown; e is unobserved, but (the shape of) its variance is known.
For the proof of what follows see Halliwell [1997].

4 The Formulation
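The formulation itself appears in the original slides only as an exhibit and did not survive into this transcript. A reconstruction, assuming it follows the partitioned linear model of Halliwell [1997]:

$$
\begin{bmatrix} y_1 \\ y_2 \end{bmatrix}
=
\begin{bmatrix} X_1 \\ X_2 \end{bmatrix}\beta
+
\begin{bmatrix} e_1 \\ e_2 \end{bmatrix},
\qquad
\mathrm{Var}\begin{bmatrix} e_1 \\ e_2 \end{bmatrix}
= \sigma^2
\begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix},
$$

with $y_1$ observed, $y_2$ to be predicted, $\beta$ unknown, and only the shape $\Sigma$ of the error variance known.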

5 The BLUE Solution
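The solution equations are likewise missing from the transcript. The standard result (proved in Halliwell [1997]), which is presumably what the slide shows:

$$
\hat{\beta} = \left(X_1' \Sigma_{11}^{-1} X_1\right)^{-1} X_1' \Sigma_{11}^{-1} y_1,
\qquad
\hat{y}_2 = X_2 \hat{\beta} + \Sigma_{21} \Sigma_{11}^{-1} \left(y_1 - X_1 \hat{\beta}\right).
$$

The first term of $\hat{y}_2$ is the fitted expectation under the GLS estimate $\hat{\beta}$; the second adjusts the prediction for the correlation of the unobserved errors with the observed residuals.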

6 Remarks on the Linear Model
Actuaries need to learn the matrix algebra. Excel is OK, but statistical software is desirable (see the sketch below).
X1 of the formulation is of full column rank; Σ11 is non-singular.
Linearity Theorem: the model is versatile. My papers (cf. References) describe complicated versions.
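As a concrete illustration of that matrix algebra, a minimal NumPy sketch of the BLUE solution above; the function name and the toy usage are mine, not the presenter's:

```python
import numpy as np

def blue_predict(y1, X1, X2, S11, S21):
    """BLUE of the unobserved vector y2 given the observed y1.

    y1  : (n1,)    observed responses
    X1  : (n1, k)  design matrix of the observed part (full column rank)
    X2  : (n2, k)  design matrix of the part to be predicted
    S11 : (n1, n1) variance shape of the observed errors (non-singular)
    S21 : (n2, n1) covariance shape of unobserved with observed errors
    """
    S11_inv = np.linalg.inv(S11)
    XtS = X1.T @ S11_inv
    # GLS estimate of beta from the observed rows
    beta = np.linalg.solve(XtS @ X1, XtS @ y1)
    # Fitted expectation plus the correlated-error adjustment
    return X2 @ beta + S21 @ S11_inv @ (y1 - X1 @ beta)

# Toy check: an intercept-only model with independent errors reduces to the sample mean
X1 = np.ones((3, 1)); X2 = np.ones((2, 1))
y1 = np.array([1.0, 2.0, 3.0])
print(blue_predict(y1, X1, X2, np.eye(3), np.zeros((2, 3))))  # [2. 2.]
```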

7 Regression toward the Mean
To intercept or not to intercept?

8 What to do?
Ignore it.
Add an intercept. Barnett and Zehnwirth [1998, 10-13] notice that the significance of the slope suffers (see the sketch below). The lagged loss may not be a good predictor. The intercept should be proportional to exposure.
Explain the torsion. Does that lead to a better model?
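A hedged illustration of the intercept question (the loss amounts are invented for this note, not taken from the slides): regress age-24 losses on age-12 losses with and without an intercept, and compare the slope's t-statistic.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical cumulative losses at 12 and 24 months for six accident years
age12 = np.array([1100., 1250., 980., 1400., 1180., 1320.])
age24 = np.array([1650., 1840., 1500., 2020., 1760., 1930.])

# Chain-ladder style: regression through the origin (slope = development factor)
through_origin = sm.OLS(age24, age12).fit()

# The same data with an intercept added
with_intercept = sm.OLS(age24, sm.add_constant(age12)).fit()

print(through_origin.params, through_origin.tvalues)  # factor and its t-statistic
print(with_intercept.params, with_intercept.tvalues)  # intercept, slope, and their t-statistics
```

With an intercept in the design matrix the slope's t-statistic typically drops, which is the point about the lagged loss as a predictor.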

9 Galton’s Explanation
Children’s heights regress toward the mean. Tall fathers tend to have sons shorter than themselves; short fathers tend to have sons taller than themselves.
Height = “genetic height” + environmental error.
A son inherits his father’s genetic height, so son’s height = father’s genetic height + error.
A father’s height proxies for his genetic height. A tall father probably is less tall genetically; a short father probably is less short genetically.
Excellent discussion in Bulmer [1979]. Cf. also sportsci.org/resource/stats under “Regression to Mean.”

10 The Lesson for Actuaries
Loss is a function of exposure. Losses in the design matrix, i.e., stochastic regressors (SR), are probably just proxies for exposures.
Zero loss proxies zero exposure. The more a loss varies, the poorer it proxies. The torsion of the regression line is the clue.
Reserving actuaries tend to ignore exposures; some are even glad not to have to “bother” with them! The SR may not even be significant.

11 Example in Excel
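The Excel exhibit itself is not reproduced in the transcript. As a stand-in, in Python rather than Excel and with invented numbers, here is one way the point of slide 10 can be made: once exposure enters the design matrix, the lagged loss may add little.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical earned exposures and cumulative losses for six accident years
exposure = np.array([5000., 5600., 4400., 6300., 5300., 5900.])
age12    = np.array([1100., 1250.,  980., 1400., 1180., 1320.])
age24    = np.array([1650., 1840., 1500., 2020., 1760., 1930.])

# No intercept: zero exposure should imply zero loss
X = np.column_stack([exposure, age12])
fit = sm.OLS(age24, X).fit()
print(fit.params)    # coefficients on exposure and on the lagged loss
print(fit.tvalues)   # the lagged loss may lose significance once exposure is present
```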

12 References
Barnett, Glen, and Ben Zehnwirth, “Best Estimates for Reserves,” PCAS LXXXVII (2000).
Bulmer, M. G., Principles of Statistics, Dover, 1979.
Halliwell, Leigh J., “Loss Prediction by Generalized Least Squares,” PCAS LXXXIII (1996).
Halliwell, Leigh J., “Statistical and Financial Aspects of Self-Insurance Funding,” Alternative Markets / Self Insurance, 1996.
Halliwell, Leigh J., “Conjoint Prediction of Paid and Incurred Losses,” Summer 1997 Forum.
Halliwell, Leigh J., “Chain-Ladder Bias: Its Reason and Meaning,” Variance, Fall 2007.
Judge, George G., et al., Introduction to the Theory and Practice of Econometrics, Second Edition, Wiley, 1988.