Reserve Uncertainty 1999 CLRS by Roger M. Hayne, FCAS, MAAA Milliman & Robertson, Inc.


Reserves Are Uncertain?
- Reserves are just numbers in a financial statement
- What do we mean by "reserves are uncertain?"
  - The numbers are estimates of future payments
    - Not estimates of the average
    - Not estimates of the mode
    - Not estimates of the median
  - Not really much guidance in the guidelines
- Rodney's presentation will deal with this more

Let's Move Off the Philosophy
- There should be more guidance in the accounting/actuarial literature
- Not clear what number should be booked
- Even less clear if we do not know the distribution of that number
- There may be an argument that the more uncertain the estimate, the greater the "margin"
- Need to know the distribution first

"Traditional" Methods
- Many "traditional" reserve methods are somewhat ad hoc
- The oldest is probably the development factor method
  - Fairly easy to explain
  - Subject of much literature
  - Not originally grounded in theory, though some have tried recently
  - Known to be quite volatile for less mature exposure periods

"Traditional" Methods
- Bornhuetter-Ferguson
  - Overcomes the volatility of the development factor method for immature periods
  - Needs both development factors and an estimate of the final answer (expected losses)
  - No statistical foundation
- Frequency/severity (Berquist-Sherman)
  - Also ad hoc
  - Volatility in the selection of trends and averages

"Traditional" Methods
- Not usually grounded in statistical theory
- Fundamental assumptions not always clearly stated
- Often not amenable to direct estimation of variability
- The "traditional" approach usually uses various methods, with different underlying assumptions, to give the actuary a "sense" of variability

Basic Assumption
- When talking about reserve variability, the primary assumption is: given current knowledge, there is a distribution of possible future payments (possible reserve numbers)
- Keep this in mind whenever answering the question "How uncertain are reserves?"

Some Concepts
- Baby steps first: estimate a distribution
- Sources of uncertainty:
  - Process (purely random)
  - Parameter (distributions are correct but parameters are unknown)
  - Specification/model (distribution or model not exactly correct)
- Keep these in mind whenever looking at methods that purport to quantify reserve uncertainty

Why Is This Important?
- Consider an example
- The "usual" development factor projection method
- Assume:
  - Reserves can be estimated by the development factor method
  - Age-to-age factors are lognormal
  - Age-to-age factors are independent
  - You can estimate the age-to-age parameters using the observed factors

Conclusions
- Use the "customary" parameterization of the lognormal (based on the transformed normal)
- Parameters for the distribution of the age-to-age factors can be estimated using (a short sketch follows below):
  - μ_i = average of the logs of the observed age-to-age factors
  - σ_i² = (sample-corrected) variance of the logs of the observed age-to-age factors
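A minimal sketch of the parameter estimation just described, assuming the observed age-to-age factors for a single development age are available as a list; the function name and the sample factors are illustrative only, not part of the presentation.

```python
import numpy as np

def fit_lognormal_ata(factors):
    """Estimate lognormal parameters for one development age from observed
    age-to-age factors: mu_i is the mean of the log factors, sigma_i^2 is
    the sample-corrected (ddof=1) variance of the log factors."""
    logs = np.log(np.asarray(factors, dtype=float))
    return logs.mean(), logs.var(ddof=1)

# Illustrative 12-to-24 month factors (made-up numbers)
mu_i, sigma2_i = fit_lognormal_ata([1.85, 1.62, 1.74, 1.91, 1.68])
print(mu_i, sigma2_i)
```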

Conclusions
- Given the assumptions, the distributions of the age-to-ultimate factors are also lognormal, with parameters obtained by summing over the remaining development ages:
  - μ = Σ μ_i
  - σ² = Σ σ_i²
- Given amounts to date, one derives a distribution of possible future payments for each exposure year
- Convolute the years to get the distribution of total reserves (see the simulation sketch below)
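The following is a rough simulation sketch of the procedure outlined above, not the presentation's own worksheets. It assumes the per-age lognormal parameters and paid-to-date amounts are already in hand (the numbers shown are invented), sums the parameters to get age-to-ultimate factors, and adds independently simulated years to approximate the convolution.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_reserves(ata_params, paid_to_date, n_sims=10_000):
    """ata_params[k] is a list of (mu_i, sigma2_i) pairs for the development
    ages remaining for exposure year k; paid_to_date[k] is that year's paid
    losses so far.  Returns simulated total reserves across all years,
    treating the years as independent."""
    total = np.zeros(n_sims)
    for params, paid in zip(ata_params, paid_to_date):
        # A product of independent lognormal factors is lognormal with summed parameters
        mu = sum(m for m, _ in params)
        sigma = np.sqrt(sum(s2 for _, s2 in params))
        ldf_to_ult = rng.lognormal(mean=mu, sigma=sigma, size=n_sims)
        total += paid * (ldf_to_ult - 1.0)  # future payments for this exposure year
    return total

# Illustrative inputs: two open exposure years (all numbers are made up)
ata_params = [
    [(0.05, 0.001), (0.02, 0.0004)],                # older year: two ages remain
    [(0.45, 0.02), (0.05, 0.001), (0.02, 0.0004)],  # recent year: three ages remain
]
paid_to_date = [8_200.0, 3_100.0]
reserves = simulate_reserves(ata_params, paid_to_date)
print(np.percentile(reserves, [25, 50, 75]))
```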

Sounds Good -- Huh?
- Relatively straightforward
- Easy to implement
- Gets distributions of future payments
- Job done -- yes?
- Not quite
- Why not?

An Example
- Apply the method to paid and incurred development separately
- Consider the resulting distributions
- What does this say about the distribution of reserves?
- Which is correct?

A “Real Life” Example

What Happened?
- Conclusions follow unavoidably from assumptions
- The conclusions are contradictory
- Thus the assumptions must be wrong
- Independence of factors? Not really (there are ways to include that in the method)
- What else?

What Happened?
- Obviously the two data sets are telling different stories
- What is the range of the reserves?
  - The paid method?
  - The incurred method?
  - Extremes from both?
  - Something else?
- Main problem -- the approach addresses only one method under specific assumptions

What Happened?
- Not process (that is measured by the distributions themselves)
- Is this because of parameter uncertainty?
- No, we can test this statistically (from normal distribution theory)
- If not parameter, what? What else?
- Model/specification uncertainty

Why Talk About This?
- Almost every paper on reserve distributions considers
  - Only one method
  - Applied to one data set
- The only conclusion: the distribution of results from a single method
- Not the distribution of reserves

Discussion
- Some proponents of statistically based methods argue that analysis of residuals is the answer
- This still does not address the fundamental issue: model and specification uncertainty
- At this point there does not appear to be much (if anything) in the literature on methods addressing multiple data sets

Moral of the Story
- Before using a method, understand its underlying assumptions
- Make sure it measures what you want it to
- The definitive work may not have been written yet
- Casualty liabilities are very complex, not readily amenable to simple models

All May Not Be Lost
- Not presenting the definitive answer
- More an approach that may be fruitful
- The approach does not necessarily have the "single model" problems of the others described so far
- Keeps some flavor of "traditional" approaches
- Some theory already developed by the CAS (Committee on Theory of Risk, Rodney Kreps, Chairman)

Collective Risk Model
- Basic collective risk model (sketched below):
  - Randomly select N, the number of claims, from a claim count distribution (often Poisson, but not necessary)
  - Randomly select N individual claims, X_1, X_2, …, X_N
  - Calculate the total loss as T = Σ X_i
- Only necessary to estimate distributions for the number and size of claims
- Can get closed-form expressions for the moments (under suitable assumptions)
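A minimal sketch of the basic collective risk model, assuming Poisson claim counts and lognormal severities purely for illustration (the model itself does not require either choice); all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def collective_risk(lam, sev_mu, sev_sigma, n_sims=10_000):
    """Basic collective risk model: N ~ Poisson(lam), severities lognormal
    (an illustrative choice), T = sum of the N simulated claims."""
    counts = rng.poisson(lam, size=n_sims)
    totals = np.array([rng.lognormal(sev_mu, sev_sigma, size=n).sum() for n in counts])
    return totals

totals = collective_risk(lam=100, sev_mu=8.0, sev_sigma=1.2)
print(totals.mean(), totals.std())
```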

Adding Parameter Uncertainty
- Heckman & Meyers added parameter uncertainty to both the count and severity distributions
- Modified algorithm for counts:
  - Select χ from a Gamma distribution with mean 1 and variance c (the "contagion" parameter)
  - Select the claim count N from a Poisson distribution with mean λχ
  - If c > 0, N is negative binomial

Adding Parameter Uncertainty
- Heckman & Meyers also incorporated a "global" uncertainty parameter
- Modified traditional collective risk model:
  - Select β from a distribution with mean 1 and variance b
  - Select N and X_1, X_2, …, X_N as before
  - Calculate the total as T = β Σ X_i
- Note that β affects all claims uniformly (both modifications are sketched below)
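A sketch of both modifications layered on the simulation above; modeling the global parameter β as Gamma with mean 1 and variance b is an assumption made here for convenience, and all parameter values and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def collective_risk_with_parameter_uncertainty(lam, sev_mu, sev_sigma,
                                                c=0.02, b=0.01, n_sims=10_000):
    """Collective risk model with contagion c and global mixing b (sketch):
    chi ~ Gamma(mean 1, variance c) mixes the Poisson mean, and beta with
    mean 1 and variance b (here Gamma, an assumption) scales every claim."""
    # Gamma with mean 1 and variance c has shape 1/c and scale c
    chi = rng.gamma(shape=1.0 / c, scale=c, size=n_sims)
    counts = rng.poisson(lam * chi)  # negative binomial counts when c > 0
    beta = rng.gamma(shape=1.0 / b, scale=b, size=n_sims)
    totals = np.array([
        beta[k] * rng.lognormal(sev_mu, sev_sigma, size=n).sum()
        for k, n in enumerate(counts)
    ])
    return totals

totals = collective_risk_with_parameter_uncertainty(lam=100, sev_mu=8.0, sev_sigma=1.2)
print(totals.mean(), totals.std())
```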

Why Does This Matter?
- Under suitable assumptions the Heckman & Meyers algorithm gives the following, writing λ = E(N):
  - E(T) = λ E(X)
  - Var(T) = λ(1+b) E(X²) + λ²(b+c+bc) E²(X)
- Notice that if b = c = 0 then
  - Var(T) = λ E(X²)
  - The average T/N will have a decreasing variance as λ = E(N) grows large (law of large numbers)

Why Does This Matter?
- If b ≠ 0 or c ≠ 0 the second term remains
- The variance of the average tends to (b+c+bc) E²(X)
- Not zero
- Otherwise said: no matter how much data you have, you still have uncertainty about the mean
- Key to the alternative approach -- use of the b and c parameters to build in uncertainty (see the derivation below)
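A short derivation of that limit, using the Var(T) expression above and writing the average as T/λ:

$$
\operatorname{Var}\!\left(\frac{T}{\lambda}\right)
= \frac{\operatorname{Var}(T)}{\lambda^{2}}
= \frac{(1+b)\,E(X^{2})}{\lambda} + (b+c+bc)\,E^{2}(X)
\;\longrightarrow\; (b+c+bc)\,E^{2}(X)
\quad\text{as } \lambda \to \infty .
$$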

If It Were That Easy …
- Still need to estimate the distributions
- Even if we have the distributions, we still need to estimate parameters (like estimating reserves)
- Typically estimate parameters for each exposure period
- Problem with potential dependence among years when combining for final reserves

CAS To The Rescue
- The CAS Committee on Theory of Risk commissioned research into
  - Aggregate distributions without independence assumptions
  - Aging of distributions over the life of an exposure year
- The paper on the first is finished, the second nearly so
- Will help in reserve variability
- Sorry, we do not have all the answers yet