Credibility
Session SPE-42, CAS Seminar on Ratemaking
Tampa, Florida, March 2002
Purpose
Today's session is designed to encompass:
- Credibility in the context of ratemaking
- Classical and Bühlmann models
- Review of variables affecting credibility
- Formulas
- Practical techniques for applying credibility
- Methods for increasing credibility
- Complements of credibility
Outline
- Background: definition, rationale, history
- Methods, examples, and considerations:
  - Limited fluctuation methods
  - Greatest accuracy methods
- Bibliography
Background
Background: Definition
Common vernacular (Webster):
- "Credibility": the state or quality of being credible
- "Credible": believable
- So, "the quality of being believable"
- Implies you are either credible or you are not
In actuarial circles:
- Credibility is "a measure of the credence that ... should be attached to a particular body of experience" -- L.H. Longley-Cook
- Refers to the degree of believability; a relative concept
Background: Rationale
Why do we need "credibility" anyway?
- P&C insurance costs, namely losses, are inherently stochastic
- Observation of a result (data) yields only an estimate of the "truth"
- How much can we believe our data? What else can we believe?
- Consider a simple example...
Background: Simple Example
Car example: one car to a male, one to a female (the figures below imply a room of 100 people: 50 men and 50 women).
- Smash the male's car ($58,000 of damage); scratch the female's car twice ($1,000 per scratch)
- What's the average loss per person in the room? ($60,000 / 100 = $600)
- What is the right premium to charge next year, ignoring expenses? ($600?)
- What's the correct premium if I tell you that last year you all paid $1,000? (somewhere between $600 and $1,000?)
- What if, in order to be competitive, we have to rate the policies using gender, age, make and model...? (men = $58,000/50 = $1,160; women = $2,000/50 = $40)
- In general, how can we blend our observed data with our a priori beliefs to produce a forecast for next year?
Background: History
- The CAS was founded in 1914, in part to help make rates for a new line of insurance: workers compensation.
  - Coverage had previously been available in Europe; only employer's liability had been available in the U.S.
  - As data became available, how would you revise the initial rates? As such, credibility was uniquely American (this is no longer true).
- Mowbray: how many trials/results do I have to observe before I can believe my data?
  - Rooted in classical statistics and statistical quality control: "How Extensive a Payroll Exposure Is Necessary to Give a Dependable Pure Premium?"
  - Used a binomial claim count model; this approach is referred to as "limited fluctuation credibility."
- Albert Whitney: focus was on combining existing estimates and new data to derive new estimates, called "greatest accuracy credibility."
  - New Rate = Credibility*Observed Data + (1 - Credibility)*Old Rate
- Perryman (1932): how credible is my data if I have less than the amount required for full credibility?
  - Calculated partial credibilities using the "square root rule": Z = (n/N)^(1/2)
- Bayesian views were resurrected in the '40s, '50s, and '60s.
  - The theory treats prior knowledge and data as views that need to be blended; less absolute than classical statistics.
  - Key figures: Arthur Bailey (1950), Dropkin (reviewed by Philbrick), Robert Bailey (son of Arthur), LeRoy Simon (1962), Mayerson (1964), Bühlmann (1967), Bühlmann and Straub (1970).
- Longley-Cook wrote a summary in 1962.
Background: Methods
- Limited fluctuation ("classical credibility"): limit the effect that random fluctuations in the data can have on an estimate
- Greatest accuracy ("least squares credibility," "empirical Bayesian credibility"; e.g., Bühlmann and Bühlmann-Straub credibility): make estimation errors as small as possible
You won't see the word "frequentist" very often; it was coined by Arthur Bailey and is synonymous with classical statistics. This class of credibility does not require Bayesian statistics, nor does it require prior information specified in distributional terms. Frequentist methods can be split into two categories:
- Limited fluctuation, often called "classical credibility." Classical credibility is a bad term, principally because it means something totally different in Europe. It has come to mean the procedure of establishing a full credibility standard first, then setting partial credibility via the square root rule.
- Greatest accuracy credibility. While not Bayesian per se, it is close to Bayesian analysis; in fact, it can be derived from Bayesian methods.
I'll go into each of these further, and in the process discuss examples and ways to enhance credibility.
Limited Fluctuation Credibility
Limited Fluctuation Credibility: Description
"A dependable [estimate] is one for which the probability is high, that it does not differ from the [truth] by more than an arbitrary limit." -- Mowbray
How much data is needed for an estimate so that the credibility, Z, reflects a probability, P, of being within a tolerance, k%, of the true value?
Limited Fluctuation Credibility: Derivation
New Estimate = (Credibility)(Data) + (1 - Credibility)(Previous Estimate)
Sorry, this gets a little mathematical... if you're not a math geek, try not to get bogged down in the details.
We start with data, T, and a previous estimate, E1 (recall the car example), and weight the two together with credibility:
  E2 = Z*T + (1-Z)*E1
Add and subtract Z*E[T], then regroup:
  E2 = Z*T + Z*E[T] - Z*E[T] + (1-Z)*E1
     = (1-Z)*E1 + Z*E[T] + Z*(T - E[T])
       [stability]  [truth]  [random error]
Our new estimate decomposes into components representing the original estimate (stability), the truth, and random error. The problem is that truth and random error cannot be observed separately, so both get a weight of Z. We want the highest possible Z, so that truth is emphasized, as long as the noise can be kept within acceptable bounds. This leads to the mathematical derivation that follows.
Limited Fluctuation Credibility: Derivation (continued)
We wish to find the level of credibility, Z,
- such that the error, T - E[T],
- is less than or equal to k% of the truth, E[T],
- at least P% of the time.
Mathematically, this is expressed as:
  Pr{ Z*(T - E[T]) <= k*E[T] } = P
Limited Fluctuation Credibility: Mathematical formula for Z
We wish to place bounds on the noise term so that its absolute value is less than a selected percentage, k, of the true mean, E[T], at least P% of the time:
  Pr{ Z*(T - E[T]) <= k*E[T] } = P
Rearranging:
  Pr{ T <= E[T] + k*E[T]/Z } = P
The right-hand side, E[T] + k*E[T]/Z, looks like a formula for a percentile. Under a normal approximation, the P-th percentile is E[T] + z_p*Var[T]^(1/2). In standard practice (which isn't very good, as we'll see), we set the two equal:
  k*E[T]/Z = z_p*Var[T]^(1/2)
and solve for Z:
  Z = k*E[T] / (z_p*Var[T]^(1/2))
Limited Fluctuation Credibility: Mathematical formula for Z (continued)
If we assume
- that we are dealing with an insurance process that has Poisson frequency, and
- that severity is constant or severity doesn't matter,
then E[T] = number of claims (N), and E[T] = Var[T], so:
  Z = k*E[T] / (z_p*Var[T]^(1/2))  becomes  Z = k*E[T]^(1/2) / z_p = k*N^(1/2) / z_p
Solving for N, the number of claims for full credibility (i.e., Z = 1):
  N = (z_p / k)^2
This makes a really convenient formula, so it's used by everybody seeking a lazy solution. But it's not a good solution. Why not?
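As a sanity check (my own sketch, not from the session), a quick Monte Carlo simulation confirms that N = (z_p/k)^2 = 1,082 claims gives roughly a 90% chance of the observed claim count falling within 5% of its mean:

```python
# Sketch: Monte Carlo check that N = (z_p/k)^2 = 1,082 Poisson claims gives
# ~90% probability of the observed count falling within +/-5% of its mean.
import numpy as np

rng = np.random.default_rng(42)
n_full, k = 1082, 0.05
counts = rng.poisson(lam=n_full, size=100_000)
within = np.abs(counts - n_full) <= k * n_full
print(f"P(within +/-{k:.0%} of mean) ~= {within.mean():.3f}")  # ~0.90, as designed
```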
Limited Fluctuation Credibility: Standards for full credibility
Claim counts required for full credibility based on the previous derivation, N = (z_p/k)^2 (for example, 1,082 claims at P = 90% and k = 5%; more values in the sketch below).
Remember, this derivation, and therefore this table, is only appropriate if:
- You are trying to estimate frequency
- Frequency is defined by a Poisson process
- There are enough claims to allow for a normal approximation to the Poisson process
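A minimal sketch regenerating the table of standards, assuming SciPy is available; the P and k grid below is the commonly tabulated one, not recovered from the slide:

```python
# Sketch: full-credibility claim-count standards, N = (z_p / k)^2,
# under the slide's assumptions (Poisson frequency, normal approximation).
from scipy.stats import norm

def full_credibility_standard(p: float, k: float) -> float:
    """Claims needed so the observed frequency is within +/-k of the mean with probability p."""
    z_p = norm.ppf((1 + p) / 2)  # two-sided normal percentile
    return (z_p / k) ** 2

for p in (0.90, 0.95, 0.99):
    for k in (0.100, 0.050, 0.025):
        print(f"P={p:.0%}, k={k:.1%}: N = {full_credibility_standard(p, k):,.0f}")
# P=90%, k=5% gives N = 1,082 -- the standard used later in this session.
```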
Limited Fluctuation Credibility: Mathematical formula for Z II
Relaxing the assumption that severity doesn't matter, let T = aggregate losses = (frequency)(severity). Then
  E[T] = E[N]*E[S]
  Var[T] = E[N]*Var[S] + E[S]^2*Var[N]
Plugging these values into the formula Z = k*E[T] / (z_p*Var[T]^(1/2)) and solving for N at Z = 1 (I'll leave the math to you unless you're interested):
  Z = (k/z_p) * E[N]*E[S] / (E[N]*Var[S] + E[S]^2*Var[N])^(1/2)
Square and set Z = 1:
  1 = (k/z_p)^2 * E[N]^2*E[S]^2 / (E[N]*Var[S] + E[S]^2*Var[N])
so
  E[N]*Var[S] + E[S]^2*Var[N] = (k/z_p)^2 * E[N]^2*E[S]^2
Solving for E[N], the claim count for full credibility:
  N = (z_p/k)^2 * { Var[N]/E[N] + Var[S]/E[S]^2 }
If frequency is Poisson, Var[N]/E[N] = 1, and this becomes
  N = (z_p/k)^2 * [ 1 + (s/m)^2 ]
where s/m is the coefficient of variation (CV) of the severity distribution. The (z_p/k)^2 term is the same as with frequency alone, but it is now augmented by the moments of the severity distribution. In my experience, in personal auto this adjustment increases N noticeably; in excess lines it can be an order of magnitude.
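A sketch of the severity-adjusted standard; the CV values are illustrative assumptions, not figures from the presentation:

```python
# Sketch: full-credibility standard for aggregate losses with Poisson frequency,
# N = (z_p / k)^2 * (1 + cv^2), where cv is the severity coefficient of variation.
from scipy.stats import norm

def full_credibility_aggregate(p: float, k: float, severity_cv: float) -> float:
    z_p = norm.ppf((1 + p) / 2)  # normal percentile for two-sided probability p
    return (z_p / k) ** 2 * (1 + severity_cv ** 2)

# Illustrative CVs: 0.30 raises the (P=90%, k=5%) standard from 1,082 to ~1,180
# claims, while a CV of 2.0 multiplies it by 5 (and 5 x 1,082 = 5,410, the
# second standard appearing in the example later in the session).
print(f"{full_credibility_aggregate(0.90, 0.05, 0.30):,.0f}")
```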
Limited Fluctuation Credibility: Partial credibility
Given a full credibility standard, N_full, what is the partial credibility of a claim count N < N_full? The square root rule says:
  Z = (N / N_full)^(1/2)
For example, let N_full = 1,082. If we have 500 claims:
  Z = (500 / 1,082)^(1/2) = 68%
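A one-function sketch of the square root rule; the cap at Z = 1 is implicit in the slides:

```python
# Sketch: square-root rule for partial credibility, Z = min(1, sqrt(N / N_full)).
def partial_credibility(n_claims: float, n_full: float) -> float:
    return min(1.0, (n_claims / n_full) ** 0.5)

print(f"{partial_credibility(500, 1082):.0%}")  # 68%, matching the slide
```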
Limited Fluctuation Credibility: Partial credibility (continued)
(Chart: full credibility standards.)
Limited Fluctuation Credibility: Increasing credibility
Per the formula
  Z = (N / N_full)^(1/2) = [N / (z_p/k)^2]^(1/2) = k*N^(1/2) / z_p
credibility, Z, can be increased by:
- Increasing N: get more data
- Increasing k: accept a greater margin of error
- Decreasing z_p: concede to a smaller P, i.e., be less certain
Limited Fluctuation Credibility: Complement of credibility
Once partial credibility has been established, the complement of credibility, 1 - Z, must be applied to something else. For example:

If the data analyzed is...               A good complement is...
Pure premium for a class                 Pure premium for all classes
Loss ratio for an individual risk        Loss ratio for entire class
Indicated rate change for a territory    Indicated rate change for entire state
Indicated rate change for entire state   Trend in loss ratio
Limited Fluctuation Credibility: Weaknesses
The strength of limited fluctuation credibility is its simplicity, and therefore its general acceptance and use. But it has weaknesses:
- Establishing a full credibility standard requires arbitrary assumptions regarding P and k.
- Typical use of the formula based on the Poisson model is inappropriate for most applications.
- The partial credibility formula, the square root rule, is only approximate.
- It treats credibility as an intrinsic property of the data.
- The complement of credibility is highly judgmental.
Limited Fluctuation Credibility: Example
Calculating expected loss ratios as part of an auto rate review for a given state. Assume:
- 2,000 cars, $2 million in premium
- frequency of about 1/3, i.e., about 600 claims per year
- [true mean loss ratio = 75%, with standard deviation of 8%]

Data by year, claims: 535, 616, 634, 615, 686 (the per-year loss ratios did not survive in this transcript). Summarized:

Experience    Loss Ratio    Claims
3 year        81%           1,935
5 year        77%           3,086

Credibility and indications under two full credibility standards, weighting against the 75% expected loss ratio:

              Z at 1,082    Z at 5,410    Weighted Loss Ratio    Indicated Rate Change
3 year        100%          60%           78.6%                  4.8%
5 year        100%          75%           76.5%                  2.0%

E.g., 81%(0.60) + 75%(1 - 0.60) = 78.6%, and 78.6%/75% - 1 = 4.8%; likewise 76.5%/75% - 1 = 2.0%. (The weighted figures use the 5,410 standard.)

Credibility could also be applied to the percentage change, e.g., assign the 4.8% change a credibility of 0.60 and assign the complement (0.40) to, say, the expected loss cost increase. Note that the credibility doesn't depend on the complement at all; I could have applied the residual weight to my weekly grocery bill.
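A sketch reproducing the slide's arithmetic at the N_full = 5,410 standard; the prior loss ratio and claim counts are taken from the slide:

```python
# Sketch: credibility-weighted loss ratios and indicated rate changes,
# reproducing the example's figures at the N_full = 5,410 standard.
prior_lr = 0.75  # a priori expected loss ratio from the slide
n_full = 5410

for label, lr, claims in (("3 year", 0.81, 1935), ("5 year", 0.77, 3086)):
    z = min(1.0, (claims / n_full) ** 0.5)   # square root rule
    weighted = z * lr + (1 - z) * prior_lr   # credibility-weighted loss ratio
    change = weighted / prior_lr - 1         # indicated rate change
    print(f"{label}: Z={z:.1%}, weighted LR={weighted:.1%}, change={change:+.1%}")

# 3 year: Z=59.8% (slide rounds to 60%), weighted LR=78.6%, change=+4.8%
# 5 year: Z=75.5% (slide rounds to 75%), weighted LR=76.5%, change=+2.0%
```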
Greatest Accuracy Credibility
Greatest Accuracy Credibility: Derivation
Find the credibility weight, Z, that minimizes the sum of squared errors about the truth. Z takes the form
  Z = n / (n + k)
where k = s^2 / t^2, and:
- s^2 = the average variance of the territories over time, called the expected value of process variance (EVPV)
- t^2 = the variance across the territory means, called the variance of hypothetical means (VHM)
s^2 is analogous to a "within sum of squares" from ANOVA; t^2 is the "between sum of squares."
The greatest accuracy or least squares credibility result is more intuitively appealing:
- It is a relative concept
- It is based on the relative variances, or volatility, of the data
- There is no such thing as full credibility
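A minimal Bühlmann sketch, assuming balanced data (each risk observed for the same number of periods) and the textbook unbiased EVPV/VHM estimators; the loss data below are invented for illustration, not taken from the session:

```python
# Sketch: Buhlmann credibility, Z = n / (n + k) with k = EVPV / VHM.
import numpy as np

def buhlmann_z(data: np.ndarray) -> float:
    """data[i, j] = observation for risk i in period j (balanced design)."""
    _, n_periods = data.shape
    risk_means = data.mean(axis=1)
    evpv = data.var(axis=1, ddof=1).mean()           # s^2: average within-risk variance
    vhm = risk_means.var(ddof=1) - evpv / n_periods  # t^2: between-risk variance, noise-corrected
    k = evpv / max(vhm, 1e-12)                       # guard against a non-positive VHM estimate
    return n_periods / (n_periods + k)

rng = np.random.default_rng(0)
data = rng.normal(loc=[[100.0], [120.0], [90.0], [150.0], [110.0]], scale=15.0, size=(5, 8))
z = buhlmann_z(data)
# Estimate for each risk: Z * own mean + (1 - Z) * grand mean
print(f"Z = {z:.2f}")
print("estimates:", np.round(z * data.mean(axis=1) + (1 - z) * data.mean(), 1))
```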
Greatest Accuracy Credibility: Derivation (continued)
(Figure: two class distributions, Class 1 and Class 2, illustrating EVPV as the spread within each class and VHM as the spread between the class means.)
Greatest Accuracy Credibility: Partial credibility
(Chart: the credibility curve Z = n/(n + k) for k = 500; Z rises with volume but never reaches 1.)
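The chart is easy to regenerate; a few points on the k = 500 curve:

```python
# Sketch: points on the greatest-accuracy credibility curve Z = n/(n + k), k = 500.
k = 500
for n in (100, 500, 1_000, 5_000, 10_000):
    print(f"n={n:>6,}: Z = {n / (n + k):.2f}")
# Z hits 0.50 at n = k and only approaches 1.00 asymptotically:
# there is no finite full credibility standard.
```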
Greatest Accuracy Credibility: Increasing credibility
Per the formula
  Z = n / (n + s^2/t^2)
credibility, Z, can be increased by:
- Increasing n: get more data
- Decreasing s^2: less variance within classes, e.g., refine data categories
- Increasing t^2: more variance between classes
Greatest Accuracy Credibility: Illustration
Steve Philbrick's target shooting example, from his 1981 paper... Imagine that four people (A, B, C, D) are shooting at targets. We can't see the targets, and we don't know who is who; all we are given is the shot pattern on the wall. One person, drawn at random and unknown to you, is going to shoot again. Where would you predict the next shot to land? After a shot, S, where do you predict the next shot will be? A true Bayesian statistician would be concerned with re-calculating the probability that A, B, C, or D fired the shot, and then re-calculating the estimate, E, based on the new information. We are more concerned about the next shot. Note that we no longer have an issue about the complement of credibility: the construct of the problem is such that the complement is the grand mean of the data.
Greatest Accuracy Credibility: Illustration (continued)
Which data exhibit more credibility? Here is the same illustration with different data points. Two things have changed: the groups A through D are each much more dispersed (more variance within), and they overlap (less variance between).
Greatest Accuracy Credibility: Illustration (continued)
I like to think of this target shooting example along a number line; say, for example, that the dots are class loss costs per exposure.
- Higher credibility: less variance within, more variance between
- Lower credibility: more variance within, less variance between
Conclusion
Credibility: Conclusion
- Actuarial credibility is a relative measure of the believability of the data used in an analysis
- The underlying math can be complex, but the concepts are intuitive
- Credibility of the data increases with volume and uniqueness, and decreases with the data's volatility
- Credibility weighting of data increases the stability of estimates and improves accuracy
Bibliography
- Dean, C. Gary. "An Introduction to Credibility."
- Herzog, Thomas. Introduction to Credibility Theory.
- Longley-Cook, L.H. "An Introduction to Credibility Theory," PCAS, 1962.
- Mahler, Howard, and C. Gary Dean. "Credibility," Chapter 8 of Foundations of Casualty Actuarial Science.
- Mayerson, Jones, and Bowers. "On the Credibility of the Pure Premium," PCAS, LV.
- Philbrick, Steve. "An Examination of Credibility Concepts," PCAS, 1981.