DISCUSSION OF "MINIMUM DISTANCE ESTIMATION OF LOSS DISTRIBUTIONS" BY STUART A. KLUGMAN AND A. RAHULJI PARSA
CLIVE L. KEATINGE
KLUGMAN AND PARSA’S BASIC ASSERTION
Maximum likelihood estimation is asymptotically optimal in the sense that parameter estimates have minimum asymptotic variance, BUT Minimum distance (weighted least squares) estimation can be tailored to reflect the goals of the analysis (e.g., obtaining a close fit in the tail by skewing the weights toward the tail).
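As a minimal sketch of what minimum distance (weighted least squares) estimation looks like in practice, assuming grouped data with empirical CDF values at interval boundaries; the Pareto parameterization, starting values, and weights below are illustrative, not Klugman and Parsa's actual setup:

    import numpy as np
    from scipy.optimize import minimize

    def pareto_cdf(x, alpha, theta):
        # Two-parameter Pareto (Lomax): F(x) = 1 - (theta / (theta + x))**alpha
        return 1.0 - (theta / (theta + x)) ** alpha

    def min_distance_pareto(boundaries, emp_cdf, weights):
        # Minimize sum_j w_j * (F_n(c_j) - F(c_j; alpha, theta))**2
        def objective(log_params):
            alpha, theta = np.exp(log_params)  # log scale keeps parameters positive
            resid = emp_cdf - pareto_cdf(boundaries, alpha, theta)
            return np.sum(weights * resid ** 2)
        result = minimize(objective, x0=np.log([1.5, 10000.0]), method="Nelder-Mead")
        return np.exp(result.x)

    # Illustrative inputs: heavier weights on the upper boundaries skew the fit
    # toward the tail, which is the tailoring the assertion above describes.
    boundaries = np.array([1000.0, 5000.0, 25000.0, 100000.0, 500000.0])
    emp_cdf = np.array([0.40, 0.70, 0.90, 0.98, 0.999])
    weights = np.array([1.0, 1.0, 1.0, 5.0, 10.0])
    alpha_hat, theta_hat = min_distance_pareto(boundaries, emp_cdf, weights)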
KEATINGE'S RESPONSE
Minimum distance estimation is a clumsy remedy for a model that is not flexible enough.
ALTERNATIVES:
Fit a parametric distribution only to the upper section of the data and use the empirical distribution below that.
Use the semiparametric mixed exponential distribution.
Minimum distance estimation can proceed using the cumulative distribution function or the limited expected value function. With the cumulative distribution function, if one selects weights to minimize the asymptotic variance, one ends up with parameter estimates very close to or identical to the grouped maximum likelihood estimates.
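For contrast, a sketch of the grouped maximum likelihood estimator that the optimally weighted CDF version approximates; the multinomial log likelihood depends only on the number of losses falling in each interval. Again the Pareto form and starting values are illustrative:

    import numpy as np
    from scipy.optimize import minimize

    def grouped_mle_pareto(boundaries, counts):
        # counts[j] = number of losses in interval j; len(counts) = len(boundaries) + 1,
        # with the last interval running from the top boundary to infinity.
        def neg_loglik(log_params):
            alpha, theta = np.exp(log_params)
            cdf = 1.0 - (theta / (theta + np.asarray(boundaries))) ** alpha
            cdf = np.concatenate(([0.0], cdf, [1.0]))  # F(0) = 0, F(inf) = 1
            probs = np.diff(cdf)                       # interval probabilities
            return -np.sum(np.asarray(counts) * np.log(probs))
        result = minimize(neg_loglik, x0=np.log([1.5, 10000.0]), method="Nelder-Mead")
        return np.exp(result.x)

With weights chosen to minimize asymptotic variance, the CDF-based minimum distance objective and this grouped likelihood lead to essentially the same estimates, which is the point above.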
Example 1: 6,656 general liability claims fit to a Pareto distribution
[Maximum likelihood asymptotic covariance matrix]
[Minimum distance (limited expected values) asymptotic covariance matrix, using Klugman and Parsa's weights]
Table 1 shows the weights that were used compared with optimized weights.
Note that the overweighting in the tail results in substantially higher asymptotic variances.
Table 2 shows fits to Pareto and mixed exponential distributions.
Most modelers would probably prefer the minimum distance Pareto to the maximum likelihood Pareto, because it provides a much closer fit in the tail at a modest cost in the fit low in the distribution. This would be an implicit acknowledgement that the assumption that the data comes from a Pareto distribution is not appropriate. Otherwise, one would prefer the estimator with the smaller asymptotic variances.
The mixed exponential distributions fit very well over the entire range of the data. The minimum distance estimator provides a closer fit because it uses empirical limited expected values directly, whereas the maximum likelihood estimator uses only the number of losses that fall in each interval.
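To make that last distinction concrete, here is a sketch of the two ingredients for the Pareto case, using the standard Pareto limited expected value formula (valid for alpha != 1); the losses array passed to empirical_lev would be the analyst's own data:

    import numpy as np

    def empirical_lev(losses, d):
        # E_n[X ^ d]: average of each observed loss capped at the limit d
        return np.mean(np.minimum(losses, d))

    def pareto_lev(d, alpha, theta):
        # E[X ^ d] = theta / (alpha - 1) * (1 - (theta / (theta + d))**(alpha - 1))
        return theta / (alpha - 1.0) * (1.0 - (theta / (theta + d)) ** (alpha - 1.0))

    # An LEV-based minimum distance fit minimizes
    #   sum_j w_j * (empirical_lev(losses, d_j) - pareto_lev(d_j, alpha, theta))**2
    # over alpha and theta, matching capped averages rather than interval counts.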
Example 2: 463 medical malpractice claim report lags, truncated from above, fit to a Burr distribution
[Maximum likelihood asymptotic covariance matrix]
[Minimum distance (cumulative distribution function) asymptotic covariance matrix, using Klugman and Parsa's weights]
Table 3 shows the weights that were used compared with optimized weights.
Table 4 shows fits to Burr and Weibull distributions.
If one believes a Burr distribution is appropriate, one should prefer the maximum likelihood or minimum chi-square estimates, since they have smaller asymptotic variances. None of the distributions provides a particularly good fit very low in the distribution.
If one does not believe that a Burr distribution is appropriate over the entire range of the data, one could fit that distribution only above a certain point and use an empirical distribution below that (see the sketch below).
The mixed exponential distribution always has a mode of zero, and since the data clearly show a mode significantly greater than zero, it would not fit well over the entire range of the data. However, one could fit the mixed exponential to the section of the distribution to the right of the mode.
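A minimal sketch of the splicing idea, keeping the empirical distribution below a threshold t and grafting a parametric tail above it; excess_cdf is a hypothetical argument standing for any fitted conditional model of the excesses over t (a Burr, a mixed exponential, etc.):

    import numpy as np

    def spliced_cdf(x, losses, t, excess_cdf):
        # Empirical CDF up to the splice point t, parametric tail beyond it.
        losses = np.asarray(losses)
        if x <= t:
            return np.mean(losses <= x)                   # empirical portion
        p_t = np.mean(losses <= t)                        # mass at the splice point
        return p_t + (1.0 - p_t) * excess_cdf(x - t)      # rescaled tail model

The spliced CDF is continuous at t as long as excess_cdf(0) = 0, so the two pieces join without a jump.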
95% confidence intervals for the number of claims that will be reported after Lag 168:
Burr MLE:        72 +/- 57
Burr MinDist:    59 +/- 61
Burr MinChiSq:  102 +/- 89
Weibull MLE:      4 +/- 3
Confidence intervals depend on the assumption that a particular distribution is appropriate over the entire range of the distribution, including the portion for which we do not have data. Extrapolation is likely to lead to a very unreliable estimate.
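For context, a sketch of where such point estimates come from, assuming a Burr CDF of the form F(x) = 1 - (1 + (x/theta)**gamma)**(-alpha): with claims observed only through Lag 168 (the truncation point), a fitted model implies that the observed count represents the fraction F(168) of all claims. The parameter values in any usage would come from the fits above; the delta-method variances behind the +/- figures are not reproduced here:

    import numpy as np

    def burr_cdf(x, alpha, gamma, theta):
        # Burr: F(x) = 1 - (1 + (x / theta)**gamma)**(-alpha)
        return 1.0 - (1.0 + (x / theta) ** gamma) ** (-alpha)

    def claims_after_lag(n_observed, lag, alpha, gamma, theta):
        # Data truncated from above at `lag`: the observed claims represent the
        # fraction F(lag) of all claims, so n_observed * S(lag) / F(lag)
        # of them remain to be reported.
        F = burr_cdf(lag, alpha, gamma, theta)
        return n_observed * (1.0 - F) / F

    # e.g. claims_after_lag(463, 168, alpha_hat, gamma_hat, theta_hat)
    # with alpha_hat, gamma_hat, theta_hat taken from a fitted Burr model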
The main purported advantage of minimum distance estimation is that, through adjustment of the weights, it can provide a closer fit to the parts of the distribution that are of the most interest. However, this leads to an estimator with a larger variance than the maximum likelihood estimator, and if one believes that the model one is using is appropriate, one should prefer the estimator with the smaller variance.
Minimum distance estimation would be useful in situations where maximum likelihood estimation is not feasible, such as when limited expected values are the only data available. However, in general, I see little reason to prefer it to maximum likelihood estimation.
KEATINGE'S RESPONSE
Minimum distance estimation is a clumsy remedy for a model that is not flexible enough.
ALTERNATIVES:
Fit a parametric distribution only to the upper section of the data and use an empirical distribution below that.
Use the semiparametric mixed exponential distribution.