1 Data Quality: GIRO Working Party Paper & Methods to Detect Data Glitches
CAS Annual Meeting
Louise Francis
Francis Analytics and Actuarial Data Mining, Inc.
www.data-mines.com
Louise.francis@data-mines.com
2 Objectives
Review the GIRO Data Quality Working Party paper
Discuss data quality in predictive modeling
Focus on a key part of data preparation: exploratory data analysis
3 GIRO Working Party Paper
Literature Review
Horror Stories
Survey
Experiment
Actions
Concluding Remarks
4 Insurance Data Management Association
The IDMA is an American organisation which promotes professionalism in the data management discipline through education, certification and discussion forums.
The IDMA web site:
  Suggests publications on data quality,
  Describes a data certification model, and
  Contains Data Management Value Propositions, which document the value to various insurance industry stakeholders of investing in data quality.
http://www.idma.org
5 Horror Stories – Non-Insurance
Heart-and-lung transplant – wrong blood type
Bombing of the Chinese Embassy in Belgrade
Mars orbiter – confusion between imperial and metric units
Fidelity mutual fund – withdrawal of dividend
Porter County, Indiana – tax bill and budget shortfall
6 Horror Stories – Rating/Pricing
Examples faced by ISO:
  Exposure recorded in units of $10,000 instead of $1,000
  A large insurer reporting personal auto data as miscellaneous, and hence missed from ratemaking calculations
  One company reporting all its Florida property losses as fire (including hurricane years)
  Mismatched coding for policy and claims data
7 Horror Stories – Katrina
US weather models underestimated the cost of Katrina by approximately 50% (Westfall, 2005)
A 2004 RMS study highlighted exposure data that was:
  Out of date
  Incomplete
  Mis-coded
Many flood victims had no flood insurance after being told by agents that they were not in flood risk areas.
8 Data Quality Survey of Actuaries
Purpose: assess the impact of data quality issues on the work of general insurance actuaries
Two questions:
  Percentage of time spent on data quality issues
  Proportion of projects adversely affected by such issues
9 Results – Percentage of Time
Employer             No.   Mean    Median   Min    Max
Insurer/Reinsurer    17    26.4%   25.0%    5.0%   50.0%
Consultancy          13    27.1%   25.0%    7.5%   60.0%
Other                 8    23.4%   12.5%    2.0%   75.0%
All                  38    26.0%   25.0%    2.0%   75.0%
10 Results – Percentage of Projects
Employer             No.   Mean    Median   Min     Max
Insurer/Reinsurer    15    27.9%   20.0%    5.0%    60.0%
Consultancy          13    43.3%   35.0%    10.0%   100.0%
Other                 8    22.6%   20.0%    1.0%    50.0%
All                  36    32.3%   30.0%    1.0%    100.0%
11 Survey Conclusions
Data quality issues have a significant impact on the work of general insurance actuaries:
  About a quarter of their time is spent on such issues
  About a third of projects are adversely affected
The impact varies widely between different actuaries, even those working in similar organisations
There is limited evidence to suggest that the impact is more significant for consultants
12 Hypothesis
The uncertainty of actuarial estimates of ultimate incurred losses based on poor data is significantly greater than that of estimates based on good data
13 Data Quality Experiment
Examine the impact of incomplete and/or erroneous data on actuarial estimates of ultimate losses and loss reserves
Use real data with simulated limitations and/or errors and observe the potential error in the actuarial estimates
14 Data Used in Experiment
Real data for primary private passenger bodily injury liability business for a single no-fault state
Eighteen (18) accident years of fully developed data; thus, true ultimate losses are known
15 Actuarial Methods Used
Paid chain ladder models
Incurred chain ladder models
Frequency-severity models
Inverse power curve for tail factors
No judgment used in applying the methods
16 Completeness of Data Experiments
Vary the size of the sample; that is:
  1) All years
  2) Use only 7 accident years
  3) Use only the last 3 diagonals
17 Data Error Experiments
Simulated data quality issues:
  1. Misclassification of losses by accident year
  2. Early years not available
  3. Late processing of financial information
  4. Paid losses replaced by crude estimates
  5. Overstatements followed by corrections in the following period
  6. Definition of reported claims changed
18 Measure Impact of Data Quality
Compare estimated to actual ultimates
Use bootstrapping to evaluate the effect of different samples on the results
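The deck does not show the bootstrap details, so the following is only a minimal sketch of the resampling idea in Python, under the assumption that we have accident-year estimation errors (estimated minus actual ultimate, expressed as relative errors and entirely made up here): resample them with replacement and examine the dispersion of the resampled mean.

```python
# Minimal bootstrap sketch (illustrative data, not the working party's):
# resample accident-year estimation errors with replacement and look at the
# spread of the resampled mean error.
import numpy as np

rng = np.random.default_rng(0)
errors = np.array([0.02, -0.05, 0.10, -0.01, 0.07, -0.12, 0.04, 0.03])  # hypothetical relative errors

n_boot = 5000
boot_means = np.array([
    rng.choice(errors, size=errors.size, replace=True).mean()
    for _ in range(n_boot)
])

print("bootstrap mean of the error:", boot_means.mean())
print("bootstrap std of the mean error:", boot_means.std(ddof=1))
```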
19 Results – Experiment 1
More data generally reduces the volatility of the estimation errors
20 Results – Experiment 2
Extreme volatility, especially for estimates based on paid data
Actuaries' ability to recognise and account for data quality issues is critical
Actuarial adjustments to the data may never fully correct for data quality issues
21 Results – Bootstrapping
Less dispersion in results for error-free data
Standard deviation of estimated ultimate losses is greater for the modified data (data with errors)
Confirms the original hypothesis
22 Conclusions Resulting from Experiment
Greater accuracy and less variability in actuarial estimates when:
  Quality data is used
  A greater number of accident years is used
Data quality issues can erode or even reverse the gains of increased volumes of data: if errors are significant, more data may worsen estimates due to the propagation of errors for certain projection methods
Significant uncertainty in results when:
  Data is incomplete
  Data has errors
23 Predictive Modeling & Exploratory Data Analysis
24 Modeling Process
25 Data Preprocessing
26 Exploratory Data Analysis
Typically the first step in analyzing data
Makes heavy use of graphical techniques
Also makes use of simple descriptive statistics
Purpose:
  Find outliers (and errors)
  Explore the structure of the data
27 Example Data
Private passenger auto
Some variables are:
  Age
  Gender
  Marital status
  Zip code
  Earned premium
  Number of claims
  Incurred losses
  Paid losses
28 Some Methods for Numeric Data
Visual:
  Histograms
  Box and whisker plots
  Stem and leaf plots
Statistical:
  Descriptive statistics
  Data spheres
29 Histograms
Can do them in Microsoft Excel
30 Histograms of Age Variable, Varying Window Size
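The histogram figures are not reproduced in this transcript. As a stand-in, here is a small Python/matplotlib sketch (the original slides use Excel) on a hypothetical age field, showing how different window (bin) widths change what the histogram reveals, including a planted suspicious code of 999.

```python
# Illustrative sketch: histograms of a hypothetical "age" field drawn with
# three different bin (window) widths; the bad codes of 999 stand out as a
# detached bar at the far right.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
age = rng.normal(45, 15, size=5000).clip(16, 90)   # hypothetical driver ages
age[:25] = 999                                     # a few bad codes to spot

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, width in zip(axes, [1, 5, 20]):
    bins = np.arange(age.min(), age.max() + width, width)
    ax.hist(age, bins=bins)
    ax.set_title(f"bin width = {width}")
plt.tight_layout()
plt.show()
```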
31 Formula for Window Width
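The formula itself did not survive in this transcript. A common rule for histogram window (bin) width, which may or may not be the one on the original slide, is the Freedman–Diaconis rule:

    h = 2 · IQR(x) · n^(-1/3)

where IQR(x) is the interquartile range of the data and n is the number of observations; the number of bins is then (max(x) − min(x)) / h.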
32 Example of Suspicious Value
33 Box and Whisker Plot
34 Plot of Heavy-Tailed Data – Paid Losses
35 Heavy-Tailed Data – Log Scale
36 Box and Whisker Example
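As an illustrative sketch of the box-and-whisker idea for heavy-tailed paid losses (the loss values below are simulated, not the presentation's data), plotting on a log scale makes the structure and the outliers far easier to see:

```python
# Box and whisker plots of simulated heavy-tailed "paid losses" on the
# original and log10 scales (illustrative data only).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
paid = rng.lognormal(mean=8, sigma=1.5, size=2000)   # hypothetical paid losses

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.boxplot(paid)
ax1.set_title("Paid losses")
ax2.boxplot(np.log10(paid))
ax2.set_title("log10(paid losses)")
plt.tight_layout()
plt.show()
```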
37 Descriptive Statistics – Analysis ToolPak
38 Descriptive Statistics
License year has minimum and maximum values that are impossible
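The slide's Excel output is not reproduced here. A rough pandas equivalent (with hypothetical column names, values, and plausibility bounds) computes the same descriptive statistics and screens for impossible minima and maxima such as the license-year problem noted above:

```python
# Descriptive statistics and a simple range screen (hypothetical data/bounds).
import pandas as pd

df = pd.DataFrame({
    "license_year": [1987, 1995, 2003, 1899, 2055, 1990],  # contains impossible values
    "age": [34, 52, 27, 41, 19, 63],
})

print(df.describe())  # count, mean, std, min, quartiles, max per column

# Flag license years outside an assumed plausible range
bad = df[(df["license_year"] < 1930) | (df["license_year"] > 2007)]
print(bad)
```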
39 Data Spheres: The Mahalanobis Distance Statistic
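The data-sphere screen rests on the Mahalanobis distance of each record from the multivariate center of the numeric fields; records at an unusually large distance are flagged for review. A minimal sketch with simulated two-variable data and an assumed chi-squared cutoff:

```python
# Flag records with a large squared Mahalanobis distance from the center.
# The data, planted outliers, and cutoff are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
X = rng.multivariate_normal([45, 1000], [[100, 200], [200, 250000]], size=1000)
X[:5] += [200, 50000]                                # plant a few gross outliers

mean = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mean
d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # squared Mahalanobis distance

threshold = stats.chi2.ppf(0.999, df=X.shape[1])     # approximate cutoff for 2 variables
flagged = np.where(d2 > threshold)[0]
print("records flagged:", flagged)
```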
40 Screening Many Variables at Once
Plot of the longitude and latitude of the zip codes in the data
Examination of outliers indicated drivers in CA and PR, even though policies were written in only one mid-Atlantic state
41 Records With Unusual Values Flagged
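A sketch of the screening described on the two slides above, using made-up coordinates and an assumed bounding box for the state being written: plot the zip-code longitude and latitude, then flag the records falling outside the box (the planted strays stand in for the CA and PR drivers).

```python
# Illustrative geographic screen: scatter plot of zip-code coordinates and a
# flag for points outside an assumed mid-Atlantic bounding box.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
geo = pd.DataFrame({
    "lon": rng.uniform(-77.0, -75.0, size=500),   # hypothetical in-state policies
    "lat": rng.uniform(38.0, 40.0, size=500),
})
geo.loc[geo.index[:3], ["lon", "lat"]] = [[-118.2, 34.0], [-66.1, 18.4], [-122.4, 37.8]]  # strays

plt.scatter(geo["lon"], geo["lat"], s=5)
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.show()

# Flag records outside the expected bounding box (assumed limits)
outside = geo[~geo["lon"].between(-80.0, -74.0) | ~geo["lat"].between(37.0, 40.5)]
print(outside)
```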
42 Categorical Data: Data Cubes
43 Categorical Data – Data Cubes
Usually frequency tables
Search for missing values coded as blanks
44 Categorical Data
Table highlights inconsistent coding of marital status
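A small pandas sketch of the frequency-table idea, with made-up marital status codes: tabulating the field quickly exposes inconsistent codes and blanks used as missing values.

```python
# Frequency table on a categorical field with inconsistent codes and blanks
# (all codes are made up for illustration).
import pandas as pd

marital = pd.Series(["M", "S", "Married", "M", "", "S", "single", "M", " ", "D"])

print(marital.str.strip().replace("", "<blank>").value_counts(dropna=False))

# Cross-tabulating two categorical fields works the same way, e.g.:
# pd.crosstab(df["marital_status"], df["gender"], margins=True)
```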
45 Screening for Missing Data
46 Blanks as Missing
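A minimal pandas sketch of the blank-screening step, with hypothetical columns: count true missing values (NaN/None) per column, and separately count blank or whitespace-only strings, which often encode "missing" in raw extracts.

```python
# Count NaN/None per column, then count blank or whitespace-only strings,
# which are frequently used as missing-value codes (hypothetical data).
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "M", "", "M", " "],
    "zip":    ["19103", None, "08807", "", "21201"],
    "paid":   [1200.0, np.nan, 0.0, 350.0, np.nan],
})

print("NaN/None per column:")
print(df.isna().sum())

print("Blank strings per column:")
print((df.select_dtypes(include="object").apply(lambda s: s.str.strip() == "")).sum())
```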
47 Conclusions
Anecdotal horror stories illustrate the potentially dramatic impact of data quality problems
The data quality survey suggests that data quality issues impose a significant cost on actuarial work
The Data Quality Working Party urges actuaries to become data quality advocates within their organizations
Techniques from exploratory data analysis can be used to detect data glitches
48 Library for Getting Started
Dasu, T. and Johnson, T., Exploratory Data Mining and Data Cleaning, Wiley, 2003
Francis, L.A., "Dancing with Dirty Data: Methods for Exploring and Cleaning Data", CAS Winter Forum, March 2005, www.casact.org
The Data Quality Working Party paper will be posted on the GIRO web site in about nine months