
1 Data Mining to Predict and Prevent Errors in Health Insurance Claims Processing Mohit Kumar, Rayid Ghani and Zhu-Song Mei Copyright © 2010 Accenture All Rights Reserved. Accenture, its logo, and High Performance Delivered are trademarks of Accenture.

2 Accenture Technology Labs
Applied R&D group within Accenture, a consulting & services company with 180,000 people in over 50 countries
R&D groups in 4 locations: Chicago, Silicon Valley, Sophia Antipolis, Bangalore
Applied research motivated by real business problems
Focus areas include:
Machine Learning & Data Mining
Software Engineering
Collaboration & Knowledge Management
Cloud/Distributed Computing
Green Computing
Biometrics

3 Motivation
Inefficiencies in the healthcare insurance process result in large monetary losses affecting corporations and consumers
$91 billion over-spent every year in the US on health administration and insurance (McKinsey study, Nov 2008); more than $650 billion over-spent on healthcare overall
131 percent increase in insurance premiums over the past 10 years
To put this in perspective with other research areas: the Plenary Invited Talk noted that the online ad market is projected at $48 billion in 2011 and $67 billion in 2013

4 Health Insurance Claim Process

5 Motivation
Inefficiencies in the healthcare process result in large monetary losses affecting corporations and consumers
$91 billion over-spent every year in the US on health administration and insurance (McKinsey study, Nov 2008)
131 percent increase in insurance premiums over the past 10 years
Claim payment errors drive a significant portion of these inefficiencies
Increased administrative costs and service issues for health plans
Overpayment of claims: direct loss
Underpayment of claims: loss of interest payments for the insurer, loss of revenue for the provider
Some statistics:
33% of the workforce is involved in handling these errors
For a 6-million-member insurance plan, $400 million in identified overpayments (Source: [Anand and Khots, 2008])
For a large (10 million+ member) insurance plan, an estimated $1 billion in lost revenue (Source: discussions with domain experts)

6 Early Rework Detection – How it's done today
Random audits for quality control: Claims Database → Random Samples → Manual Audits → Auditors
Extremely low hit rates
Long audit times due to fully manual audits

7 Early Rework Detection – Hypothesis and Rule-based Audits
Database queries: Claims Database → Expert Hypotheses → Hypothesis-based Audits → Auditors
Better hit rates, but still a lot of manual effort in discovering, building, updating, executing, and maintaining the hypotheses
Problem setup/characteristics:
Identify rework claims before payment
Identify a wide variety of rework types, not limited to manual rules
Flag suspect claims with enough accuracy, and explain the reason for error, to make a secondary pre-pay audit practical
Adapt to changes in the environment

8 Problem Formulation
Classification problem: payment error or not
Use the classifier's confidence score to rank and prioritize the claims to be reviewed
Alternate formulations:
Ranking problem
Multi-class classification to predict the error category
Multi-instance modeling
Characteristics:
Skewed class distribution (rare events)
Biased sampling of labeled data
Concept drift
Expensive domain experts

9 Feature Design
Raw features and derived features
Statistical derived features (avg, min, max, standard deviations), e.g. the standard deviation of charged/paid amounts (a sketch of the derivation follows below)
Most features are client-independent, hence generalizable
Interesting finding: the standard-deviation features are significant
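The slides do not show how the derived features are computed; below is a minimal pandas sketch of the idea, grouping claim lines by provider and attaching per-group statistics back onto each claim. The schema and field names (provider_id, charged, paid) are illustrative assumptions, not the paper's actual data model.

```python
import pandas as pd

# Hypothetical raw claim lines; field names are assumptions for illustration.
claims = pd.DataFrame({
    "provider_id": ["P1", "P1", "P2", "P2", "P2"],
    "charged":     [120.0, 480.0, 95.0, 110.0, 2500.0],
    "paid":        [100.0, 400.0, 80.0, 90.0, 2000.0],
})

# Derive per-provider statistics (avg, min, max, std) of the charged and
# paid amounts, then join them back onto each claim as extra features.
stats = claims.groupby("provider_id")[["charged", "paid"]].agg(
    ["mean", "min", "max", "std"])
stats.columns = ["_".join(col) for col in stats.columns]  # flatten names
features = claims.join(stats, on="provider_id")
print(features.filter(like="_std").round(2))  # the significant std features
```

The same aggregation pattern would presumably be repeated across other grouping keys (member, procedure code, and so on) to build out the full derived-feature set.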

10 Classification Algorithms
Domain characteristics:
High-dimensional data (100k–1 million features in typical insurance data)
Sparse data
Fast training, updating, and scoring required
Ability to generate explanations for domain experts
Selected: fast linear SVMs (svm_perf, Pegasos, sofia-ml), which fit all of these characteristics
The distance from the margin is used as the ranking score
Since SVMs cannot handle categorical features directly, categorical features are converted to boolean features (see the sketch below)
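A minimal scikit-learn sketch of this setup, with LinearSVC standing in for the fast solvers named above (svm_perf, Pegasos, sofia-ml); the claim fields and data are made up for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import LinearSVC

# Toy stand-in for claim records; field names and values are assumptions.
X = pd.DataFrame({
    "procedure_code": ["A1", "B2", "A1", "C3", "B2", "C3"],
    "charged":        [100.0, 2500.0, 120.0, 80.0, 2600.0, 90.0],
})
y = np.array([0, 1, 0, 0, 1, 0])  # 1 = payment error (rework)

# Convert categorical fields to sparse boolean indicators, since a linear
# SVM cannot consume categorical values directly, then train the model.
model = make_pipeline(
    ColumnTransformer(
        [("onehot", OneHotEncoder(handle_unknown="ignore"),
          ["procedure_code"])],
        remainder="passthrough"),
    LinearSVC(C=1.0),
)
model.fit(X, y)

# The signed distance from the margin is the ranking score: the
# highest-scoring claims are audited first.
scores = model.decision_function(X)
print(np.argsort(-scores))  # claim indices in audit-priority order
```

The one-hot encoding is what drives the feature count into the 100k–1 million range: every distinct categorical value becomes its own sparse boolean column.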

11 Data
Duration: 2 years

                      Insurance company 1    Insurance company 2
Number of claims      3.5 million            23 million
Labeled claims        121k (49k errors)      380k (247k errors)
Number of features    110,000                ~1 million

Notes on the labeled-data distribution: labeled data comes from several systems (QA, provider); in real life errors are 5–10% of claims, and the system needs to take that into account

12 Offline Evaluation Metric
Metric: precision in the top 10% (10th percentile) of ranked claims
Audit capacity: in production, only 5–10% of all claims (~3.5 million) can be manually reviewed
Offline experiments are critical for model selection, tuning, and calibration before deployment
Also useful to give evidence that the system works before deployment can be considered
Interesting anecdote: explaining these results to the business means translating precision at 10% (and the precision-vs-recall tradeoff) into hit rate, catch rate, and audit rate (a sketch of the metric follows below)
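A sketch of the offline metric under an assumed 10% audit capacity; this illustrates the idea and is not the authors' evaluation code.

```python
import numpy as np

def hit_and_catch_rate(scores, labels, audit_frac=0.10):
    """Precision (hit rate) and recall (catch rate) within the top
    audit_frac of claims, ranked by classifier score."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    k = max(1, int(len(scores) * audit_frac))
    top = np.argsort(-scores)[:k]        # highest-scoring claims first
    hits = labels[top].sum()
    return hits / k, hits / labels.sum()

# Example: 200 claims, 20 true errors, 10% audit capacity (top 20 claims).
rng = np.random.default_rng(0)
labels = np.array([1] * 20 + [0] * 180)
scores = labels + rng.normal(0.0, 0.7, size=200)  # noisy informative score
print(hit_and_catch_rate(scores, labels))
```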

13 Experimental Setup & Infrastructure
Run experiments to pick optimal parameters, varying:
SVM parameters
Feature selection
Temporal sample selection
Claim-level vs. line-level classification
1000s of experiments; fast training time enables automatic model selection rather than intuition (a sweep skeleton is sketched below)
Goals: adapt to a changing environment and generalize across clients
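The sweep referenced above might look like the skeleton below; the parameter grids and the train_and_score stub are assumptions for illustration, not values from the paper.

```python
from itertools import product

def train_and_score(C, n_features, window):
    """Stub: fit a linear SVM with parameter C on claims from the given
    training window, keep the n_features best features, and return the
    offline metric (precision in the top 10%). Replace with a real run."""
    return 0.0  # placeholder

# Illustrative grids; the actual values swept in this work are not stated.
grid = product(
    [0.01, 0.1, 1.0, 10.0],                           # SVM parameter C
    [10_000, 50_000, 100_000],                        # features kept
    ["1 month", "2 months", "3 months", "6 months"],  # temporal sample
)
results = [(train_and_score(C, n, w), C, n, w) for C, n, w in grid]
print(max(results))  # best-scoring configuration
```

Because a single linear-SVM fit is fast, exhausting a grid like this is feasible, which is what makes automatic model selection practical over intuition.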

14 Results
93% precision in the top 10% of examined claims
23% of error claims are found in the top 10% of examined claims

15 Estimating Performance on Unlabeled Data
More offline experiments, to check the scalability of the system and to simulate live deployment; this gives more realistic numbers than labeled data alone
Method: mix the labeled test data with unlabeled data, rank the entire set, and measure the recall of the labeled test set (sketched below)
~40% of known error claims are discovered in the top 10% of examined claims
~25% of known correct claims are in the bottom 10%
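A sketch of this mixing procedure on synthetic scores; the names and numbers are made up for illustration.

```python
import numpy as np

def recall_in_top_fraction(test_scores, test_labels, pool_scores, frac=0.10):
    """Mix the scored labeled test set with a scored unlabeled pool, rank
    the combined set, and measure how many known errors land in the top
    `frac` of the ranking."""
    combined = np.concatenate([test_scores, pool_scores])
    cutoff = np.quantile(combined, 1.0 - frac)   # score at the top-frac cut
    errors = test_labels == 1
    return (test_scores[errors] >= cutoff).sum() / errors.sum()

rng = np.random.default_rng(1)
test_labels = np.array([1] * 50 + [0] * 450)
test_scores = 1.5 * test_labels + rng.normal(0, 1, 500)  # labeled test set
pool_scores = rng.normal(0, 1, 5000)                     # unlabeled claims
print(round(recall_in_top_fraction(test_scores, test_labels, pool_scores), 2))
```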

16 Live Evaluation (with auditors)
Insurance Company 1 (first evaluation, a sanity check):
Gave a sample of 100 claims to an auditor
Audit time: 20 min – 1 hour
Precision: 65%
Insurance Company 2 (pilot deployment, 4 weeks):
Total claims audited: 1000
Audit time: 4–10 min
Precision: 29–55% depending on audit strategy
Precision is lower than the offline results, but much higher than the 5–10% hit rate of current practice
These numbers reflect $10–$25 million in savings per year
So far the results were for a static system that disregards timing information; in real deployments the system has to 'look ahead'
Preparing for concept drift: if we split the data temporally, performance drops significantly compared to a random 70/30 split

17 Concept Drift
Diagnostics for determining whether concept drift is present
What is the optimal time window to train the system? (a windowing sketch follows below)
Training on the most recent 2–3 months of data gives the best performance
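A sketch of the windowing diagnostic; the `month` column and the fit_eval stub are assumptions standing in for the real pipeline.

```python
import pandas as pd

def fit_eval(train, test):
    """Stub standing in for: train the classifier on `train`, rank `test`,
    and return the offline metric. Replace with the real pipeline."""
    return 0.0  # placeholder

# For each candidate window length w, train on the w months preceding
# eval_month and evaluate on eval_month itself; keep the best window.
def best_training_window(claims, eval_month, lengths=(1, 2, 3, 6, 12)):
    scores = {}
    for w in lengths:
        train = claims[(claims["month"] >= eval_month - w)
                       & (claims["month"] < eval_month)]
        test = claims[claims["month"] == eval_month]
        scores[w] = fit_eval(train, test)
    return max(scores, key=scores.get)

claims = pd.DataFrame({"month": list(range(1, 13)) * 100})
print(best_training_window(claims, eval_month=12))
```

On the slide's data, a diagnostic of this form points at a 2–3 month training window.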

18 System Demo
Example flagged claim: the EOB attached to the claim shows a patient responsibility of $___ that the insurer incorrectly paid as primary; the claim was overpaid by $___

19 Challenges / Recent Work
Concept drift
Active learning: interactive cost-sensitive approaches
Alternative formulations: ranking, multi-instance, multi-class

20 Summary
Payment errors result in large monetary losses in the healthcare system
Our approach is able to accurately detect these errors and help fix them efficiently
Current estimates suggest $10–$25 million/year in savings for typical insurers in the US
Currently being industrialized for deployment
We are currently looking for people

21 Questions

