Data Mining Anomaly Detection: Lecture Notes for Chapter 10 of Introduction to Data Mining by Tan, Steinbach, Kumar


© Tan, Steinbach, Kumar, Introduction to Data Mining, 4/18/2004

Anomaly/Outlier Detection
- What are anomalies/outliers?
  - The set of data points that are considerably different from the remainder of the data
- Variants of anomaly/outlier detection problems
  - Given a database D, find all data points x ∈ D with anomaly scores greater than some threshold t
  - Given a database D, find all data points x ∈ D having the top-k largest anomaly scores
  - Given a database D containing mostly normal (but unlabeled) data points, and a test point x, compute the anomaly score of x with respect to D
- Applications: credit card fraud detection, telecommunication fraud detection, network intrusion detection, fault detection

Causes of Anomalies
- Data from different classes
  - Hawkins: an outlier differs so much from other observations that it could be generated by a different mechanism
- Natural variation
  - Some people are very tall/short/...
- Data measurement and collection errors

Importance of Anomaly Detection: Ozone Depletion History
- In 1985, three researchers (Farman, Gardiner and Shanklin) were puzzled by data gathered by the British Antarctic Survey showing that ozone levels for Antarctica had dropped 10% below normal levels
- Why did the Nimbus 7 satellite, which had instruments aboard for recording ozone levels, not record similarly low ozone concentrations?
- The ozone concentrations recorded by the satellite were so low they were being treated as outliers by a computer program and discarded!

Use of Class Labels
- Class labels: normal, anomaly
- Supervised AD
  - Both class labels are available
  - Not usually considered AD, just classification
- Unsupervised AD
  - No class labels
  - Discussed in this chapter
- Semi-supervised AD
  - All training instances are known normal
  - Generate an anomaly score for a test instance

Issues
- Number of attributes used to define an anomaly
  - Being 2 feet tall is common, and weighing 200 pounds is common, but being 2 feet tall and 200 pounds together is not common
- Global versus local perspective
  - 6.5 feet is unusually tall, but not on a basketball team
- Degree/score of anomaly
- Identifying one anomaly at a time vs. multiple at once
  - One at a time: masking, where some anomalies hide the rest
  - Multiple at once: swamping, where some normal objects get lumped into the anomaly group
- Evaluation
- Efficiency

Anomaly Detection
- Challenges
  - How many outliers are there in the data?
  - Methods are unsupervised, so validation can be quite challenging (just like for clustering)
  - Finding a needle in a haystack
- Working assumption
  - There are considerably more "normal" observations than "abnormal" observations (outliers/anomalies) in the data

Anomaly Detection Schemes
- General steps
  - Build a profile of the "normal" behavior; the profile can be patterns or summary statistics for the overall population
  - Use the "normal" profile to detect anomalies: observations whose characteristics differ significantly from the normal profile
- Types of anomaly detection schemes
  - Graphical and statistical-based
  - Distance-based
  - Density-based
  - Clustering-based

Graphical Approaches
- Boxplot (1-D), scatter plot (2-D), spin plot (3-D)
- Limitations
  - Time consuming
  - Subjective

Convex Hull Method
- Extreme points are assumed to be outliers
- Use the convex hull method to detect extreme values
- What if the outlier occurs in the middle of the data?

Anomaly/Outlier Detection
- Proximity-based (10.3)
- Density-based (10.4)
- Clustering-based (10.5)
- Statistical (10.1)

Nearest-Neighbor Based Approach
- Compute the distance between every pair of data points
- There are various ways to define outliers:
  - Points with fewer than p neighboring points within a distance d
  - The top n points whose distance to their kth nearest neighbor is greatest
  - The top n points whose average distance to their k nearest neighbors is greatest
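The second definition above (distance to the kth nearest neighbor) can be sketched in a few lines of numpy; the function name and toy data are illustrative, not from the book:

```python
import numpy as np

def knn_distance_scores(X, k=3):
    """Anomaly score of each point = distance to its k-th nearest neighbor."""
    # Pairwise Euclidean distance matrix, shape (m, m).
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # Sort each row; column 0 is the distance of a point to itself (0),
    # so the k-th nearest neighbor sits at column k.
    return np.sort(dist, axis=1)[:, k]

# Toy data: a tight cluster plus one far-away point.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [5.0, 5.0]])
scores = knn_distance_scores(X, k=2)
print(scores.argmax())  # → 4, the isolated point
```

Note the O(m^2) pairwise matrix, which is exactly the cost flagged on the next slide.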

Strengths and Weaknesses
- Simple
- O(m^2) time for m data points
- Sensitive to the choice of parameters
- Cannot handle data sets with regions of different densities


Outliers in Lower Dimensional Projections
- In high-dimensional space, data is sparse and the notion of proximity becomes meaningless
  - Every point is an almost equally good outlier from the perspective of proximity-based definitions
- Lower-dimensional projection methods
  - A point is an outlier if, in some lower-dimensional projection, it is present in a local region of abnormally low density

Outliers in Lower Dimensional Projections
- Divide each attribute into φ equal-depth intervals
  - Each interval contains a fraction f = 1/φ of the records
- Consider a k-dimensional cube created by picking grid ranges from k different dimensions
  - If attributes are independent, we expect the region to contain a fraction f^k of the records
  - If there are N points, we can measure the sparsity of a cube D as the standardized deviation of its actual count from its expected count
  - Negative sparsity indicates the cube contains fewer points than expected
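The sparsity formula itself was lost from the slide; one standard form (the sparsity coefficient of Aggarwal and Yu, which this construction follows) is:

$$S(D) = \frac{n(D) - N \cdot f^{k}}{\sqrt{N \cdot f^{k}\,(1 - f^{k})}}$$

where n(D) is the number of points actually falling in cube D. The denominator is the standard deviation of a binomial count with N trials and success probability f^k, so S(D) behaves like a z-score: strongly negative values mark abnormally sparse cubes.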

Example
- N = 100, φ = 5, f = 1/5 = 0.2, N × f^2 = 4

Anomaly/Outlier Detection
- Proximity-based (10.3)
- Density-based (10.4)
- Clustering-based (10.5)
- Statistical (10.1)

Handling Different Densities
- Example from the original LOF paper, with points p1 and p2 (figure omitted)

Density-based: LOF Approach

Density-based: LOF Approach
- In the NN approach, p2 is not considered an outlier, while the LOF approach finds both p1 and p2 as outliers
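A from-scratch numpy sketch of the LOF computation (k-distance, reachability distance, local reachability density, and finally the LOF ratio); this is a minimal illustration, not the book's code:

```python
import numpy as np

def lof_scores(X, k=2):
    """Local Outlier Factor: a score well above 1 means the point sits in a
    sparser region than its own neighbors, i.e. it is a local outlier."""
    m = len(X)
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # k nearest neighbors of each point (column 0 of the sort is the point itself).
    order = np.argsort(dist, axis=1)
    knn = order[:, 1:k + 1]                        # shape (m, k)
    k_dist = dist[np.arange(m), knn[:, -1]]        # distance to k-th neighbor
    # reach-dist(p, o) = max(k_dist(o), dist(p, o))
    reach = np.maximum(k_dist[knn], dist[np.arange(m)[:, None], knn])
    lrd = 1.0 / reach.mean(axis=1)                 # local reachability density
    # LOF(p) = average lrd of p's neighbors divided by lrd of p.
    return lrd[knn].mean(axis=1) / lrd

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [5.0, 5.0]])
scores = lof_scores(X, k=2)
print(scores.argmax())  # → 4; cluster members score close to 1
```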

Strengths and Weaknesses
- Handles regions of different densities
- O(m^2) time complexity
- Selecting k
  - Vary k and use the maximum outlier score

Anomaly/Outlier Detection
- Proximity-based (10.3)
- Density-based (10.4)
- Clustering-based (10.5)
- Statistical (10.1)

Clustering-based
- How would you use clustering algorithms for anomaly detection?

Distant Small Clusters
- Cluster the data into groups
- Choose points in small clusters as candidate outliers
- Compute the distance between candidate points and non-candidate clusters
  - If candidate points are far from all non-candidate clusters, they are outliers

Distant Small Clusters
- Parameters
  - How small is small?
  - How far is far?
- Sensitive to the number of clusters k
  - When k is large, there are more "small" clusters
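The small-cluster scheme above can be sketched as follows, with the two parameters made explicit; the cluster centers are assumed to come from any prior clustering run (e.g. k-means), and all names here are illustrative:

```python
import numpy as np

def small_cluster_outliers(X, centers, size_thresh, dist_thresh):
    """Flag points in clusters smaller than size_thresh as outliers when
    every *other* cluster's center is farther away than dist_thresh."""
    # Assign each point to its nearest center.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    labels = d.argmin(axis=1)
    sizes = np.bincount(labels, minlength=len(centers))
    outliers = []
    for i, lab in enumerate(labels):
        if sizes[lab] >= size_thresh:
            continue  # point sits in a "large" cluster: not a candidate
        others = [j for j in range(len(centers)) if j != lab]
        if d[i, others].min() > dist_thresh:
            outliers.append(i)
    return outliers

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [5.0, 5.0]])
centers = np.array([[0.05, 0.05], [5.0, 5.0]])   # e.g. from a k=2 clustering
print(small_cluster_outliers(X, centers, size_thresh=2, dist_thresh=3.0))  # → [4]
```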

Strength of Cluster Membership
- A point is an outlier if it does not strongly belong to any cluster
  - Prototype-based clustering: distance from centroid
  - Density-based clustering: low density (noise points in DBSCAN)

Prototype-based Clustering
- Strength of cluster membership
  - Distance to centroid
  - What if clusters have different densities?

Prototype-based Clustering
- Strength of cluster membership
  - Distance to centroid
  - What if clusters have different densities?
  - Ideas?
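One common answer to the different-densities question, sketched here as an assumption rather than the slide's own (lost) figure: normalize each point's distance to its centroid by the median such distance within its cluster, so scores become comparable across dense and sparse clusters:

```python
import numpy as np

def relative_distances(X, labels, centers):
    """Distance to the assigned centroid divided by the median such distance
    within that cluster; a large ratio marks an outlier even in a dense cluster."""
    d = np.linalg.norm(X - centers[labels], axis=1)
    rel = np.empty_like(d)
    for c in range(len(centers)):
        mask = labels == c
        rel[mask] = d[mask] / np.median(d[mask])
    return rel

# A dense cluster with one straggler, plus a naturally loose cluster.
X = np.array([[0.1, 0], [-0.1, 0], [0, 0.1], [0, -0.1], [1.0, 0],
              [11.0, 0], [9.0, 0], [10.0, 1.0], [10.0, -1.0]])
labels = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1])
centers = np.array([[0.0, 0.0], [10.0, 0.0]])
rel = relative_distances(X, labels, centers)
print(rel.argmax())  # → 4: the straggler, although its absolute distance (1.0)
                     # equals that of every loose-cluster member
```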

Prototype-based Clustering

Impact of Outliers on Initial Clustering
- We cluster first, but outliers affect the clustering
- Simple approach
  - Cluster first
  - Remove outliers
  - Cluster again

Impact of Outliers on Initial Clustering
- More sophisticated approach
  - Keep a special group of potential outliers: objects that don't fit well in any cluster
  - While clustering:
    - Add an object to the group if it doesn't fit well
    - Remove an object from the group if it fits well

Number of Clusters to Use
- No simple answer
  - Try different numbers
  - Try more clusters
    - Clusters are smaller and more cohesive
    - Detected outliers are more likely to be real outliers
    - However, outliers can form small clusters

Strengths and Weaknesses
- The definition of clustering is complementary to that of outliers
- Sensitive to the number of clusters
- Sensitive to the clustering algorithm

Anomaly/Outlier Detection
- Proximity-based (10.3)
- Density-based (10.4)
- Clustering-based (10.5)
- Statistical (10.1)

Statistical Approaches
- Univariate (10.2.1)
- Multivariate (10.2.2)
- Mixture Model (10.2.3)

Univariate Distribution
- Assume a parametric model describing the distribution of the data (e.g., normal distribution)
- Apply a statistical test that depends on
  - The data distribution
  - The parameters of the distribution (e.g., mean, variance)
  - The number of expected outliers (confidence limit)

Grubbs' Test
- Detects outliers in univariate data
- Assumes the data come from a normal distribution
- Detects one outlier at a time; remove the outlier and repeat
  - H0: there is no outlier in the data
  - HA: there is at least one outlier
- Grubbs' test statistic:
- Reject H0 if:
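The statistic and rejection rule were images on the original slide; the standard two-sided form of Grubbs' test is:

$$G = \frac{\max_{i}\,\lvert Y_i - \bar{Y}\rvert}{s},
\qquad\text{reject } H_0 \text{ if}\quad
G > \frac{N-1}{\sqrt{N}}\,
\sqrt{\frac{t^{2}_{\alpha/(2N),\,N-2}}{N - 2 + t^{2}_{\alpha/(2N),\,N-2}}}$$

where $\bar{Y}$ and $s$ are the sample mean and standard deviation of the N values, and $t_{\alpha/(2N),\,N-2}$ is the critical value of the t-distribution with N − 2 degrees of freedom at significance level $\alpha/(2N)$.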

Statistical Approaches
- Univariate (10.2.1)
- Multivariate (10.2.2)
- Mixture Model (10.2.3)

Multivariate Distribution

Note that the x and y axes do not have the same scale

Mahalanobis Distance
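The formula on this slide was lost; the Mahalanobis distance of a point x from a sample with mean μ and covariance matrix S is sqrt((x − μ)ᵀ S⁻¹ (x − μ)). A minimal numpy sketch (names illustrative):

```python
import numpy as np

def mahalanobis(x, X):
    """Mahalanobis distance of x from the sample mean of X:
    sqrt((x - mu)^T S^{-1} (x - mu)), with S the sample covariance of X."""
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.inv(S) @ diff))

# Strongly correlated data roughly along the line y = x (sample mean (1.5, 1.5)).
X = np.array([[0.0, 0.0], [1.0, 1.1], [2.0, 1.9],
              [3.0, 3.05], [4.0, 4.0], [-1.0, -1.05]])
on_axis = mahalanobis(np.array([3.5, 3.5]), X)    # along the correlation axis
off_axis = mahalanobis(np.array([3.5, -0.5]), X)  # same Euclidean distance from the mean
```

Both test points are equally far from the mean in Euclidean terms, but the off-axis point gets a much larger Mahalanobis distance because variance perpendicular to the correlation axis is tiny; this is exactly why the distance suits correlated multivariate data.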

Statistical Approaches
- Univariate (10.2.1)
- Multivariate (10.2.2)
- Mixture Model (10.2.3)

Mixture Model Approach
- Assume the data set D contains samples from a mixture of two probability distributions:
  - M, the majority distribution
  - A, the anomalous distribution

Mixture Model Approach
- Data distribution: D = (1 − λ) M + λ A
- M is a probability distribution estimated from the data
  - Can be based on any modeling method (naïve Bayes, maximum entropy, etc.)
- A is initially assumed to be a uniform distribution
- Likelihood at time t:
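The likelihood formula was an image on the slide; in the usual formulation (with M_t and A_t the sets of points currently assigned to the majority and anomalous distributions), it is:

$$L_t(D) = \prod_{x_i \in D} P_D(x_i)
= \Bigl[(1-\lambda)^{|M_t|} \prod_{x_i \in M_t} P_{M_t}(x_i)\Bigr]
  \Bigl[\lambda^{|A_t|} \prod_{x_i \in A_t} P_{A_t}(x_i)\Bigr]$$

so the log likelihood is

$$LL_t(D) = |M_t|\log(1-\lambda) + \sum_{x_i \in M_t}\log P_{M_t}(x_i)
+ |A_t|\log\lambda + \sum_{x_i \in A_t}\log P_{A_t}(x_i).$$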

Mixture Model Approach
- General algorithm:
  - Initially, assume all the data points belong to M
  - Let LL_t(D) be the log likelihood of D at time t
  - For each point x_t that belongs to M, move it to A
    - Let LL_{t+1}(D) be the new log likelihood
    - Compute the difference Δ = LL_t(D) − LL_{t+1}(D)
    - If Δ > c (some threshold), then x_t is declared an anomaly and moved permanently from M to A
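A toy one-dimensional sketch of this algorithm, under assumptions not fixed by the slide: M is a normal distribution re-estimated from the points currently in M, A is uniform over the data range, and (since sign conventions for Δ vary across write-ups) a point stays in A when moving it there *raises* the log likelihood by more than c:

```python
import numpy as np

def mixture_anomalies(data, lam=0.05, c=1.0):
    """Move a point from M to A permanently when doing so increases the
    log likelihood by more than c."""
    data = np.asarray(data, dtype=float)
    log_unif = -np.log(data.max() - data.min())   # uniform density of A

    def loglik(in_M):
        m = data[in_M]
        mu, sd = m.mean(), m.std() + 1e-9         # re-estimate M's parameters
        log_norm = -0.5 * ((m - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))
        n_A = in_M.size - len(m)
        return (len(m) * np.log(1 - lam) + log_norm.sum()
                + n_A * (np.log(lam) + log_unif))

    in_M = np.ones(len(data), dtype=bool)         # initially everything is in M
    anomalies = []
    for i in range(len(data)):
        ll_before = loglik(in_M)
        in_M[i] = False                            # tentatively move x_i to A
        if loglik(in_M) - ll_before > c:
            anomalies.append(i)                    # keep x_i in A permanently
        else:
            in_M[i] = True                         # move x_i back to M
    return anomalies

data = np.concatenate([np.linspace(-1, 1, 30), [10.0]])
print(mixture_anomalies(data))  # flags only the far-away point
```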

Strengths and Weaknesses
- Grounded in statistics
- Most of the tests are for a single attribute (univariate)
- In many cases, the data distribution may not be known
- For high-dimensional data, it may be difficult to estimate the true distribution

Evaluation of Anomaly Detection
- Accuracy is not appropriate for anomaly detection. Why?

Evaluation
- Accuracy is not appropriate for anomaly detection
  - It is easy to get high accuracy: just predict "normal"
- Ideas?

Evaluation
- Accuracy is not appropriate for anomaly detection
  - It is easy to get high accuracy: just predict "normal"
- We discussed suitable measures in the classification chapters:
  - Cost matrix for different errors
  - Precision and recall (F measure)
  - True-positive and false-positive rates
    - Receiver Operating Characteristic (ROC) curve
    - Area under the curve (AUC)
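A small hand-rolled example of why these measures matter more than accuracy here (the data and function name are illustrative):

```python
import numpy as np

def anomaly_metrics(y_true, y_pred):
    """Precision, recall and F-measure with 'anomaly' (label 1) as the
    positive class -- the quantities that matter when positives are rare."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = ((y_true == 1) & (y_pred == 1)).sum()
    fp = ((y_true == 0) & (y_pred == 1)).sum()
    fn = ((y_true == 1) & (y_pred == 0)).sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# 2 anomalies among 20 points.  A detector that always predicts "normal"
# would be 90% accurate yet have recall 0 -- which is why accuracy misleads.
y_true = [0] * 18 + [1, 1]
y_pred = [0] * 17 + [1, 1, 0]   # one false alarm, one anomaly found, one missed
p, r, f = anomaly_metrics(y_true, y_pred)
print(p, r, f)  # → 0.5 0.5 0.5
```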

Base Rate Fallacy (Axelsson, 1999)
- Diagnosing a disease
  - The test is 99% accurate
    - True-positive rate is 99%
    - True-negative rate is 99%
- Bad news: your test is positive
- Good news: only 1 in 10,000 people has the disease
- Are you more likely to have the disease or not?

Base Rate Fallacy
- Bayes theorem:
- More generally:
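The formulas on this slide were images; with S = disease and P = positive test (the notation of the following slide), Bayes' theorem and its expansion via the law of total probability are:

$$P(S \mid P) = \frac{P(P \mid S)\,P(S)}{P(P)},
\qquad
P(S \mid P) = \frac{P(P \mid S)\,P(S)}{P(P \mid S)\,P(S) + P(P \mid \neg S)\,P(\neg S)}$$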

Total Probability

Base Rate Fallacy
- S = disease; P = positive test
- Even though the test is 99% accurate, your chance of having the disease given a positive test is about 1/100, because the population of healthy people is much larger than that of sick people
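Plugging the slide's numbers into Bayes' theorem confirms the roughly 1-in-100 figure:

```python
# Base rate fallacy: P(disease | positive test) with a 99% accurate test.
p_disease = 1 / 10_000          # prior: 1 in 10,000 has the disease
p_pos_given_disease = 0.99      # true-positive rate
p_pos_given_healthy = 0.01      # 1 - true-negative rate

# Total probability of a positive test.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 4))  # → 0.0098, i.e. roughly 1 in 100
```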

Base Rate Fallacy in Intrusion Detection
- I: intrusive behavior; ¬I: non-intrusive behavior
- A: alarm; ¬A: no alarm
- Detection rate (true positive rate): P(A|I)
- False alarm rate (false positive rate): P(A|¬I)
- Goal is to maximize both
  - The "Bayesian detection rate" P(I|A) [precision]
  - P(¬I|¬A)

Detection Rate vs. False Alarm Rate
- Suppose:
- Then:
- The false alarm rate becomes more dominant if P(I) is very low

Detection Rate vs. False Alarm Rate
- Axelsson: we need a very low false alarm rate to achieve a reasonable Bayesian detection rate