
1  CSci 8980: Data Mining (Fall 2002)
Vipin Kumar
Army High Performance Computing Research Center
Department of Computer Science, University of Minnesota
http://www.cs.umn.edu/~kumar
© Vipin Kumar, CSci 8980, Fall 2002

2  Model Evaluation
- Metrics for Performance Evaluation
  – How to evaluate the performance of a model?
- Methods for Performance Evaluation
  – How to obtain reliable estimates?
- Methods for Model Comparison
  – How to compare the relative performance among competing models?

3  Metrics for Performance Evaluation
- Focus on the predictive capability of a model
  – Rather than how fast it classifies or builds models, scalability, etc.
- Confusion matrix:

                            PREDICTED CLASS
                            Class=Yes   Class=No
  ACTUAL CLASS  Class=Yes   a           b
                Class=No    c           d

  a: TP (true positive), b: FN (false negative), c: FP (false positive), d: TN (true negative)
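
The counts a, b, c, d can be tallied directly from paired actual/predicted labels. A minimal sketch in plain Python (the "Yes"/"No" label values and the example lists are illustrative, not from the slides):

```python
def confusion_counts(actual, predicted, positive="Yes"):
    """Tally a (TP), b (FN), c (FP), d (TN) for a binary classification task."""
    a = b = c = d = 0
    for act, pred in zip(actual, predicted):
        if act == positive and pred == positive:
            a += 1          # true positive
        elif act == positive:
            b += 1          # false negative
        elif pred == positive:
            c += 1          # false positive
        else:
            d += 1          # true negative
    return a, b, c, d

actual    = ["Yes", "Yes", "No", "No", "Yes"]
predicted = ["Yes", "No",  "No", "Yes", "Yes"]
print(confusion_counts(actual, predicted))   # (2, 1, 1, 1)
```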

4  Metrics for Performance Evaluation…
- Most widely-used metric: accuracy

  Accuracy = (a + d) / (a + b + c + d) = (TP + TN) / (TP + TN + FP + FN)

                            PREDICTED CLASS
                            Class=Yes   Class=No
  ACTUAL CLASS  Class=Yes   a (TP)      b (FN)
                Class=No    c (FP)      d (TN)

5  Cost Matrix

                            PREDICTED CLASS
   C(i|j)                   Class=Yes     Class=No
  ACTUAL CLASS  Class=Yes   C(Yes|Yes)    C(No|Yes)
                Class=No    C(Yes|No)     C(No|No)

  C(i|j): cost of misclassifying a class j example as class i

- Accuracy is a useful measure if:
  – C(Yes|No) = C(No|Yes) and C(Yes|Yes) = C(No|No)
  – P(Yes) = P(No) (the class distributions are equal)

6  Cost vs Accuracy

  Cost matrix C(i|j):           PREDICTED
                                +      -
              ACTUAL   +       -1    100
                       -        1      0

  Model M1 (counts):            PREDICTED
                                +      -
              ACTUAL   +      150     40
                       -       60    250
  Accuracy = 80%, Cost = 150(-1) + 40(100) + 60(1) + 250(0) = 3910

  Model M2 (counts):            PREDICTED
                                +      -
              ACTUAL   +      250     45
                       -        5    200
  Accuracy = 90%, Cost = 250(-1) + 45(100) + 5(1) + 200(0) = 4255
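
A short check of these numbers (numpy assumed): given a confusion matrix and a cost matrix laid out the same way, accuracy is the trace over the total and cost is the element-wise product summed.

```python
import numpy as np

# Rows = actual (+, -), columns = predicted (+, -)
cost = np.array([[-1, 100],
                 [ 1,   0]])      # cost matrix C(i|j) from the slide
m1 = np.array([[150,  40],
               [ 60, 250]])
m2 = np.array([[250,  45],
               [  5, 200]])

for name, cm in [("M1", m1), ("M2", m2)]:
    accuracy = np.trace(cm) / cm.sum()        # (TP + TN) / total
    total_cost = (cm * cost).sum()            # cost-weighted counts
    print(name, f"accuracy = {accuracy:.0%}, cost = {total_cost}")
# M1 accuracy = 80%, cost = 3910
# M2 accuracy = 90%, cost = 4255
```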

7  Cost-Sensitive Measures

  Precision p = a / (a + c)
  Recall    r = a / (a + b)
  F-measure F = 2rp / (r + p) = 2a / (2a + b + c)

- Precision is biased towards C(Yes|Yes) & C(Yes|No)
- Recall is biased towards C(Yes|Yes) & C(No|Yes)
- F-measure is biased towards all except C(No|No)
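
A small sketch of the three measures in plain Python, reusing the a (TP), b (FN), c (FP) counts defined on slide 3; the call with model M1's counts from slide 6 is just an illustration:

```python
def precision_recall_f(a, b, c):
    """a = TP, b = FN, c = FP, as in the confusion matrix above."""
    p = a / (a + c)               # precision
    r = a / (a + b)               # recall
    f = 2 * r * p / (r + p)       # F-measure (harmonic mean of p and r)
    return p, r, f

print(precision_recall_f(a=150, b=40, c=60))
# precision ≈ 0.714, recall ≈ 0.789, F = 0.75
```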

8  Methods for Performance Evaluation
- How to obtain a reliable estimate of performance?
- Performance of a model may depend on other factors besides the learning algorithm:
  – Class distribution
  – Cost of misclassification
  – Size of training and test sets

9  Learning Curve
- A learning curve shows how accuracy changes with varying sample size
- Requires a sampling schedule for creating the learning curve:
  – Arithmetic sampling (Langley et al.)
  – Geometric sampling (Provost et al.)
- Effect of small sample size:
  – Bias in the estimate
  – Variance of the estimate

10  Methods of Estimation
- Holdout
  – Reserve 2/3 for training and 1/3 for testing
- Random subsampling
  – Repeated holdout
- Cross validation
  – Partition data into k disjoint subsets
  – k-fold: train on k-1 partitions, test on the remaining one (sketched below)
  – Leave-one-out: k = n
- Stratified sampling
  – Oversampling vs. undersampling
- Bootstrap
  – Sampling with replacement
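
As an illustration of the cross-validation entry above, here is a minimal k-fold sketch (numpy assumed). The train_and_predict callback is a hypothetical stand-in for whatever learning algorithm is being evaluated:

```python
import numpy as np

def k_fold_accuracy(X, y, train_and_predict, k=10, seed=0):
    """Estimate accuracy by k-fold cross validation.

    X, y: numpy arrays of instances and labels.
    train_and_predict(X_train, y_train, X_test) -> predicted labels
    (hypothetical interface for any learning algorithm).
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))              # shuffle once
    folds = np.array_split(idx, k)             # k disjoint partitions
    accuracies = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        y_pred = train_and_predict(X[train_idx], y[train_idx], X[test_idx])
        accuracies.append(np.mean(y_pred == y[test_idx]))
    return np.mean(accuracies)

# acc = k_fold_accuracy(X, y, train_and_predict=my_learner, k=10)   # my_learner is hypothetical
```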

11  ROC (Receiver Operating Characteristic)
- Developed in the 1950s in signal detection theory to analyze noisy signals
  – Characterizes the trade-off between positive hits and false alarms
- An ROC curve plots TP (on the y-axis) against FP (on the x-axis)
- The performance of each classifier is represented as a point on the ROC curve
  – Changing the algorithm's threshold, the sample distribution, or the cost matrix changes the location of the point

12  ROC Curve
- 1-dimensional data set containing 2 classes (positive and negative)
- Any point located at x > t is classified as positive
- At threshold t: TP = 0.5, FN = 0.5, FP = 0.12, TN = 0.88

13  ROC Curve
- Points (TP, FP) on the curve:
  – (0,0): declare everything to be negative class
  – (1,1): declare everything to be positive class
  – (1,0): ideal
- Diagonal line:
  – Random guessing
  – Below the diagonal line: prediction is opposite of the true class

14  Using ROC for Model Comparison
- No model consistently outperforms the other:
  – M1 is better for small FPR
  – M2 is better for large FPR
- Area Under the ROC Curve
  – Ideal: area = 1
  – Random guess: area = 0.5

15  How to Construct an ROC Curve
- Use a classifier that produces a posterior probability P(+|A) for each test instance A
- Sort the instances according to P(+|A) in decreasing order
- Apply a threshold at each unique value of P(+|A)
- Count the number of TP, FP, TN, FN at each threshold
  – TP rate, TPR = TP / (TP + FN)
  – FP rate, FPR = FP / (FP + TN)

  Instance   P(+|A)   True Class
      1       0.95        +
      2       0.93        +
      3       0.87        -
      4       0.85        -
      5                   -
      6                   +
      7       0.76        -
      8       0.53        +
      9       0.43        -
     10       0.25        +
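
A compact sketch of this procedure (numpy assumed): sort by P(+|A), sweep a threshold over the unique score values, and record (FPR, TPR) at each step. The four-instance example at the bottom is illustrative, not the table above:

```python
import numpy as np

def roc_points(scores, labels):
    """Return the (FPR, TPR) points of an ROC curve.

    scores: posterior probabilities P(+|A); labels: 1 for positive, 0 for negative.
    """
    order = np.argsort(scores)[::-1]                  # sort by P(+|A), decreasing
    scores = [float(s) for s in np.asarray(scores)[order]]
    labels = [int(v) for v in np.asarray(labels)[order]]
    P, N = sum(labels), len(labels) - sum(labels)
    points = [(0.0, 0.0)]                             # threshold above the highest score
    tp = fp = 0
    for i, y in enumerate(labels):
        tp += y
        fp += 1 - y
        # emit a point only at each unique threshold value
        if i == len(labels) - 1 or scores[i + 1] != scores[i]:
            points.append((fp / N, tp / P))           # (FP rate, TP rate)
    return points

# Illustrative scores and labels (not the table above)
print(roc_points([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 0]))
# [(0.0, 0.0), (0.0, 0.5), (0.5, 0.5), (0.5, 1.0), (1.0, 1.0)]
```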

16  How to Construct an ROC Curve
  [Table of TP, FP, TN, FN and TPR/FPR at each threshold (Threshold >= each unique P(+|A)), with the resulting ROC curve.]

17  Test of Significance
- Given two models:
  – Model M1: accuracy = 85%, tested on 30 instances
  – Model M2: accuracy = 75%, tested on 5000 instances
- Can we say M1 is better than M2?
  – How much confidence can we place on the accuracy of M1 and M2?
  – Can the difference in performance be explained as the result of random fluctuations in the test set?

18  Confidence Interval for Accuracy
- Prediction can be regarded as a Bernoulli trial
  – A Bernoulli trial has 2 possible outcomes
  – Possible outcomes for a prediction: correct or wrong
  – A collection of Bernoulli trials has a binomial distribution: x ~ Bin(N, p), where x is the number of correct predictions
  – e.g., toss a fair coin 50 times; how many heads would turn up? Expected number of heads = N × p = 50 × 0.5 = 25
- Given x (# of correct predictions), or equivalently acc = x/N, and N (# of test instances), can we predict p (the true accuracy of the model)?

19  Confidence Interval for Accuracy
- For large test sets (N > 30):
  – acc has a normal distribution with mean p and variance p(1-p)/N
  – P( -Z_{α/2} ≤ (acc - p) / sqrt(p(1-p)/N) ≤ Z_{1-α/2} ) = 1 - α
- Confidence interval for p:
  p = ( 2·N·acc + Z²_{α/2} ± Z_{α/2} · sqrt( Z²_{α/2} + 4·N·acc - 4·N·acc² ) ) / ( 2·(N + Z²_{α/2}) )

20  Confidence Interval for Accuracy
- Consider a model that produces an accuracy of 80% when evaluated on 100 test instances:
  – N = 100, acc = 0.8
  – Let 1-α = 0.95 (95% confidence)
  – From the standard normal table, Z_{α/2} = 1.96

  1-α:       0.99   0.98   0.95   0.90
  Z_{α/2}:   2.58   2.33   1.96   1.65

  N:         50     100    500    1000   5000
  p(lower):  0.670  0.711  0.763  0.774  0.789
  p(upper):  0.888  0.866  0.833  0.824  0.811
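
A quick check of the interval formula from slide 19 (numpy assumed); it reproduces the p(lower)/p(upper) rows of the table for acc = 0.8:

```python
import numpy as np

def accuracy_confidence_interval(acc, n, z=1.96):
    """Invert the normal approximation to get bounds on the true accuracy p."""
    center = 2 * n * acc + z**2
    spread = z * np.sqrt(z**2 + 4 * n * acc - 4 * n * acc**2)
    denom = 2 * (n + z**2)
    return (center - spread) / denom, (center + spread) / denom

for n in (50, 100, 500, 1000, 5000):
    lo, hi = accuracy_confidence_interval(acc=0.8, n=n)
    print(n, round(lo, 3), round(hi, 3))
# 50 0.67 0.888 / 100 0.711 0.866 / 500 0.763 0.833 / 1000 0.774 0.824 / 5000 0.789 0.811
```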

21  Comparing Performance of 2 Models
- Given two models, say M1 and M2, which is better?
  – M1 is tested on D1 (size = n1), with observed error rate e1
  – M2 is tested on D2 (size = n2), with observed error rate e2
  – Assume D1 and D2 are independent
  – If n1 and n2 are sufficiently large, then e1 and e2 are approximately normally distributed
  – Approximate each variance by σ̂_i² = e_i(1 - e_i) / n_i

22  Comparing Performance of 2 Models
- To test if the performance difference is statistically significant, let d = e1 - e2
  – d ~ N(d_t, σ_t), where d_t is the true difference
  – Since D1 and D2 are independent, their variances add up:
    σ̂_t² = e1(1 - e1)/n1 + e2(1 - e2)/n2
  – At the (1-α) confidence level, d_t = d ± Z_{α/2} · σ̂_t

23  An Illustrative Example
- Given: M1: n1 = 30, e1 = 0.15; M2: n2 = 5000, e2 = 0.25
- d = |e2 - e1| = 0.1 (2-sided test)
  σ̂_d² = 0.15(1 - 0.15)/30 + 0.25(1 - 0.25)/5000 ≈ 0.0043
- At the 95% confidence level, Z_{α/2} = 1.96:
  d_t = 0.100 ± 1.96 × sqrt(0.0043) = 0.100 ± 0.128
  => Interval contains 0 => difference may not be statistically significant
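
The same computation as a small Python sketch (standard library only); the printed interval straddles 0, matching the conclusion above:

```python
import math

def difference_interval(e1, n1, e2, n2, z=1.96):
    """Confidence interval for the true difference in error rates of two models
    tested on independent test sets (normal approximation)."""
    d = abs(e1 - e2)
    var = e1 * (1 - e1) / n1 + e2 * (1 - e2) / n2   # variances add up
    margin = z * math.sqrt(var)
    return d - margin, d + margin

lo, hi = difference_interval(e1=0.15, n1=30, e2=0.25, n2=5000)
print(round(lo, 3), round(hi, 3))   # -0.028 0.228 -> interval contains 0
```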

24  Comparing Performance of 2 Algorithms
- Each learning algorithm may produce k models:
  – L1 may produce M11, M12, …, M1k
  – L2 may produce M21, M22, …, M2k
- If the models are evaluated on the same test sets D1, D2, …, Dk (e.g., via cross-validation):
  – For each test set, compute d_j = e_1j - e_2j
  – d_j has mean d_t and variance σ_t²
  – Estimate: σ̂_t² = Σ_{j=1}^{k} (d_j - d̄)² / (k(k-1)), and d_t = d̄ ± t_{1-α, k-1} · σ̂_t
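
A sketch of this paired comparison (numpy and scipy assumed). The fold-by-fold error rates are hypothetical numbers, and using a two-sided critical value is an assumption about how t_{1-α, k-1} is meant here:

```python
import numpy as np
from scipy import stats

def paired_difference_interval(errors_1, errors_2, confidence=0.95):
    """Confidence interval for the true difference d_t between two algorithms,
    using the k paired error rates observed on the same folds/test sets."""
    d = np.asarray(errors_1, dtype=float) - np.asarray(errors_2, dtype=float)
    k = len(d)
    d_bar = d.mean()
    sigma_hat = np.sqrt(((d - d_bar) ** 2).sum() / (k * (k - 1)))   # std. error of d_bar
    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df=k - 1)        # two-sided critical value
    return d_bar - t_crit * sigma_hat, d_bar + t_crit * sigma_hat

# Hypothetical fold-by-fold error rates for two algorithms
e1 = [0.20, 0.22, 0.18, 0.25, 0.21]
e2 = [0.18, 0.21, 0.17, 0.22, 0.20]
lo, hi = paired_difference_interval(e1, e2)
print(round(float(lo), 4), round(float(hi), 4))   # 0.0049 0.0271 -> excludes 0 for these made-up numbers
```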

25  What is Cluster Analysis?
- Finding groups of objects such that the objects in a group are similar (or related) to one another and different from (or unrelated to) the objects in other groups
  – Based on information found in the data that describes the objects and their relationships
  – Also known as unsupervised classification
- Many applications
  – Understanding: group related documents for browsing, or find genes and proteins that have similar functionality
  – Summarization: reduce the size of large data sets

26  What is not Cluster Analysis?
- Supervised classification
  – Has class label information
- Simple segmentation
  – Dividing students into different registration groups alphabetically, by last name
- Results of a query
  – Groupings are the result of an external specification
- Graph partitioning
  – Some mutual relevance and synergy, but the areas are not identical

27  Notion of a Cluster is Ambiguous
  [Figure: the same set of initial points grouped into two clusters, four clusters, and six clusters.]

28  Types of Clusterings
- A clustering is a set of clusters
- One important distinction is between hierarchical and partitional sets of clusters
- Partitional clustering
  – A division of the data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset
- Hierarchical clustering
  – A set of nested clusters organized as a hierarchical tree

29  Partitional Clustering
  [Figure: original points and a partitional clustering of them.]

30  Hierarchical Clustering
  [Figures: a traditional hierarchical clustering with its traditional dendrogram, and a non-traditional hierarchical clustering with its non-traditional dendrogram.]

31  Other Distinctions Between Sets of Clusters
- Exclusive versus non-exclusive
  – In non-exclusive clusterings, points may belong to multiple clusters
  – Can represent multiple classes or 'border' points
- Fuzzy versus non-fuzzy
  – In fuzzy clusterings, a point belongs to every cluster with some weight between 0 and 1
  – Weights must sum to 1
  – Probabilistic clustering has similar characteristics
- Partial versus complete
  – In some cases, we only want to cluster some of the data

32  Types of Clusters: Well-Separated
- Well-separated clusters:
  – A cluster is a set of points such that any point in a cluster is closer (or more similar) to every other point in the cluster than to any point not in the cluster

33  Types of Clusters: Center-Based
- Center-based clusters:
  – A cluster is a set of objects such that an object in a cluster is closer (more similar) to the "center" of its cluster than to the center of any other cluster
  – The center of a cluster is often a centroid, the average of all the points in the cluster, or a medoid, the most "representative" point of a cluster

34  Types of Clusters: Contiguity-Based
- Contiguous clusters (nearest neighbor or transitive):
  – A cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster

35  Types of Clusters: Density-Based
- Density-based clusters:
  – A cluster is a dense region of points, separated from other regions of high density by regions of low density
  – Used when the clusters are irregular or intertwined, and when noise and outliers are present
  – The three curves don't form clusters since they fade into the noise, as does the bridge between the two small circular clusters

36  Similarity and Dissimilarity
- Similarity
  – Numerical measure of how alike two data objects are
  – Higher when objects are more alike
  – Often falls in the range [0,1]
- Dissimilarity
  – Numerical measure of how different two data objects are
  – Lower when objects are more alike
  – Minimum dissimilarity is often 0
  – Upper limit varies
- Proximity refers to either a similarity or a dissimilarity

37  Summary of Similarity/Dissimilarity for Simple Attributes
  [Table: similarity and dissimilarity measures for simple attribute types; p and q are the attribute values for two data objects.]

38  Euclidean Distance
- Euclidean distance:
  dist(p, q) = sqrt( Σ_{k=1}^{n} (p_k - q_k)² )
  where n is the number of dimensions (attributes) and p_k and q_k are, respectively, the k-th attributes (components) of data objects p and q
- Standardization is necessary if scales differ

39  Euclidean Distance
  [Figure and table: example points and their distance matrix.]

40  Minkowski Distance
- Minkowski distance is a generalization of Euclidean distance:
  dist(p, q) = ( Σ_{k=1}^{n} |p_k - q_k|^r )^(1/r)
  where r is a parameter, n is the number of dimensions (attributes), and p_k and q_k are, respectively, the k-th attributes (components) of data objects p and q

41  Minkowski Distance: Examples
- r = 1: City block (Manhattan, taxicab, L1 norm) distance
  – A common example is the Hamming distance, which is just the number of bits that are different between two binary vectors
- r = 2: Euclidean distance
- r → ∞: "supremum" (L_max norm, L_∞ norm) distance
  – This is the maximum difference between any component of the vectors
- Do not confuse r with n: all of these distances are defined for any number of dimensions
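
A small sketch of the Minkowski distance for these three values of r (numpy assumed); the two sample points are illustrative only:

```python
import numpy as np

def minkowski(p, q, r):
    """Minkowski distance between vectors p and q; r=1 city block, r=2 Euclidean."""
    diff = np.abs(np.asarray(p, dtype=float) - np.asarray(q, dtype=float))
    if np.isinf(r):
        return diff.max()                 # supremum (L_inf) distance
    return (diff ** r).sum() ** (1.0 / r)

p, q = [0, 2], [3, 6]                     # illustrative points
print(minkowski(p, q, 1))                 # 7.0  (city block)
print(minkowski(p, q, 2))                 # 5.0  (Euclidean)
print(minkowski(p, q, np.inf))            # 4.0  (supremum)
```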

42  Minkowski Distance
  [Table: distance matrices for the example points.]

43  Common Properties of a Distance
- Distances, such as the Euclidean distance, have some well-known properties:
  1. d(p, q) ≥ 0 for all p and q, and d(p, q) = 0 only if p = q (positive definiteness)
  2. d(p, q) = d(q, p) for all p and q (symmetry)
  3. d(p, r) ≤ d(p, q) + d(q, r) for all points p, q, and r (triangle inequality)
  where d(p, q) is the distance (dissimilarity) between points (data objects) p and q
- A distance that satisfies these properties is a metric

44  Common Properties of a Similarity
- Similarities also have some well-known properties:
  1. s(p, q) = 1 (or maximum similarity) only if p = q
  2. s(p, q) = s(q, p) for all p and q (symmetry)
  where s(p, q) is the similarity between points (data objects) p and q

45  Similarity Between Binary Vectors
- A common situation is that objects p and q have only binary attributes
- Compute similarities using the following quantities:
  – M01 = the number of attributes where p was 0 and q was 1
  – M10 = the number of attributes where p was 1 and q was 0
  – M00 = the number of attributes where p was 0 and q was 0
  – M11 = the number of attributes where p was 1 and q was 1
- Simple Matching and Jaccard coefficients:
  SMC = number of matches / number of attributes = (M11 + M00) / (M01 + M10 + M11 + M00)
  J = number of 1-1 matches / number of not-both-zero attribute values = M11 / (M01 + M10 + M11)

46  SMC versus Jaccard: Example
  p = 1 0 0 0 0 0 0 0 0 0
  q = 0 0 0 0 0 0 1 0 0 1

  M01 = 2 (the number of attributes where p was 0 and q was 1)
  M10 = 1 (the number of attributes where p was 1 and q was 0)
  M00 = 7 (the number of attributes where p was 0 and q was 0)
  M11 = 0 (the number of attributes where p was 1 and q was 1)

  SMC = (M11 + M00) / (M01 + M10 + M11 + M00) = (0 + 7) / (2 + 1 + 0 + 7) = 0.7
  J = M11 / (M01 + M10 + M11) = 0 / (2 + 1 + 0) = 0
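
The same example as a plain-Python sketch; the guard returning 1.0 for two all-zero vectors is an implementation choice, not from the slide:

```python
def smc_and_jaccard(p, q):
    """Simple Matching Coefficient and Jaccard coefficient for binary vectors."""
    m01 = sum(1 for a, b in zip(p, q) if a == 0 and b == 1)
    m10 = sum(1 for a, b in zip(p, q) if a == 1 and b == 0)
    m00 = sum(1 for a, b in zip(p, q) if a == 0 and b == 0)
    m11 = sum(1 for a, b in zip(p, q) if a == 1 and b == 1)
    smc = (m11 + m00) / (m01 + m10 + m11 + m00)
    jaccard = m11 / (m01 + m10 + m11) if (m01 + m10 + m11) else 1.0
    return smc, jaccard

p = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
q = [0, 0, 0, 0, 0, 0, 1, 0, 0, 1]
print(smc_and_jaccard(p, q))   # (0.7, 0.0)
```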

47  Cosine Similarity
- If d1 and d2 are two document vectors, then
  cos(d1, d2) = (d1 • d2) / (||d1|| ||d2||),
  where • indicates the vector dot product and ||d|| is the length of vector d
- Example:
  d1 = 3 2 0 5 0 0 0 2 0 0
  d2 = 1 0 0 0 0 0 0 1 0 2
  d1 • d2 = 3*1 + 2*0 + 0*0 + 5*0 + 0*0 + 0*0 + 0*0 + 2*1 + 0*0 + 0*2 = 5
  ||d1|| = (3*3 + 2*2 + 0*0 + 5*5 + 0*0 + 0*0 + 0*0 + 2*2 + 0*0 + 0*0)^0.5 = (42)^0.5 ≈ 6.481
  ||d2|| = (1*1 + 0*0 + 0*0 + 0*0 + 0*0 + 0*0 + 0*0 + 1*1 + 0*0 + 2*2)^0.5 = (6)^0.5 ≈ 2.449
  cos(d1, d2) ≈ 5 / (6.481 × 2.449) ≈ 0.315
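
A quick numeric check of this example (numpy assumed), confirming ||d2|| = sqrt(6) ≈ 2.449 and cos(d1, d2) ≈ 0.315:

```python
import numpy as np

d1 = np.array([3, 2, 0, 5, 0, 0, 0, 2, 0, 0], dtype=float)
d2 = np.array([1, 0, 0, 0, 0, 0, 0, 1, 0, 2], dtype=float)

dot = d1.dot(d2)
len1, len2 = np.linalg.norm(d1), np.linalg.norm(d2)   # vector lengths
print(dot, round(len1, 3), round(len2, 3), round(dot / (len1 * len2), 3))
# 5.0 6.481 2.449 0.315
```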

48  Extended Jaccard Coefficient (Tanimoto)
- A variation of Jaccard for continuous or count attributes:
  T(p, q) = (p • q) / (||p||² + ||q||² - p • q)
- Reduces to the Jaccard coefficient for binary attributes

49  Correlation
- Correlation measures the linear relationship between objects
- To compute correlation, we standardize the data objects p and q and then take their dot product:
  corr(p, q) = covariance(p, q) / (std(p) · std(q)) = s_pq / (s_p · s_q),
  where s_pq = (1/(n-1)) Σ_{k=1}^{n} (p_k - p̄)(q_k - q̄) and s_p, s_q are the sample standard deviations of p and q
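
A small sketch of the standardize-then-dot-product view (numpy assumed); the final division by n-1 is what makes the result match the Pearson correlation and land in [-1, 1]:

```python
import numpy as np

def correlation(p, q):
    """Pearson correlation: standardize both objects, then take a scaled dot product."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    p_std = (p - p.mean()) / p.std(ddof=1)     # standardized p
    q_std = (q - q.mean()) / q.std(ddof=1)     # standardized q
    return p_std.dot(q_std) / (len(p) - 1)     # scale so the result lies in [-1, 1]

print(correlation([1, 2, 3, 4], [2, 4, 6, 8]))     # ≈ 1.0  (perfect positive)
print(correlation([1, 2, 3, 4], [8, 6, 4, 2]))     # ≈ -1.0 (perfect negative)
```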

50  Visually Evaluating Correlation
  [Figure: scatter plots showing correlations ranging from -1 to 1.]

51  Mahalanobis Distance
- mahalanobis(p, q) = (p - q) Σ⁻¹ (p - q)ᵀ, where Σ is the covariance matrix of the input data
- For the red points in the figure, the Euclidean distance is 14.7 and the Mahalanobis distance is 6
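
A sketch of the formula above (numpy assumed); the generated two-attribute data set is illustrative, chosen so the attributes are strongly correlated and the effect of Σ⁻¹ is visible:

```python
import numpy as np

def mahalanobis(p, q, data):
    """Mahalanobis distance (in the (p - q) Σ⁻¹ (p - q)ᵀ form above) given the data's covariance."""
    cov = np.cov(np.asarray(data, dtype=float), rowvar=False)   # covariance matrix Σ of the input data
    diff = np.asarray(p, dtype=float) - np.asarray(q, dtype=float)
    return diff @ np.linalg.inv(cov) @ diff

# Illustrative data with strongly correlated attributes
rng = np.random.default_rng(0)
x = rng.normal(size=500)
data = np.column_stack([x, x + 0.1 * rng.normal(size=500)])
print(mahalanobis([0, 0], [1, 1], data))    # small: along the direction of high variance
print(mahalanobis([0, 0], [1, -1], data))   # large: against the correlation structure
```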

52  A General Approach for Combining Similarities
- Sometimes attributes are of many different types, but an overall similarity is needed
- One common approach: compute a similarity s_k in [0,1] for the k-th attribute, set an indicator δ_k = 0 if the k-th attribute should be ignored for this pair (e.g., a missing value, or an asymmetric attribute where both objects are 0) and δ_k = 1 otherwise, then combine:
  similarity(p, q) = Σ_{k=1}^{n} δ_k s_k / Σ_{k=1}^{n} δ_k

53  Using Weights to Combine Similarities
- We may not want to treat all attributes the same
  – Use weights w_k which are between 0 and 1 and sum to 1
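
A small sketch combining the ideas of the last two slides (numpy assumed): per-attribute similarities s_k, indicators δ_k, and optional weights w_k. Normalizing by the sum of the active weights is an implementation choice here, not a formula from the slides:

```python
import numpy as np

def combined_similarity(similarities, indicators, weights=None):
    """Combine per-attribute similarities s_k in [0,1] into an overall similarity.

    indicators: delta_k = 0 for attributes to ignore (e.g. missing values), 1 otherwise.
    weights: optional w_k in [0,1] summing to 1; equal weights are used if omitted.
    """
    s = np.asarray(similarities, dtype=float)
    d = np.asarray(indicators, dtype=float)
    w = np.full(len(s), 1.0 / len(s)) if weights is None else np.asarray(weights, dtype=float)
    return (w * d * s).sum() / (w * d).sum()   # weighted average over active attributes

# Three attributes; the second is ignored (delta = 0)
print(combined_similarity([0.9, 0.2, 0.5], [1, 0, 1], weights=[0.5, 0.3, 0.2]))   # ≈ 0.786
```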

