1
ROC
A technique for visualizing, organizing, and selecting classifiers based on their performance, used in:
1. Medical decision making
2. Machine learning
3. Data mining research communities
2
ROC Confusion matrix
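As a concrete illustration (not from the slides), here is a minimal Python sketch that builds the 2×2 confusion matrix from true and predicted labels and derives the TP rate and FP rate used as the ROC axes; the function name and example labels are hypothetical.
```python
# Minimal sketch: build a 2x2 confusion matrix from true and predicted
# labels, then derive TP rate and FP rate (the two ROC axes).
def confusion_matrix(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    return tp, fp, fn, tn

# Hypothetical labels, just for illustration.
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
tp, fp, fn, tn = confusion_matrix(y_true, y_pred)
tpr = tp / (tp + fn)   # true positive rate (benefit axis)
fpr = fp / (fp + tn)   # false positive rate (cost axis)
print(tp, fp, fn, tn, tpr, fpr)
```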
3
ROC space
The Y axis is the TP rate (benefits) and the X axis is the FP rate (costs). Any classifier on the diagonal may be said to have no information about the class.
4
ROC curve
A discrete classifier (decision trees, rule sets) outputs only Y or N and therefore produces a single point in ROC space.
A scoring classifier (a Naive Bayes classifier, a neural network) outputs a probability or score; each threshold on that score produces a different point.
Varying the threshold from −∞ to +∞ traces a ROC curve.
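A minimal sketch of the threshold sweep just described: scores are sorted in decreasing order and the threshold is lowered one instance at a time, emitting one (FP rate, TP rate) point per step. The function name, scores, and labels are hypothetical, and tied scores are not handled specially.
```python
# Minimal sketch: trace a ROC curve from a scoring classifier's outputs by
# sorting scores and lowering the threshold one instance at a time
# (equivalent to sweeping the threshold from +inf down to -inf).
def roc_points(scores, labels):
    pos = sum(labels)
    neg = len(labels) - pos
    pts = [(0.0, 0.0)]          # threshold = +inf: everything classified N
    tp = fp = 0
    for score, label in sorted(zip(scores, labels), reverse=True):
        if label == 1:
            tp += 1
        else:
            fp += 1
        pts.append((fp / neg, tp / pos))   # (FP rate, TP rate)
    return pts                  # ends at (1, 1): threshold = -inf

# Hypothetical scores and labels, just for illustration.
scores = [0.9, 0.8, 0.7, 0.55, 0.5, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,    0,   1,   0,   0  ]
for fpr, tpr in roc_points(scores, labels):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```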
5
ROC curve
6
Threshold = +∞: no instance is classified positive, so the curve starts at the point (0, 0).
7
ROC curve
ROC curves have an attractive property: they are insensitive to changes in class distribution.
8
ROC curve
10
AUC
Definition: the area under an ROC curve.
The AUC has important statistical properties:
1. It is equivalent to the Wilcoxon test of ranks: it equals the probability that the classifier ranks a randomly chosen positive instance higher than a randomly chosen negative one.
2. It is also closely related to the Gini coefficient: Gini + 1 = 2 × AUC.
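A small sketch of the rank interpretation behind property 1: AUC is computed as the fraction of positive-negative pairs in which the positive instance is scored higher (ties count one half), and Gini is recovered via Gini + 1 = 2 × AUC. The data and function name are hypothetical.
```python
# Minimal sketch of the Wilcoxon/rank interpretation of AUC: the probability
# that a randomly chosen positive instance scores higher than a randomly
# chosen negative one (ties counted as 1/2).
def auc_by_ranks(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores and labels, just for illustration.
scores = [0.9, 0.8, 0.7, 0.55, 0.5, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,    0,   1,   0,   0  ]
auc = auc_by_ranks(scores, labels)
gini = 2 * auc - 1           # Gini + 1 = 2 * AUC
print(auc, gini)
```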
11
Averaging ROC curves
ROC curves obtained from several test sets (for example, cross-validation folds) can be averaged, with error bars showing the variation across the individual curves.
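One common way to average ROC curves with error bars is vertical averaging; the sketch below assumes that method (the slides do not specify one). A grid of FP rates is fixed, each curve's TP rate is interpolated there, and the mean and spread are reported. Curves, helper names, and numbers are hypothetical.
```python
# Minimal sketch of vertical averaging: fix FP rates, interpolate each
# curve's TP rate at those points, report mean and spread (error bars).
def tpr_at(curve, fpr):
    # curve: list of (fpr, tpr) points sorted by fpr; linear interpolation.
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if x0 <= fpr <= x1:
            if x1 == x0:
                return max(y0, y1)
            return y0 + (y1 - y0) * (fpr - x0) / (x1 - x0)
    return curve[-1][1]

def vertical_average(curves, grid):
    averaged = []
    for fpr in grid:
        ys = [tpr_at(c, fpr) for c in curves]
        mean = sum(ys) / len(ys)
        std = (sum((y - mean) ** 2 for y in ys) / len(ys)) ** 0.5  # error bar
        averaged.append((fpr, mean, std))
    return averaged

curves = [  # two hypothetical ROC curves from different test folds
    [(0.0, 0.0), (0.1, 0.5), (0.4, 0.8), (1.0, 1.0)],
    [(0.0, 0.0), (0.2, 0.4), (0.5, 0.9), (1.0, 1.0)],
]
for fpr, mean, std in vertical_average(curves, [0.0, 0.25, 0.5, 0.75, 1.0]):
    print(f"FPR={fpr:.2f}  mean TPR={mean:.2f} +/- {std:.2f}")
```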
12
Decision problems with more than two classes
Multi-class ROC graphs
Multi-class AUC
13
Iso-performance line
Iso-performance lines make it possible to take into account:
1. class skew
2. error costs
Two points in ROC space have the same expected cost if they lie on a line with slope
m = (c(Y, n) × p(n)) / (c(N, p) × p(p))
where c(Y, n) is the cost of a false positive, c(N, p) is the cost of a false negative, and p(p), p(n) are the class priors. This equation defines the slope of an iso-performance line.
Conclusion: lines "more northwest" (having a larger TP-intercept) are better because they correspond to classifiers with lower expected cost.
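A small sketch of how the slope formula above can be used: compute the slope from class priors and error costs, then pick the candidate operating point whose iso-performance line has the largest TP-intercept (lowest expected cost). All names, costs, priors, and points below are hypothetical.
```python
# Minimal sketch: iso-performance slope and "most northwest" point selection.
def iso_slope(p_pos, cost_fn, cost_fp):
    p_neg = 1.0 - p_pos
    return (cost_fp * p_neg) / (cost_fn * p_pos)

def best_operating_point(points, slope):
    # Intercept of the line TPR = slope * FPR + b through each point;
    # the most "northwest" point maximizes b.
    return max(points, key=lambda pt: pt[1] - slope * pt[0])

classifiers = {"A": (0.1, 0.6), "B": (0.3, 0.85), "C": (0.6, 0.95)}
slope = iso_slope(p_pos=0.2, cost_fn=1.0, cost_fp=1.0)   # skewed classes
best = best_operating_point(list(classifiers.values()), slope)
name = next(k for k, v in classifiers.items() if v == best)
print(f"slope={slope:.2f}, best classifier: {name} at {best}")
```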
14
Combining classifiers
15
Conditional combinations of classifiers to remove concavities
Concavities in a ROC curve may be due to:
1. idiosyncrasies in learning
2. small test set effects
16
Conditional combinations of classifiers to remove concavities
17
Logically combining classifiers
1. c3 = c1 ∧ c2
2. c4 = c1 ∨ c2
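A minimal sketch of these logical combinations, using two hypothetical threshold classifiers over a single numeric feature: the conjunction c3 fires only when both classifiers fire (tending to lower the FP rate), while the disjunction c4 fires when either does (tending to raise the TP rate).
```python
# Minimal sketch: logical combination of two Boolean classifier decisions.
def c3(x, c1, c2):
    return c1(x) and c2(x)   # c3 = c1 AND c2

def c4(x, c1, c2):
    return c1(x) or c2(x)    # c4 = c1 OR c2

# Hypothetical threshold classifiers over a single numeric feature.
c1 = lambda x: x > 0.3
c2 = lambda x: x > 0.7
for x in (0.2, 0.5, 0.9):
    print(x, c3(x, c1, c2), c4(x, c1, c2))
```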