1
Quiz 1 review
2
Evaluating Classifiers
Reading: T. Fawcett paper (link on class website), Sections 1-4
Optional reading: Davis and Goadrich paper (link on class website)
3
Evaluating classification algorithms

“Confusion matrix” for a given class c:

                                    Predicted (or “classified”)
                                    True (in class c)    False (not in class c)
Actual  True (in class c)           TruePositive         FalseNegative
        False (not in class c)      FalsePositive        TrueNegative
4
Accuracy: fraction of correct answers out of all problems:
    Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision: fraction of true positives out of all predicted positives:
    Precision = TP / (TP + FP)
Recall: fraction of true positives out of all actual positives:
    Recall = TP / (TP + FN)
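A minimal sketch (not from the slides) of the three metrics computed directly from the four confusion-matrix counts, checked against the counts that appear in Example 1 later in the deck:

```python
def accuracy(tp, fp, fn, tn):
    # Fraction of correct answers out of all problems.
    return (tp + tn) / (tp + fp + fn + tn)

def precision(tp, fp):
    # Fraction of true positives out of all predicted positives.
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of true positives out of all actual positives.
    return tp / (tp + fn)

# Counts from Example 1 below (threshold 0): TP=40, FN=10, FP=30, TN=120.
print(accuracy(tp=40, fp=30, fn=10, tn=120))  # 0.8
print(precision(tp=40, fp=30))                # 0.571...
print(recall(tp=40, fn=10))                   # 0.8
```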
5
Trading off precision against recall

[Figure: a fixed linear classifier with inputs x1 … x64, weights w1 … w64, bias w0, and output o.]

How can we improve precision (at the expense of recall) with a fixed classifier?
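The usual answer is to raise the decision threshold on the unit's output o, which the next two slides explore. A sketch of the idea, where the weights and inputs are hypothetical placeholders rather than the slide's trained digit classifier:

```python
# Sketch of threshold adjustment on a fixed linear classifier.
# w0, w, and x stand in for the slide's diagram
# (bias w0, weights w1..w64, inputs x1..x64, output o).

def score(w0, w, x):
    # Raw output o = w0 + sum_i w_i * x_i; the classifier itself is fixed.
    return w0 + sum(wi * xi for wi, xi in zip(w, x))

def classify(w0, w, x, threshold=0.0):
    # Predict the positive class only when o clears the threshold.
    # Raising the threshold keeps only the most confident positives,
    # so precision tends to rise while recall falls.
    return score(w0, w, x) > threshold
```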
6
Example 1: Assume 200 sample digits, of which 50 have class “8”.

Old, with threshold of 0:

                        Predicted
                        True (“8”)    False (“0”)
Actual  True (“8”)         40             10
        False (“0”)        30            120

New, with threshold of -∞:

                        Predicted
                        True (“8”)    False (“0”)
Actual  True (“8”)          ?              ?
        False (“0”)         ?              ?

Precision? Recall?
7
Example 2: Assume 200 sample digits, of which 50 have class “8”.

Old, with threshold of 0:

                        Predicted
                        True (“8”)    False (“0”)
Actual  True (“8”)         40             10
        False (“0”)        30            120

New, with threshold of +∞:

                        Predicted
                        True (“8”)    False (“0”)
Actual  True (“8”)          ?              ?
        False (“0”)         ?              ?

Precision? Recall?
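One way to check your answers for the two extremes, a sketch assuming the usual convention that a threshold of -∞ predicts every digit as “8” and +∞ predicts none:

```python
# The 200-digit example: 50 actual "8"s, 150 actual "0"s.
n_pos, n_neg = 50, 150

# Example 1, threshold -inf: everything predicted "8".
tp, fn, fp, tn = n_pos, 0, n_neg, 0
print(tp / (tp + fp))   # precision = 50/200 = 0.25
print(tp / (tp + fn))   # recall    = 50/50  = 1.0

# Example 2, threshold +inf: nothing predicted "8".
tp, fn, fp, tn = 0, n_pos, 0, n_neg
print(tp / (tp + fn))   # recall = 0/50 = 0.0
# precision = 0/0 here: undefined (often taken as 1 by convention).
```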
8
Creating a Precision/Recall Curve

Results of classifier:

Threshold    Accuracy    Precision    Recall
   .9
   .8
   .7
   .6
   .5
   .4
   .3
   .2
   .1
   -∞
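A sketch of how such a table becomes a curve: sweep the threshold over the classifier's scores and recompute precision and recall at each setting. Here `scores`, `labels`, and `thresholds` are hypothetical placeholders, not data from the slides:

```python
# Build precision/recall points by sweeping the decision threshold.
# `scores` are classifier outputs; `labels` are booleans (True = positive).

def pr_curve(scores, labels, thresholds):
    points = []
    for t in thresholds:
        preds = [s > t for s in scores]
        tp = sum(p and y for p, y in zip(preds, labels))
        fp = sum(p and not y for p, y in zip(preds, labels))
        fn = sum(not p and y for p, y in zip(preds, labels))
        # At very high thresholds nothing is predicted positive;
        # take precision = 1 there by convention.
        prec = tp / (tp + fp) if tp + fp > 0 else 1.0
        rec = tp / (tp + fn)
        points.append((t, prec, rec))
    return points

# Thresholds as in the table above, ending with -inf (predict everything).
thresholds = [.9, .8, .7, .6, .5, .4, .3, .2, .1, float("-inf")]
```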
9
[ROC curve figure: the y-axis is the true positive rate (“sensitivity”); the x-axis is the false positive rate (1 - “specificity”).]
10
11
Creating a ROC Curve

Results of classifier:

Threshold    Accuracy    TPR    FPR
   .9
   .8
   .7
   .6
   .5
   .4
   .3
   .2
   .1
   -∞
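The ROC sweep is the same loop with different axes: TPR = TP / (TP + FN) (the “sensitivity” of the previous slide) and FPR = FP / (FP + TN) (i.e., 1 - specificity). Again a sketch with placeholder scores and labels:

```python
# Build ROC points by sweeping the decision threshold.
# TPR = TP/(TP+FN), FPR = FP/(FP+TN).

def roc_curve(scores, labels, thresholds):
    points = []
    for t in thresholds:
        preds = [s > t for s in scores]
        tp = sum(p and y for p, y in zip(preds, labels))
        fp = sum(p and not y for p, y in zip(preds, labels))
        fn = sum(not p and y for p, y in zip(preds, labels))
        tn = sum(not p and not y for p, y in zip(preds, labels))
        points.append((t, tp / (tp + fn), fp / (fp + tn)))
    return points
```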
12
Precision/Recall versus ROC curves

(The optional Davis and Goadrich reading discusses when the two disagree: with heavily imbalanced classes, the precision/recall curve is often the more informative view.)

Source: http://blog.crowdflower.com/2008/06/aggregate-turker-judgments-threshold-calibration/
13
14
15