1
Lecture 22: Evaluation April 24, 2010
2
Last Time Spectral Clustering
3
Today: Evaluation Measures – Accuracy, Significance Testing, F-Measure, Error Types, ROC Curves, Equal Error Rate, AIC/BIC
4
How do you know that you have a good classifier?
Is a feature contributing to overall performance? Is classifier A better than classifier B? Internal evaluation: measure the performance of the classifier itself. External evaluation: measure the performance on a downstream task.
5
Accuracy Easily the most common and intuitive measure of classification performance.
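For concreteness, accuracy is simply the fraction of correctly classified examples; in terms of the contingency-table counts used later in the lecture:

    $$ \text{Accuracy} = \frac{\text{correct}}{\text{total}} = \frac{TP + TN}{TP + TN + FP + FN} $$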
6
Significance Testing: Say I have two classifiers.
A = 50% accuracy, B = 75% accuracy. B is better, right?
7
Significance Testing: Say I have another two classifiers.
A = 50% accuracy, B = 50.5% accuracy. Is B better?
8
Basic Evaluation: Training data – used to identify model parameters.
Testing data – used for evaluation. Optionally: development / tuning data – used to identify model hyperparameters. With only a single train/test split it is difficult to get significance or confidence values.
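As a rough illustration (not from the original slides), here is how such a split might be produced with scikit-learn's train_test_split; the 80/10/10 proportions, the toy data, and all variable names are placeholder choices:

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Toy data: 100 examples, 5 features, binary labels (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = rng.integers(0, 2, size=100)

    # 80% training data, then split the remainder evenly into development and testing data.
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.2, random_state=0)
    X_dev, X_test, y_dev, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)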
9
Cross Validation: Identify n “folds” of the available data.
Train on n-1 folds, test on the remaining fold. In the extreme (n = N) this is known as “leave-one-out” cross validation. n-fold cross validation (xval) gives n samples of the performance of the classifier.
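A minimal sketch of n-fold cross validation with scikit-learn's KFold and cross_val_score (the logistic-regression classifier and the toy data are stand-ins, not part of the lecture):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold, cross_val_score

    # Toy data and a stand-in classifier (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = rng.integers(0, 2, size=100)
    clf = LogisticRegression()

    # n-fold cross validation: n accuracy samples, one per held-out fold.
    folds = KFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X, y, cv=folds, scoring="accuracy")
    print(scores.mean(), scores.std())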
10
Significance Testing: Is the performance of two classifiers different with statistical significance? Means testing: if we have two samples of classifier performance (accuracy), we want to determine whether they are drawn from the same distribution (no difference) or from two different distributions.
11
T-test: One-sample t-test; independent (two-sample) t-test with unequal variances and sample sizes (Welch's t-test).
Once you have a t-value, look up the significance level in a table, keyed on the t-value and the degrees of freedom.
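For reference, the independent t-statistic for unequal variances and sample sizes (Welch's form) is

    $$ t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}} $$

where \bar{x}_i, s_i^2, and n_i are the sample mean, variance, and size of each classifier's score sample.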
12
Significance Testing: Run cross-validation to get n samples of classifier performance. Use this distribution to compare against either a known (published) level of performance (one-sample t-test) or another distribution of performance (two-sample t-test). If at all possible, results should include information about the variance of classifier performance.
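A brief sketch of both comparisons using SciPy's ttest_1samp and ttest_ind (the accuracy arrays and the 0.75 "published" figure are made-up placeholders):

    import numpy as np
    from scipy import stats

    # Hypothetical cross-validation accuracies for two classifiers (placeholder numbers).
    scores_a = np.array([0.50, 0.52, 0.48, 0.51, 0.49])
    scores_b = np.array([0.55, 0.53, 0.56, 0.52, 0.54])

    # One-sample t-test: compare classifier A against a known (published) accuracy of 0.75.
    t_one, p_one = stats.ttest_1samp(scores_a, popmean=0.75)

    # Two-sample t-test with unequal variances (Welch): compare A against B.
    t_two, p_two = stats.ttest_ind(scores_a, scores_b, equal_var=False)
    print(p_one, p_two)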
13
Significance Testing Caveat – including more samples of the classifier performance can artificially inflate the significance measure. If the sample mean x̄ and standard deviation s stay constant (i.e., the sample already reflects the population mean and variance), then raising n will increase t. If these additional samples are real, this is fine. However, cross-validation fold assignment is often not truly random, so subsequent xval runs only resample the same information.
14
Confidence Bars: Variance information can be included in plots of classifier performance to ease visualization. Should you plot the standard deviation, the standard error, or a confidence interval?
15
Confidence Bars: It is most important to be clear about what is plotted.
The 95% confidence interval has the clearest interpretation.
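As a reminder (a standard formula, not specific to these slides), a 95% confidence interval for the mean of n performance samples with sample mean \bar{x} and sample standard deviation s is

    $$ \bar{x} \pm t_{0.975,\,n-1} \cdot \frac{s}{\sqrt{n}} $$

where the Student t critical value approaches 1.96 as n grows.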
16
Baseline Classifiers: majority class baseline and random baseline.
Majority class baseline – every data point is classified as the class that is most frequently represented in the training data. Random baseline – randomly assign one of the classes to each data point, either with an even (uniform) distribution or with the training class distribution.
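These baselines correspond to scikit-learn's DummyClassifier strategies; a small sketch (the toy data are placeholders):

    from sklearn.dummy import DummyClassifier

    # Toy training data (illustrative only): class 0 is the majority class.
    X_train = [[0], [1], [2], [3]]
    y_train = [0, 0, 0, 1]

    # Majority class baseline: always predict the most frequent training class.
    majority = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

    # Random baselines: uniform over classes, or matching the training class distribution.
    uniform = DummyClassifier(strategy="uniform", random_state=0).fit(X_train, y_train)
    stratified = DummyClassifier(strategy="stratified", random_state=0).fit(X_train, y_train)

    print(majority.predict([[5]]))  # -> [0], the majority class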
17
Problems with accuracy
Contingency table (rows: hypothesized values, columns: true values):
                   True: Positive      True: Negative
  Hyp: Positive    True Positive       False Positive
  Hyp: Negative    False Negative      True Negative
18
Problems with accuracy
Information Retrieval Example: find the 10 documents related to a query in a set of 110 documents. True values: 10 positive (relevant) and 100 negative (non-relevant) documents.
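One way to read this example (assuming the system simply labels every document as non-relevant; that assumption is mine, not stated on the slide): the system gets all 100 negatives right and all 10 positives wrong, yet

    $$ \text{Accuracy} = \frac{100}{110} \approx 91\% $$

so a high accuracy can hide the fact that none of the relevant documents were found.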
19
Problems with accuracy
Precision: how many hypothesized events were true events. Recall: how many of the true events were identified. F-Measure: the harmonic mean of precision and recall. [Contingency table: 10 relevant / 100 non-relevant documents]
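In terms of the contingency-table counts, the standard definitions are:

    $$ P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2PR}{P + R} $$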
20
F-Measure F-measure can be weighted to favor Precision or Recall
β > 1 favors recall; β < 1 favors precision. [Contingency table: 10 relevant / 100 non-relevant documents]
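The weighted form is the standard F-beta measure, where P is precision and R is recall:

    $$ F_\beta = \frac{(1 + \beta^2)\, P\, R}{\beta^2 P + R} $$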
21
F-Measure [worked example: contingency table with counts 1, 9, 100]
22
F-Measure [worked example: contingency table with counts 10, 50]
23
F-Measure [worked example: contingency table with counts 9, 1, 99]
24
F-Measure Accuracy is weighted towards majority class performance.
F-measure is useful for measuring the performance on minority classes.
25
Types of Errors: False Positives and False Negatives.
False positives – the system predicted TRUE but the value was FALSE; aka “false alarms” or Type I errors. False negatives – the system predicted FALSE but the value was TRUE; aka “misses” or Type II errors.
26
ROC Curves: It is common to plot classifier performance at a variety of settings or thresholds. Receiver Operating Characteristic (ROC) curves plot the true positive rate against the false positive rate. Overall performance is summarized by the Area Under the Curve (AUC).
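A minimal sketch of computing an ROC curve and its AUC with scikit-learn (roc_curve and roc_auc_score are standard scikit-learn functions; the labels and scores are placeholder values):

    from sklearn.metrics import roc_curve, roc_auc_score

    # Placeholder true labels and classifier scores (e.g., predicted probabilities).
    y_true = [0, 0, 1, 1, 0, 1, 0, 1]
    y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9]

    # The curve is traced out over all score thresholds; AUC summarizes the whole curve.
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    auc = roc_auc_score(y_true, y_score)
    print(auc)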
27
ROC Curves: Equal Error Rate (EER) is commonly reported.
The EER is the operating point at which the false positive rate equals the false negative rate; lower EER indicates better performance. Curves provide more detail about performance than a single summary number. [Figure: Gauvain et al. 1995]
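Under the definition above, the EER can be approximated from the ROC points as the place where the false positive and false negative rates cross; a sketch under that assumption (placeholder labels and scores again):

    import numpy as np
    from sklearn.metrics import roc_curve

    y_true = [0, 0, 1, 1, 0, 1, 0, 1]                     # placeholder labels
    y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9]   # placeholder scores

    fpr, tpr, _ = roc_curve(y_true, y_score)
    fnr = 1 - tpr
    # Take the threshold where FPR and FNR are closest; average them as the EER estimate.
    idx = np.argmin(np.abs(fpr - fnr))
    eer = (fpr[idx] + fnr[idx]) / 2
    print(eer)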
28
Goodness of Fit Another view of model performance.
Measure the likelihood the model assigns to unseen data. However, we’ve seen that model likelihood tends to improve simply by adding parameters. Two information criterion measures include a cost term for the number of parameters in the model.
29
Akaike Information Criterion
The Akaike Information Criterion (AIC) is based on entropy. The best model has the lowest AIC: the greatest model likelihood with the fewest free parameters. It trades off the information in the parameters against the information lost by the modeling.
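The standard form, with k free parameters and maximized model likelihood \hat{L}:

    $$ \mathrm{AIC} = 2k - 2\ln\hat{L} $$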
30
Bayesian Information Criterion
Another penalization term, based on Bayesian arguments: select the model that is a posteriori most probable, with a constant penalty term for wrong models, assuming errors are normally distributed. Note that BIC compares estimated models only when the data x are held constant.
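The standard form, with k free parameters, n observations, and maximized model likelihood \hat{L}:

    $$ \mathrm{BIC} = k\ln n - 2\ln\hat{L} $$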
31
Today: Accuracy, Significance Testing, F-Measure, AIC/BIC
32
Next Time: Regression Evaluation, Cluster Evaluation