Machine Learning, Queens College. Lecture 8: Evaluation

Today: Evaluation Measures: Accuracy; Significance Testing; Cross-validation; F-Measure; Error Types; ROC Curves; Equal Error Rate; AIC/BIC

How do you know that you have a good classifier? Is a feature contributing to overall performance? Is classifier A better than classifier B? Internal evaluation: measure the performance of the classifier itself. External evaluation: measure its performance on a downstream task.

Accuracy Easily the most common and intuitive measure of classification performance.
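
As a minimal sketch (with made-up label arrays), accuracy is simply the fraction of predictions that match the true labels:

```python
import numpy as np

# Hypothetical gold labels and classifier predictions.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

accuracy = np.mean(y_true == y_pred)  # fraction of correct predictions
print(f"Accuracy: {accuracy:.2f}")    # 0.75 for these arrays
```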

Significance testing Say I have two classifiers. A = 50% accuracy B = 75% accuracy B is better, right?

Significance Testing Say I have another two classifiers A = 50% accuracy B = 50.5% accuracy Is B better?

Basic Evaluation Training data – used to identify model parameters. Testing data – used for evaluation. Optionally: Development / tuning data – used to identify model hyperparameters. With a single train/test split it is difficult to get significance or confidence values.
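
A minimal sketch of such a split with scikit-learn; the synthetic data and the 80/10/10 proportions are placeholders, not the lecture's prescription:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; replace with real features and labels.
X = np.random.rand(100, 5)
y = np.random.randint(0, 2, size=100)

# 80% train, then split the remaining 20% evenly into dev and test.
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.2, random_state=0)
X_dev, X_test, y_dev, y_test = train_test_split(X_hold, y_hold, test_size=0.5, random_state=0)
# Fit parameters on train, tune hyperparameters on dev, report on test.
```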

Cross validation Identify n “folds” of the available data. Train on n-1 folds, test on the remaining fold. In the extreme (n=N) this is known as “leave-one-out” cross validation. n-fold cross validation (xval) gives n samples of the performance of the classifier.

Cross-validation visualized: the available labeled data is split into n partitions. In each fold a different partition is held out as the Test set (and, optionally, another as the Dev set), and the model is trained on the remaining partitions. Repeating over folds 1 through n, every partition is used for testing exactly once; the performance is then averaged across folds.
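
A minimal sketch of this loop with scikit-learn; the logistic-regression classifier and synthetic data are placeholders, not the lecture's choices:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

n = 10
scores = []
for train_idx, test_idx in KFold(n_splits=n, shuffle=True, random_state=0).split(X):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))  # accuracy on the held-out fold

print(f"mean accuracy over {n} folds: {np.mean(scores):.3f} (std {np.std(scores):.3f})")
```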

Some criticisms of cross-validation: while the test data in each fold is independently sampled, the training sets overlap heavily, so the per-fold performance estimates are correlated. This leads to underestimation of variance and overestimation of significant differences. One proposed solution is to repeat 2-fold cross-validation 5 times rather than running 10-fold cross-validation once (sketched below).
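
A sketch of that alternative, 2-fold cross-validation repeated 5 times, using scikit-learn's RepeatedKFold (same placeholder data and classifier as above):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# 2 folds repeated 5 times = 10 accuracy samples with less training-set overlap.
cv = RepeatedKFold(n_splits=2, n_repeats=5, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(f"mean accuracy: {scores.mean():.3f} (std {scores.std():.3f})")
```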

Significance Testing Is the performance of two classifiers different with statistical significance? Means testing If we have two samples of classifier performance (accuracy), we want to determine if they are drawn from the same distribution (no difference) or two different distributions.

T-test: one-sample t-test; independent (two-sample) t-test, with a variant for unequal variances and sample sizes (Welch). Once you have a t-value, look up the significance level in a table, keyed on the t-value and the degrees of freedom.
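
For reference, the one-sample t statistic and the Welch statistic (for unequal variances and sample sizes) are:

```latex
t_{\text{one-sample}} = \frac{\bar{x} - \mu_0}{s/\sqrt{n}}
\qquad
t_{\text{Welch}} = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_1^2/n_1 + s_2^2/n_2}}
```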

Significance Testing Run cross-validation to get n samples of the classifier's performance. Use this distribution to compare against either: a known (published) level of performance (one-sample t-test), or another distribution of performance (two-sample t-test). If at all possible, results should include information about the variance of classifier performance.
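
A sketch with scipy (the per-fold accuracies below are made-up numbers): a one-sample test against a published accuracy, and a Welch two-sample test between two classifiers' fold scores:

```python
import numpy as np
from scipy import stats

# Hypothetical per-fold accuracies from 10-fold cross-validation.
acc_A = np.array([0.81, 0.79, 0.83, 0.80, 0.78, 0.82, 0.80, 0.79, 0.81, 0.80])
acc_B = np.array([0.84, 0.83, 0.86, 0.82, 0.85, 0.83, 0.84, 0.86, 0.82, 0.85])

# One-sample t-test against a known (published) accuracy of 0.78.
t1, p1 = stats.ttest_1samp(acc_A, popmean=0.78)

# Two-sample t-test between the classifiers (Welch: unequal variances).
t2, p2 = stats.ttest_ind(acc_A, acc_B, equal_var=False)

print(f"vs. published: t={t1:.2f}, p={p1:.3f}")
print(f"A vs. B:       t={t2:.2f}, p={p2:.3f}")
```

If both classifiers were scored on the same folds, a paired test (scipy.stats.ttest_rel) is usually the more appropriate choice.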

Significance Testing Caveat – including more samples of the classifier performance can artificially inflate the significance measure: if the sample mean x̄ and standard deviation s stay constant (the sample matches the population mean and variance), then raising n will increase t. If these samples are genuinely independent, this is fine. Often, however, cross-validation fold assignment is not truly random, so subsequent xval runs only resample the same information.

Confidence Bars Variance information can be included in plots of classifier performance to ease visualization. Plot standard deviation, standard error or confidence interval?

Confidence Bars Most important to be clear about what is plotted. 95% confidence interval has the clearest interpretation.
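
A sketch of turning per-fold accuracies (hypothetical numbers) into the 95% confidence interval that would be drawn as the error bar:

```python
import numpy as np
from scipy import stats

# Hypothetical per-fold accuracies.
acc = np.array([0.81, 0.79, 0.83, 0.80, 0.78, 0.82, 0.80, 0.79, 0.81, 0.80])

n = len(acc)
mean = acc.mean()
sem = acc.std(ddof=1) / np.sqrt(n)              # standard error of the mean
half_width = stats.t.ppf(0.975, df=n - 1) * sem  # 95% CI half-width
print(f"{mean:.3f} +/- {half_width:.3f} (95% confidence interval)")
```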

Baseline Classifiers Majority class baseline: every data point is classified as the class that is most frequently represented in the training data. Random baseline: randomly assign one of the classes to each data point, either with an even distribution or with the training class distribution.
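
scikit-learn's DummyClassifier provides ready-made versions of these baselines; the imbalanced synthetic data below is only a placeholder:

```python
from sklearn.dummy import DummyClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for strategy in ["most_frequent",  # majority-class baseline
                 "uniform",        # random baseline, even class distribution
                 "stratified"]:    # random baseline, training class distribution
    baseline = DummyClassifier(strategy=strategy, random_state=0).fit(X_tr, y_tr)
    print(strategy, round(baseline.score(X_te, y_te), 3))
```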

Problems with accuracy. Contingency table (rows: hypothesized label, columns: true label):

                   True: Positive         True: Negative
  Hyp: Positive    True Positive (TP)     False Positive (FP)
  Hyp: Negative    False Negative (FN)    True Negative (TN)

Problems with accuracy. Information retrieval example: find the 10 documents relevant to a query in a collection of 100 documents, so the true labels are 10 Positive and 90 Negative. A classifier that labels every document Negative is 90% accurate while retrieving nothing.

Problems with accuracy. Precision: what fraction of the hypothesized positives were true positives. Recall: what fraction of the true positives were identified. F-Measure: the harmonic mean of precision and recall. (Continuing the 10-relevant / 90-non-relevant retrieval example.)

F-Measure The F-measure can be weighted to favor precision or recall: beta > 1 favors recall, beta < 1 favors precision.
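
A sketch that computes precision, recall, and the (weighted) F-measure directly from contingency-table counts; the counts below are illustrative, in the spirit of the 10-relevant / 90-non-relevant retrieval example:

```python
def prf(tp, fp, fn, beta=1.0):
    """Precision, recall and F-beta from contingency-table counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    b2 = beta ** 2
    # F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)
    f = ((1 + b2) * precision * recall / (b2 * precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# e.g. a classifier that finds 9 of the 10 relevant documents with 1 false alarm
print(prf(tp=9, fp=1, fn=1))             # F1
print(prf(tp=9, fp=1, fn=1, beta=2.0))   # beta > 1 weights recall more heavily
```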

F-Measure worked example (contingency-table counts: 1, 9, 90).

F-Measure worked example (contingency-table counts: 10, 50, 40).

F-Measure worked example (contingency-table counts: 9, 1, 89).

F-Measure Accuracy is weighted towards majority class performance. F-measure is useful for measuring the performance on minority classes.

Types of Errors False Positives: the system predicted TRUE but the value was FALSE, aka “false alarms” or Type I errors. False Negatives: the system predicted FALSE but the value was TRUE, aka “misses” or Type II errors.

ROC curves It is common to plot classifier performance at a variety of settings or thresholds. Receiver Operating Characteristic (ROC) curves plot the true positive rate against the false positive rate. The overall performance is summarized by the Area Under the Curve (AUC).

ROC Curves The Equal Error Rate (EER) is commonly reported: the operating point where the false positive rate equals the false negative rate. Curves provide more detail about performance than a single summary number. Gauvain et al. 1995
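
A sketch with scikit-learn: sweep the threshold to get the ROC curve, compute AUC, and take the equal error rate as the point where the false positive rate and false negative rate meet; the labels and scores below are made up:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical true labels and classifier scores (e.g. posterior probabilities).
y_true  = np.array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1])
y_score = np.array([0.1, 0.2, 0.3, 0.35, 0.4, 0.45, 0.5, 0.6, 0.65, 0.7, 0.8, 0.9])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

# EER: operating point where false positive rate == false negative rate (1 - tpr).
fnr = 1 - tpr
eer_idx = np.nanargmin(np.abs(fpr - fnr))
eer = (fpr[eer_idx] + fnr[eer_idx]) / 2

print(f"AUC = {auc:.3f}, EER ~= {eer:.3f}")
```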

Goodness of Fit Another view of model performance. Measure the model likelihood of the unseen data. However, we’ve seen that model likelihood is likely to improve by adding parameters. Two information criteria include a cost term for the number of parameters in the model.

Akaike Information Criterion The Akaike Information Criterion (AIC) is based on entropy. The best model has the lowest AIC: greatest model likelihood and fewest free parameters. Its two terms balance the information in the parameters against the information lost by the modeling.
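
For reference, the usual form of the criterion, where k is the number of free parameters and L̂ the maximized model likelihood:

```latex
\mathrm{AIC} = 2k - 2\ln\hat{L}
```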

Bayesian Information Criterion Another penalized criterion, based on Bayesian arguments: select the model that is a posteriori most probable, with a constant penalty term for wrong models. A simplified form applies if errors are normally distributed. Note: BIC compares estimated models only when the data x are held constant.
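
For reference, the usual form for n data points, together with the Gaussian-error special case the slide alludes to (stated up to an additive constant, with RSS the residual sum of squares):

```latex
\mathrm{BIC} = k\ln n - 2\ln\hat{L}
\qquad
\text{Gaussian errors: } \mathrm{BIC} = n\ln(\mathrm{RSS}/n) + k\ln n
```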

Next Time Evaluation for Clustering: how do you know if you’re doing well?