Experimental Evaluation
In experimental machine learning we evaluate the accuracy of a hypothesis empirically. This raises a few important methodological questions:

- Given the observed accuracy of the hypothesis over a limited sample of data, how well does it estimate its accuracy over additional examples? (Estimating Hypothesis Accuracy)
- Given that one hypothesis outperforms another over some sample of data, how probable is it that it is more accurate in general? (Comparing Classifiers/Learning Algorithms)
- When data is limited, what is the best way to use this data to both learn the hypothesis and estimate its accuracy? (Statistical problems: Parameter Estimation and Hypothesis Testing)

Estimating Hypothesis Accuracy

Given a hypothesis h and a data sample containing n examples drawn at random according to some distribution D, what is the best estimate of the accuracy of h over future instances drawn from the same distribution?

In PAC learning, we are given a sample drawn according to D and want to guarantee that, with confidence $1-\delta$, we will be $\epsilon$-accurate on new samples from D. Here, instead, we observe some accuracy and want to know how typical it is. Note the difference from the (worst case) PAC learning question: here we are interested in a statistical estimation problem, namely estimating the proportion of a population that exhibits some property, given the observed proportion over some random sample of the population.

Estimating Hypothesis Accuracy

The property we are interested in (for some fixed target function f) is the true error:

$\text{error}_D(h) = \Pr_{x \sim D}[f(x) \neq h(x)]$

Since we cannot observe it, we perform an experiment: we collect a random sample S of n independently drawn instances from the distribution D, and use it to measure the sample error:

$\text{error}_S(h) = \frac{1}{n} \sum_{x \in S} \delta(f(x) \neq h(x))$

where $\delta(\cdot)$ is 1 if its argument is true and 0 otherwise. Naturally, each time we run an experiment (i.e., collect a sample of n test examples) we expect to get a different sample error.

Estimating Hypothesis Accuracy

The number of mistakes r that h makes on the sample is distributed Binomial(n, p) with $p = \text{error}_D(h)$; its mean is $np$ and its standard deviation is $\sqrt{np(1-p)}$.

The sample error is therefore distributed like $r/n$: its mean is $p$ and its standard deviation is $\sqrt{p(1-p)/n}$.

Moreover, by the central limit theorem, if n is large enough (roughly 30 or more) we can assume that the distribution of the sample error is Normal, with the same mean $p$ and standard deviation $\sqrt{p(1-p)/n}$.

Estimating Hypothesis Accuracy

Given the (approximately Normal) distribution of the sample error, one can give a range around the observed error of a hypothesis such that with high probability the true error will be within this range. In other words, given the observed error (your estimate of the true error), you know with some confidence that the true error lies within some range around it.

Some Numbers

Assume you test a hypothesis h and find that it commits r = 12 errors on a sample of n = 40 examples. The estimate of the true error is $p = r/n = 0.3$.

What is the variance of this error? (n is fixed; r is a random variable, distributed Binomial(40, 0.3).) Therefore $\sigma(\text{number of mistakes}) = \sqrt{40 \cdot 0.3 \cdot (1-0.3)} \approx 2.89$, and $\sigma(\text{sample error}) = 2.89/40 \approx 0.07$.

Now assume r = 300 errors on a sample of n = 1000 examples. The estimate of the true error is again $p = r/n = 0.3$, but now $\sigma(\text{sample error}) = \sqrt{0.3 \cdot (1-0.3)/1000} \approx 14.5/1000 \approx 0.014$.
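
These calculations are easy to reproduce. Below is a minimal Python sketch (standard library only; the function name is ours, not from the lecture):

```python
import math

def sample_error_stats(r, n):
    """Error estimate p = r/n, std of the mistake count sqrt(n*p*(1-p)),
    and std of the sample error sqrt(p*(1-p)/n)."""
    p = r / n
    return p, math.sqrt(n * p * (1 - p)), math.sqrt(p * (1 - p) / n)

print(sample_error_stats(12, 40))     # (0.3, ~2.89, ~0.072)
print(sample_error_stats(300, 1000))  # (0.3, ~14.5, ~0.014)
```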

Estimating Hypothesis Accuracy

For the Normal distribution, 95% of the samples fall within roughly $\pm 2\sigma$ of the mean. Consequently, one can give a range around the observed error such that with high probability the true error falls within it. With confidence N%,

$\text{error}_D(h) \in \text{error}_S(h) \pm z_N \sqrt{\frac{\text{error}_S(h)\,(1-\text{error}_S(h))}{n}}$

Confidence N%: 50%  68%  80%  90%  95%  98%  99%
Constant z_N:  0.67 1.00 1.28 1.64 1.96 2.33 2.58
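
A small sketch of this interval computation in Python, using the $z_N$ table above (our own helper, shown only for illustration):

```python
import math

Z = {50: 0.67, 68: 1.00, 80: 1.28, 90: 1.64, 95: 1.96, 98: 2.33, 99: 2.58}

def error_confidence_interval(error_s, n, confidence=95):
    """Two-sided N% interval for the true error under the normal
    approximation; reasonable when n is large (roughly n >= 30)."""
    half = Z[confidence] * math.sqrt(error_s * (1 - error_s) / n)
    return error_s - half, error_s + half

print(error_confidence_interval(0.3, 40))  # approximately (0.16, 0.44)
```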

Comparing Two Hypotheses

When comparing two hypotheses, the ordering of their sample accuracies may or may not accurately reflect the ordering of their true accuracies.

Interpretation: assume we test $h_1$ on sample $S_1$ and $h_2$ on sample $S_2$, and measure $\text{error}_{S_1}(h_1)$ and $\text{error}_{S_2}(h_2)$ respectively. (The slide shows the probability distribution of each sample error around its true error.) Since the distributions overlap, it is possible that the true error of $h_1$ is lower than that of $h_2$, and vice versa.

Comparing Two Hypotheses

We wish to estimate the difference between the true errors of these hypotheses, $d = \text{error}_D(h_1) - \text{error}_D(h_2)$, using the observed difference

$\hat{d} = \text{error}_{S_1}(h_1) - \text{error}_{S_2}(h_2)$

The difference of two normally distributed variables is also normally distributed. Notice that the density function of the difference is a convolution of the original two.

Confidence in Difference

The probability that $\text{error}_D(h_1) > \text{error}_D(h_2)$ is the probability that $d > 0$, which is given by the shaded area under the distribution of $\hat{d}$ to the right of zero.

Confidence in Difference

Since the normal distribution is symmetric, we can also assert confidence intervals with lower bounds and upper bounds (one-sided intervals), not just two-sided intervals.

Standard Deviation of the Difference

The variance of the difference is the sum of the variances:

$\sigma_{\hat{d}}^2 = \frac{\text{error}_{S_1}(h_1)(1-\text{error}_{S_1}(h_1))}{n_1} + \frac{\text{error}_{S_2}(h_2)(1-\text{error}_{S_2}(h_2))}{n_2}$

The mean is the observed difference $\hat{d}$. Therefore, the N% confidence interval in d is $\hat{d} \pm z_N \, \sigma_{\hat{d}}$.

What is the probability that $d > 0$? This is the confidence that d lies in the one-sided interval $d > 0$. We find the highest value N such that $\hat{d} - z_N \sigma_{\hat{d}} \geq 0$, i.e., $z_N \leq \hat{d}/\sigma_{\hat{d}}$, and conclude that $d > 0$ with confidence (100 - (100 - N)/2)%.

Confidence N%: 50%  68%  80%  90%  95%  98%  99%
Constant z_N:  0.67 1.00 1.28 1.64 1.96 2.33 2.58
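
A sketch of this one-sided test for $d > 0$ in Python; the example numbers are illustrative, not the lecture's:

```python
import math

def confidence_d_positive(e1, n1, e2, n2):
    """Observed difference d_hat = e1 - e2, its std, and the one-sided
    confidence that the true difference is > 0 (normal approximation)."""
    d_hat = e1 - e2
    sigma = math.sqrt(e1 * (1 - e1) / n1 + e2 * (1 - e2) / n2)
    conf = 0.5 * (1 + math.erf(d_hat / sigma / math.sqrt(2)))  # Phi(d_hat/sigma)
    return d_hat, sigma, conf

print(confidence_d_positive(0.30, 100, 0.20, 100))  # d=0.10, sigma~0.061, conf~0.95
```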

Hypothesis Testing

A statistical hypothesis is a statement about a set of parameters of a distribution. We are looking for procedures that determine whether the hypothesis is correct or not. In this case we can say that we accept the hypothesis that $d > 0$ with N% confidence. Equivalently, we can say that we reject the hypothesis that the difference is due to random chance at a (100 - N)/100 level of significance. By convention, in normal scientific practice, a confidence of 95% is high enough to assert that there is a "significant difference".

A Hypothesis Test

Assume that, based on two different samples of 100 test instances each, we observe sample errors for $h_1$ and $h_2$ (the specific values appeared on the slide) whose difference yields a one-sided confidence of 95%. We can then say that we accept the hypothesis "h1 is better than h2" with 95% confidence, or that the difference is significant at the .05 level.

A Hypothesis Test

Now assume that, based on two different samples of 100 test instances, we observe a smaller difference in sample errors. We conclude "h1 is better than h2" with only 75% confidence, and cannot conclude that the difference is significant (since p > .05).

Comparing Learning Algorithms

Given two algorithms A and B, we would like to know which is the better method, on average, for learning a particular function f. Statistical tests must control several sources of variation:
- variation in selecting the test data
- variation in selecting the training data
- random decisions made by the algorithms

Algorithm A might do better than B when trained on a particular randomly selected training set, or when tested on a particular randomly selected test set, even though on the whole population they perform identically.

Comparing Learning Algorithms

An ideal statistical test would derive conclusions based on estimating

$E_{S \subset D}\left[\text{error}_D(L_A(S)) - \text{error}_D(L_B(S))\right]$

where $L(S)$ denotes the output hypothesis of an algorithm trained on S, and the expectation is over all possible samples drawn independently from the underlying distribution. In practice we usually have a single sample D' from D to work with, so the average is taken over different splits of this sample into training and test sets. We want methods that
- identify a difference between the algorithms when one exists, and
- do not find a difference when none exists.

Methodology

Assume a hypothesis (the null hypothesis), e.g., "the algorithms are equivalent". Choose a statistic: a figure that you can compute from the data and whose value you can predict assuming the hypothesis holds.
- What value do we expect, assuming the hypothesis holds?
- What value do we get, experimentally?
What is the probability distribution of the statistic, and how large is the deviation of the empirical figure from the expected one? Decide: is this deviation due to chance? Yes or no, and with what confidence?

Distributions

Normal distribution; chi-square distribution. Consider independent random variables $Z_1, \ldots, Z_n$, each distributed N(0,1). The random variable defined by

$X = Z_1^2 + Z_2^2 + \cdots + Z_n^2$

is distributed $\chi^2$ (chi-square) with n degrees of freedom.

t-Distributions

Student's t distribution: let W be N(0,1), let V be $\chi^2$ with n degrees of freedom, and assume W and V are independent. Then the distribution of the random variable

$T_n = \frac{W}{\sqrt{V/n}}$

is called a t-distribution with n degrees of freedom. $T_n$ is symmetric about zero; as n becomes larger, it looks more and more like N(0,1). $E(T_n) = 0$ and $Var(T_n) = n/(n-2)$ (for n > 2).

t Distributions

The t distribution was originally used when one can obtain an estimate for the mean but not for the population standard deviation $\sigma$. We want a distribution that allows us to compute a confidence interval for the mean $\mu$ without knowing $\sigma$, using only an estimate s of it (based on the same sample that produced the mean). The quantity t is given by:

$t = \frac{\bar{x} - \mu}{s/\sqrt{n}}$

That is, t is the deviation of the sample mean from the population mean, measured in units of the mean's standard error. This is appropriate for small samples, and the critical-value tables depend on n.
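
A short sketch of computing t from a sample in Python (the sample values are hypothetical):

```python
import math

def t_statistic(sample, mu):
    """t = (sample mean - mu) / (s / sqrt(n)), with s the sample standard
    deviation computed with the n-1 denominator."""
    n = len(sample)
    mean = sum(sample) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return (mean - mu) / (s / math.sqrt(n))

# Five hypothetical error measurements, tested against mu = 0.25
print(t_statistic([0.28, 0.31, 0.33, 0.27, 0.30], 0.25))  # ~4.5, with 4 dof
```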

K-Fold Cross Validation

Partition the data D' into k disjoint subsets $T_1, \ldots, T_k$ of equal size. For i from 1 to k, use $T_i$ for testing and the rest of the data for training, and set

$\delta_i = \text{error}_{T_i}(L_A(D' - T_i)) - \text{error}_{T_i}(L_B(D' - T_i))$

Return the average difference in error: $\bar{\delta} = \frac{1}{k} \sum_{i=1}^{k} \delta_i$.
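
A sketch of this procedure in Python with NumPy. The callables error_A and error_B are hypothetical stand-ins for "train the algorithm, then return its test error rate":

```python
import numpy as np

def kfold_differences(error_A, error_B, X, y, k=10, seed=0):
    """Split the data into k disjoint folds; for each fold, train both
    algorithms on the remaining data, test on the held-out fold, and
    record delta_i = error_A - error_B. Returns the deltas and their mean.
    error_A/error_B: (X_train, y_train, X_test, y_test) -> error rate."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    deltas = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        deltas.append(error_A(X[train], y[train], X[test], y[test])
                      - error_B(X[train], y[train], X[test], y[test]))
    return np.array(deltas), float(np.mean(deltas))
```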

K-Fold Cross Validation: Comments

10 is a standard number of folds. When k = |D|, the method is called leave-one-out. Every example gets used as a test example exactly once and as a training example k-1 times. The test sets are independent, but the training sets overlap significantly; each hypothesis is generated from a fraction (k-1)/k of the training data.

Previously we compared hypotheses using independent test sets. Here, the hypotheses generated by algorithms A and B are tested on the same test sets (paired tests).

Paired t Tests

Paired tests produce tighter bounds, since any difference is due to differences between the hypotheses rather than differences between the test sets.

Significance testing for paired tests: when k paired tests are performed, compute the statistic

$t = \frac{\bar{\delta}}{\sqrt{\frac{1}{k(k-1)} \sum_{i=1}^{k} (\delta_i - \bar{\delta})^2}}$

where $\delta_i$ is the measured difference between A and B on the ith test set and $\bar{\delta}$ is their average. Under the null hypothesis, this statistic follows a t-distribution with k-1 degrees of freedom. With k = 30, the 95% critical value is 2.04: we assert a significant difference only when |t| > 2.04.
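
A sketch of the paired t statistic in Python; the per-fold differences below are hypothetical:

```python
import math

def paired_t(deltas):
    """Paired t statistic for per-fold differences delta_1..delta_k; under
    the null hypothesis it follows a t-distribution with k-1 dof."""
    k = len(deltas)
    mean = sum(deltas) / k
    s2 = sum((d - mean) ** 2 for d in deltas) / (k * (k - 1))
    return mean / math.sqrt(s2)

deltas = [0.02, 0.01, 0.03, 0.00, 0.02, 0.01, 0.02, 0.03, 0.01, 0.02]
print(paired_t(deltas))  # compare |t| to the t-table critical value, 9 dof
```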

Paired t Tests

Paired t tests can be used in many ways:
- Sample the data 30 times; split each sample into Train and Test; run A and B on each split and let $\delta_i$ be the difference in test error. Estimate the same statistic. (Most common in machine learning; has problems, since the samples overlap.)
- 10-fold cross validation: the ith experiment trains on $D' - T_i$ and tests on $T_i$. (Better, but still has problems due to training set overlap.)

5x2 Cross Validation

Perform 5 replications of 2-fold cross validation. In each replication, the available data is randomly partitioned into two halves $S^{(1)}$ and $S^{(2)}$ of equal size. Train algorithms A and B on each half and test on the other; for replication i this yields two error differences, $d_i^{(1)}$ and $d_i^{(2)}$. Let $s_i^2$ be the variance computed for each of the 5 replications:

$s_i^2 = (d_i^{(1)} - \bar{d}_i)^2 + (d_i^{(2)} - \bar{d}_i)^2$, with $\bar{d}_i = (d_i^{(1)} + d_i^{(2)})/2$

Then

$t = \frac{d_1^{(1)}}{\sqrt{\frac{1}{5} \sum_{i=1}^{5} s_i^2}}$

has (approximately) a t-distribution with 5 degrees of freedom.
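
A sketch of the 5x2cv statistic in Python; as above, error_A and error_B are hypothetical train-and-evaluate callables:

```python
import math
import numpy as np

def five_by_two_cv_t(error_A, error_B, X, y, seed=0):
    """5 replications of 2-fold CV; returns the 5x2cv t statistic, which is
    approximately t-distributed with 5 dof under the null hypothesis."""
    rng = np.random.default_rng(seed)
    first_diff, variances = None, []
    for _ in range(5):
        idx = rng.permutation(len(y))
        s1, s2 = idx[: len(y) // 2], idx[len(y) // 2 :]
        # train on one half, test on the other, in both directions
        d1 = (error_A(X[s1], y[s1], X[s2], y[s2])
              - error_B(X[s1], y[s1], X[s2], y[s2]))
        d2 = (error_A(X[s2], y[s2], X[s1], y[s1])
              - error_B(X[s2], y[s2], X[s1], y[s1]))
        d_bar = (d1 + d2) / 2
        variances.append((d1 - d_bar) ** 2 + (d2 - d_bar) ** 2)
        if first_diff is None:
            first_diff = d1
    return first_diff / math.sqrt(sum(variances) / 5)
```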

McNemar's Test

An alternative to cross validation, used when the experiment can be run only once. Divide the sample S into a training set R and a test set T. Train algorithms A and B on R, yielding classifiers A and B. Record how each example in T is classified and compute the counts:

- $n_{00}$: examples misclassified by both A and B
- $n_{01}$: examples misclassified by A but not by B
- $n_{10}$: examples misclassified by B but not by A
- $n_{11}$: examples misclassified by neither A nor B

where $n_{00} + n_{01} + n_{10} + n_{11} = N$, the total number of examples in the test set T.

McNemar's Test

The null hypothesis: the two learning algorithms have the same error rate on a randomly drawn sample. That is, we expect $n_{01} = n_{10}$, with both counts close to $(n_{01} + n_{10})/2$. The statistic we use to measure deviation from the expected counts is

$\chi^2 = \frac{(|n_{01} - n_{10}| - 1)^2}{n_{01} + n_{10}}$

(the -1 is a continuity correction, needed since the statistic is discrete). It is distributed (approximately) as $\chi^2$ with 1 degree of freedom. Since $\chi^2_{1,\,0.95} = 3.841$, we reject the null hypothesis with 95% confidence if the ratio above is greater than 3.841.
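
McNemar's statistic is a one-liner; here is a sketch with hypothetical counts:

```python
def mcnemar(n01, n10):
    """Continuity-corrected McNemar statistic
    (|n01 - n10| - 1)^2 / (n01 + n10); ~ chi-square with 1 dof."""
    return (abs(n01 - n10) - 1) ** 2 / (n01 + n10)

chi2 = mcnemar(25, 10)    # hypothetical: A alone errs 25 times, B alone 10
print(chi2, chi2 > 3.841)  # 5.6 True -> reject equal error rates at 95%
```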

Experimental Evaluation - Final Comments

Good experimental methodology, including statistical analysis, is important when empirically comparing learning algorithms. All of these methods have shortcomings, and this is an active area of research; see Tom Dietterich, "Approximate statistical tests for comparing supervised classification learning algorithms", Neural Computation, 1998.

Artificial data is useful for testing hypotheses about specific strengths and weaknesses of algorithms, but only real data can test the hypothesis that the bias of the learner is useful for the actual problem. There are a few benchmarks for comparing learning algorithms; the UC Irvine repository is the one most commonly used: http://www.ics.uci.edu/~mlearn/MLRepository.html