Presentation transcript:

Any learner that outputs a hypothesis consistent with all training examples needs only

m ≥ (1/ε)(ln |H| + ln(1/δ))

randomly drawn examples to guarantee, with probability at least 1 − δ, that its hypothesis has true error less than ε.
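
As a quick numeric check, here is a minimal Python sketch of this sample-complexity bound; the values of |H|, ε, and δ are illustrative assumptions, not taken from the slides:

import math

def pac_sample_complexity(h_size, epsilon, delta):
    # m >= (1/epsilon) * (ln|H| + ln(1/delta)) examples suffice for any
    # consistent learner over a finite hypothesis space of size h_size.
    return math.ceil((math.log(h_size) + math.log(1 / delta)) / epsilon)

# Illustrative values: |H| = 2**10 hypotheses, 5% error, 95% confidence.
print(pac_sample_complexity(2**10, epsilon=0.05, delta=0.05))  # -> 199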

In the agnostic setting, where no hypothesis in H is guaranteed to be consistent with the training data, the Hoeffding bound relates true error to training error: with probability at least 1 − δ, every h in H satisfies

error_true(h) ≤ error_train(h) + sqrt((ln |H| + ln(1/δ)) / (2m)).

The gap error_true(h) − error_train(h) is the degree of overfitting.
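
The same bound is easy to evaluate numerically; here is a small sketch, again with illustrative numbers that are assumptions rather than values from the slides:

import math

def overfitting_bound(h_size, m, delta):
    # Hoeffding-style bound on error_true - error_train that holds
    # simultaneously for all h in H with probability >= 1 - delta.
    return math.sqrt((math.log(h_size) + math.log(1 / delta)) / (2 * m))

# Illustrative values: |H| = 2**10, m = 500 examples, delta = 0.05.
print(round(overfitting_bound(2**10, m=500, delta=0.05), 3))  # -> 0.1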

What if H is not finite? We can't use our result for finite H; we need some other measure of the complexity of H: the Vapnik-Chervonenkis (VC) dimension!
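
The VC dimension of H is the size of the largest set of points that hypotheses in H can shatter, i.e., label in all 2^n possible ways. As a toy illustration (the threshold class and the test points are assumptions, not from the slides), a brute-force shattering check in Python:

def shatters(points, hypotheses):
    # True if the class realizes every one of the 2^n labelings of points.
    achievable = {tuple(h(x) for x in points) for h in hypotheses}
    return len(achievable) == 2 ** len(points)

# Toy class: 1-D thresholds h_t(x) = 1 if x >= t, for a grid of t values.
thresholds = [lambda x, t=t: int(x >= t) for t in (-2, -1, 0, 1, 2, 3)]

print(shatters([0.5], thresholds))       # True: a single point is shattered
print(shatters([0.5, 1.5], thresholds))  # False: labeling (1, 0) unreachable
# So the VC dimension of 1-D thresholds is 1.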

1. Initialize VS to contain all hypotheses in H.
2. For each training example, remove from VS all hypotheses that misclassify this example.
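
This list-then-eliminate loop is easy to sketch over an explicit finite H; the hypothesis class and training data below are illustrative assumptions:

def eliminate(version_space, examples):
    # Keep only the hypotheses consistent with every labeled example.
    for x, y in examples:
        version_space = [h for h in version_space if h(x) == y]
    return version_space

# Illustrative H: 1-D threshold rules h_t(x) = 1 if x >= t.
H = [lambda x, t=t: int(x >= t) for t in range(5)]
data = [(1.0, 0), (3.5, 1)]
print(len(eliminate(H, data)))  # -> 2: thresholds t = 2 and t = 3 survive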

Weighted Majority Algorithm: predict by a weighted vote over a pool of predictors, and multiply the weight of every predictor that errs by β (0 ≤ β < 1). When β = 0, this is equivalent to the Halving algorithm…
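
A compact sketch of the update rule for binary predictions; the expert pool and example stream are hypothetical, chosen only for illustration:

def weighted_majority(experts, stream, beta=0.5):
    # Predict by weighted vote; scale the weight of each expert that
    # errs by beta. With beta = 0, wrong experts are eliminated outright,
    # recovering the Halving algorithm.
    weights = [1.0] * len(experts)
    mistakes = 0
    for x, y in stream:
        votes = [e(x) for e in experts]
        vote1 = sum(w for w, v in zip(weights, votes) if v == 1)
        vote0 = sum(w for w, v in zip(weights, votes) if v == 0)
        prediction = 1 if vote1 >= vote0 else 0
        mistakes += int(prediction != y)
        weights = [w * beta if v != y else w for w, v in zip(weights, votes)]
    return mistakes

experts = [lambda x: int(x >= 2), lambda x: int(x >= 0), lambda x: 1 - int(x >= 2)]
stream = [(1, 0), (3, 1), (0, 0), (5, 1)]
print(weighted_majority(experts, stream))  # -> 1: the perfect expert x >= 2 takes over

With n predictors of which the best makes k mistakes, Weighted Majority with β = 1/2 makes at most roughly 2.4(k + log2 n) mistakes, the relative mistake bound from Mitchell's Chapter 7.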

This relative mistake bound holds for any pool of predictors, even algorithms that learn or change over time…