Probably Approximately Correct Learning. Yongsub Lim, Applied Algorithm Laboratory, KAIST.

Presentation transcript:

Definition. A concept class C is PAC learnable by a hypothesis class H if there is an algorithm A such that, for every target concept in C, every distribution D over the example space, and every ε, δ in (0, 1), given a number m of i.i.d. training examples sampled from D that is polynomial in 1/ε and 1/δ, with probability at least 1 - δ the hypothesis h output by A has error at most ε under D.
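The formulas on this slide did not survive the transcript; the following is a standard restatement of the condition, with the symbols C, H, A, D, ε, δ, and m chosen here rather than taken from the slides.

\[
  \Pr_{S \sim D^m}\bigl[\operatorname{err}_D(h_S) \le \epsilon\bigr] \;\ge\; 1 - \delta,
  \qquad \text{where } \operatorname{err}_D(h) = \Pr_{x \sim D}\bigl[h(x) \ne c(x)\bigr],
\]

required to hold for every c in C, every distribution D, and every ε, δ in (0, 1), with h_S in H the output of A on the sample S and m polynomial in 1/ε and 1/δ.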

Example. Consider the concept class consisting of all positive half-lines on the real line: each concept is determined by a threshold θ and labels a point 1 if it lies at or above θ and 0 otherwise. An example is any real number; points to the left of the threshold are labeled 0 and points to the right are labeled 1.
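As a concrete illustration (the threshold value and names below are illustrative, not from the slides), a positive half-line is just a threshold predicate on the reals:

# A positive half-line c_theta labels a real number 1 exactly when it is
# at or above the threshold theta, and 0 otherwise (illustrative sketch).
def half_line(theta):
    return lambda x: 1 if x >= theta else 0

target = half_line(2.5)   # hypothetical target concept
print(target(3.0))        # 1: a positive example
print(target(1.0))        # 0: a negative example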

Proof that the class of positive half-lines is PAC learnable. We show it is PAC learnable by itself, i.e., using the same class as the hypothesis class.

Proof (continued). Our algorithm outputs a consistent hypothesis, e.g., the half-line whose threshold is the smallest positively labeled training example. Suppose the target threshold is θ: then x ≥ θ for a positive example and x < θ for a negative example, so the learned threshold never falls below θ and the output hypothesis labels every training example correctly.
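A minimal sketch of such a learner, assuming (as in the usual version of this proof) that it returns the smallest positively labeled training example as its threshold; the data format is an assumption made here for illustration.

# Learner for positive half-lines (sketch): output the hypothesis whose
# threshold is the smallest positively labeled training example; if no
# positive example was seen, label everything 0.
def learn_half_line(examples):
    """examples: list of (x, y) pairs, x a real number, y in {0, 1}."""
    positives = [x for x, y in examples if y == 1]
    theta_hat = min(positives) if positives else float("inf")
    return lambda x: 1 if x >= theta_hat else 0

h = learn_half_line([(1.0, 0), (2.0, 0), (2.7, 1), (4.0, 1)])
print(h(2.6), h(3.0))     # 0 1: consistent with the training data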

Proof (continued). Suppose the true error of the output hypothesis exceeds ε, and consider the interval of probability mass ε immediately above the target threshold (the bad region). This event only occurs if no training example falls in the bad region.
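Written out (a reconstruction of the standard argument, since the slide's formulas are missing): let B be the interval of probability mass ε immediately above the target threshold; the learned threshold overshoots the target by mass more than ε only if none of the m examples lands in B, so

\[
  \Pr\bigl[\operatorname{err}_D(h_{\hat\theta}) > \epsilon\bigr]
  \;\le\; \Pr\bigl[\text{no training example falls in } B\bigr]
  \;=\; (1 - \epsilon)^m .
\]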

Proof (continued). Each of the m i.i.d. training examples misses the bad region with probability 1 - ε, so the probability that all m miss it is (1 - ε)^m ≤ exp(-εm).

Proof (continued). Requiring exp(-εm) ≤ δ and solving for m gives m ≥ (1/ε) ln(1/δ), so this many examples suffice for the PAC guarantee.
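For concreteness (the values of ε and δ below are chosen here, not taken from the slides), the bound is easy to evaluate:

# Evaluate the sample-size bound m >= (1/epsilon) * ln(1/delta).
import math

epsilon, delta = 0.1, 0.05    # illustrative accuracy and confidence targets
m = math.ceil(math.log(1.0 / delta) / epsilon)
print(m)                      # 30: about 30 examples suffice for these targets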

Proof (conclusion). The class of positive half-lines is PAC learnable by itself with at least (1/ε) ln(1/δ) training examples.
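The guarantee can also be checked empirically. The sketch below assumes a uniform distribution on [0, 4] and an arbitrary target threshold; neither choice comes from the slides.

# Monte Carlo check (sketch): with m >= (1/eps) * ln(1/delta) uniform samples,
# the fraction of runs in which the learned half-line has true error > eps
# should stay below delta.
import math, random

def true_error(theta_true, theta_hat, lo=0.0, hi=4.0):
    # Under Uniform[lo, hi] the hypothesis errs exactly on [theta_true, theta_hat).
    return max(0.0, min(theta_hat, hi) - theta_true) / (hi - lo)

eps, delta = 0.1, 0.05
theta_true = 2.5                        # hypothetical target threshold
m = math.ceil(math.log(1.0 / delta) / eps)

bad_runs = 0
for _ in range(10_000):
    sample = [random.uniform(0.0, 4.0) for _ in range(m)]
    positives = [x for x in sample if x >= theta_true]
    theta_hat = min(positives) if positives else 4.0
    if true_error(theta_true, theta_hat) > eps:
        bad_runs += 1

print(bad_runs / 10_000)                # around 0.04 here, below delta = 0.05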

A More General Theorem
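The content of this slide is not recoverable from the transcript. One standard result that this heading typically refers to, stated here as an assumption rather than as the slide's actual content, is the sample-complexity bound for a consistent learner over any finite hypothesis class:

\[
  m \;\ge\; \frac{1}{\epsilon}\left(\ln \lvert H \rvert + \ln \frac{1}{\delta}\right),
\]

after which any hypothesis in H that is consistent with the m training examples has error at most ε with probability at least 1 - δ.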

Thanks