LEARNING (Chapter 18b)
Some material adopted from notes by Andreas Geyer-Schulz and Chuck Dyer

Hypothesis Testing and False Examples

Version Space with All Hypotheses Consistent with the Examples

The Extensions of Members

Computational learning theory
- The intersection of AI, statistics, and computational theory
- Probably approximately correct (PAC) learning: seriously wrong hypotheses can almost certainly (with high probability) be ruled out using a small number of examples
- Any hypothesis that is consistent with a sufficiently large set of training examples is unlikely to be seriously wrong: it is probably approximately correct
- How many examples are needed? Sample complexity (the number of examples needed to "guarantee" correctness) grows with the size of the hypothesis space
- Stationarity assumption: the training set and the test set are drawn from the same distribution
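The claim that consistent hypotheses are probably approximately correct can be illustrated with a small simulation (an illustrative sketch, not from the slides; the setup is ours): take threshold hypotheses h_t(x) = (x >= t) on [0, 1], a uniform distribution D, and true function f(x) = (x >= 0.5). As the number of training examples grows, the worst true error among hypotheses still consistent with the data shrinks.

```python
import random

def true_f(x):
    # The target function to be learned.
    return x >= 0.5

def worst_consistent_error(n_examples, grid_size=101, seed=0):
    """Draw n_examples from Uniform(0, 1), keep every threshold
    hypothesis h_t consistent with the labeled data, and return the
    largest TRUE error among the survivors."""
    rng = random.Random(seed)
    thresholds = [i / (grid_size - 1) for i in range(grid_size)]
    data = [(x, true_f(x)) for x in (rng.random() for _ in range(n_examples))]
    consistent = [t for t in thresholds
                  if all((x >= t) == y for x, y in data)]
    # Under Uniform(0, 1), the true error of h_t is exactly |t - 0.5|.
    return max(abs(t - 0.5) for t in consistent)

# With few examples, badly wrong thresholds survive; with many, only
# thresholds near 0.5 remain consistent.
```

With 5 examples some clearly wrong thresholds are still consistent, while with 1000 examples every surviving hypothesis is close to f: the version space shrinks onto approximately correct hypotheses.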

Approximately correct
Notation:
- X: the set of all possible examples
- D: the distribution from which examples are drawn
- H: the set of all possible hypotheses
- N: the number of examples in the training set
- f: the true function to be learned
Approximately correct:
- error(h) = P(h(x) ≠ f(x) | x drawn from D)
- Hypothesis h is approximately correct if error(h) ≤ ε
- Approximately correct hypotheses lie inside the ε-ball around f
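The error(h) definition can be approximated by sampling from D. A minimal sketch (illustrative; the hypothesis and target below are made up): h disagrees with f exactly on [0.4, 0.5), so its true error under Uniform(0, 1) is 0.1, and the Monte Carlo estimate converges to that value.

```python
import random

def f(x):
    # True target function.
    return x >= 0.5

def h(x):
    # A hypothesis that disagrees with f exactly on [0.4, 0.5),
    # so error(h) = 0.1 when D is Uniform(0, 1).
    return x >= 0.4

def estimate_error(hyp, target, n=100_000, seed=1):
    """Monte Carlo estimate of error(hyp) = P(hyp(x) != target(x))
    for x drawn from Uniform(0, 1)."""
    rng = random.Random(seed)
    disagreements = sum(hyp(x) != target(x)
                        for x in (rng.random() for _ in range(n)))
    return disagreements / n
```

With 100,000 samples the estimate lands very close to the true error of 0.1, so h is approximately correct only for ε ≥ 0.1.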

Probably approximately correct
- Hypothesis h is probably approximately correct if, with probability at least 1 − δ, error(h) ≤ ε
- A loose upper bound on the number of examples needed to guarantee PAC: N ≥ (1/ε)(ln|H| + ln(1/δ))
- These theoretical results apply to fairly simple learning models (e.g., decision list learning)
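For finite hypothesis spaces, the standard consistent-learner analysis gives the loose bound N ≥ (1/ε)(ln|H| + ln(1/δ)), which can be evaluated directly (a sketch; the helper name is ours):

```python
import math

def pac_sample_bound(epsilon, delta, hypothesis_space_size):
    """Loose upper bound on the number of examples N so that, with
    probability at least 1 - delta, every hypothesis consistent with
    the training set has error <= epsilon:
        N >= (1/epsilon) * (ln|H| + ln(1/delta))
    """
    return math.ceil((math.log(hypothesis_space_size)
                      + math.log(1 / delta)) / epsilon)

# Example: |H| = 2**10 hypotheses, epsilon = 0.1, delta = 0.05
# -> 100 examples suffice under this bound.
```

Note how the bound grows only logarithmically in |H| and 1/δ but linearly in 1/ε, which is why sample complexity is manageable for simple model classes like decision lists.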