Analytical Learning Discussion (4 of 4): Integrating Inductive and Analytical Learning


Lecture 9: Analytical Learning Discussion (4 of 4): Integrating Inductive and Analytical Learning
Monday, February 7, 2000
William H. Hsu
Department of Computing and Information Sciences, KSU
http://www.cis.ksu.edu/~bhsu
Readings: Mitchell, Chapter 2; Russell and Norvig, Chapter 21

Lecture Outline
References: Chapters 2-3, Mitchell
Suggested Exercises: 2.2, 2.3, 2.4, 2.6
Review: Learning from Examples
  (Supervised) concept learning framework
  Basic inductive learning algorithms
General-to-Specific Ordering over Hypotheses
  Version space: partially-ordered set (poset) formalism
  Candidate elimination algorithm
  Inductive learning
Decision Trees
  Quick tutorial / review: Lectures 4-5, CIS 798, Fall 1999
  See: http://ringil.cis.ksu.edu/Courses/Fall-1999/CIS798/Lectures
  Relation to analytical learning
Next Class: Introduction to Artificial Neural Networks

(Supervised) Concept Learning
Given: training examples <x, f(x)> of some unknown function f
Find: a good approximation to f
Examples (besides concept learning):
  Disease diagnosis
    x = properties of patient (medical history, symptoms, lab tests)
    f = disease (or recommended therapy)
  Risk assessment
    x = properties of consumer or policyholder (demographics, accident history)
    f = risk level (expected cost)
  Automatic steering
    x = bitmap picture of road surface in front of vehicle
    f = degrees to turn the steering wheel
  Part-of-speech tagging
  Fraud/intrusion detection
  Web log analysis
  Multisensor integration and prediction

A Learning Problem
[Figure: inputs x1, x2, x3, x4 feed an unknown function producing y = f(x1, x2, x3, x4)]
Types: xi : ti, y : t, so f : (t1 × t2 × t3 × t4) → t
Our learning function: Vector(t1 × t2 × t3 × t4 × t) → ((t1 × t2 × t3 × t4) → t)

Hypothesis Space: Unrestricted Case
|A → B| = |B|^|A|
|H| = |{0,1} × {0,1} × {0,1} × {0,1} → {0,1}| = 2^(2^4) = 65536 function values
Complete ignorance: is learning possible?
  Need to see every possible input/output pair
  After 7 examples, still have 2^9 = 512 possibilities (out of 65536) for f
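
To make the counting argument concrete, here is a short Python sketch (illustrative, not from the original lecture) that reproduces these numbers:

```python
# With 4 boolean attributes there are 2^4 = 16 possible instances,
# and 2^16 = 65536 boolean functions over them.
n_attributes = 4
n_instances = 2 ** n_attributes            # 16 distinct input vectors
n_functions = 2 ** n_instances             # 65536 candidate target functions

# Each labeled example fixes f's value on one instance, halving the
# count of functions still consistent with the data.
n_seen = 7
n_remaining = 2 ** (n_instances - n_seen)  # 2^9 = 512

print(n_functions, n_remaining)            # 65536 512
```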

Training Examples for Concept EnjoySport
Specification for examples: similar to a data type definition
  6 attributes: Sky, Temp, Humidity, Wind, Water, Forecast
  Nominal-valued (symbolic) attributes: enumerative data type
Binary (boolean-valued) concept
Supervised learning problem: describe the general concept

Representing Hypotheses
Many possible representations
Hypothesis h: conjunction of constraints on attributes
Constraint values:
  Specific value (e.g., Water = Warm)
  Don't care (e.g., "Water = ?")
  No value allowed (e.g., "Water = Ø")
Example hypothesis for EnjoySport:
  Sky    AirTemp  Humidity  Wind    Water  Forecast
  <Sunny    ?        ?      Strong    ?     Same>
Is this consistent with the training examples?
What are some hypotheses that are consistent with the examples?
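
A minimal Python sketch of this representation (names are my own, not from the slides): a hypothesis is a tuple with one constraint per attribute, using a literal value, "?" for don't care, or "Ø" for the empty constraint.

```python
def matches(hypothesis, instance):
    """Return True iff the instance satisfies every attribute constraint."""
    return all(
        c == "?" or c == v   # "?" accepts anything; "Ø" never equals a value
        for c, v in zip(hypothesis, instance)
    )

h = ("Sunny", "?", "?", "Strong", "?", "Same")
x = ("Sunny", "Warm", "Normal", "Strong", "Warm", "Same")
print(matches(h, x))  # True
```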

Prototypical Concept Learning Tasks
Given:
  Instances X: possible days, each described by attributes Sky, AirTemp, Humidity, Wind, Water, Forecast
  Target function c ≡ EnjoySport: X → {0, 1}, where X ≡ {Rainy, Sunny} × {Warm, Cold} × {Normal, High} × {None, Mild, Strong} × {Cool, Warm} × {Same, Change}
  Hypotheses H: conjunctions of literals (e.g., <?, Cold, High, ?, ?, ?>)
  Training examples D: positive and negative examples of the target function
Determine: hypothesis h ∈ H such that h(x) = c(x) for all x ∈ D
  Such h are consistent with the training data
What is a concept learning algorithm?
  L: Vector(X × boolean) → H
  The type of L means: given a vector of examples (a data set), return a hypothesis h ∈ H, where h: X → boolean
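
One way to render this type signature concretely is with Python type aliases (an illustration of the signature above, with alias names of my own choosing):

```python
from typing import Callable, Sequence, Tuple

# X: a day described by six nominal attributes
Instance = Tuple[str, str, str, str, str, str]
# A training example pairs an instance with its boolean label c(x)
Example = Tuple[Instance, bool]
# A hypothesis h in H, viewed extensionally as a function X -> boolean
Hypothesis = Callable[[Instance], bool]
# The learner L: Vector(X x boolean) -> (X -> boolean)
Learner = Callable[[Sequence[Example]], Hypothesis]
```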

Instances, Hypotheses, and the Partial Ordering Less-Specific-Than
[Figure: instances x1, x2 in X mapped to hypotheses h1, h2, h3 in H, arranged from specific to general]
x1 = <Sunny, Warm, High, Strong, Cool, Same>
x2 = <Sunny, Warm, High, Light, Warm, Same>
h1 = <Sunny, ?, ?, Strong, ?, ?>
h2 = <Sunny, ?, ?, ?, ?, ?>
h3 = <Sunny, ?, ?, ?, Cool, ?>
h2 ≤P h1 and h2 ≤P h3, where ≤P ≡ Less-Specific-Than ≡ More-General-Than
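
A sketch of the Less-Specific-Than (more-general-than-or-equal) test for conjunctive hypotheses; helper names are illustrative, not from the lecture:

```python
def constraint_subsumes(cg, cs):
    """True iff the more-general constraint cg accepts every value cs accepts."""
    return cg == "?" or cg == cs or cs == "Ø"

def more_general_or_equal(hg, hs):
    """The Less-Specific-Than ordering: hg covers every instance hs covers."""
    return all(constraint_subsumes(g, s) for g, s in zip(hg, hs))

h1 = ("Sunny", "?", "?", "Strong", "?", "?")
h2 = ("Sunny", "?", "?", "?", "?", "?")
print(more_general_or_equal(h2, h1))  # True:  h2 is less specific than h1
print(more_general_or_equal(h1, h2))  # False: h1 is strictly more specific
```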

Hypothesis Space Search by Find-S
[Figure: positive/negative instances x1-x4 in X and the hypotheses h0-h4 that Find-S visits, from specific to general]
x1 = <Sunny, Warm, Normal, Strong, Warm, Same>, +
x2 = <Sunny, Warm, High, Strong, Warm, Same>, +
x3 = <Rainy, Cold, High, Strong, Warm, Change>, -
x4 = <Sunny, Warm, High, Strong, Cool, Change>, +
h0 = <Ø, Ø, Ø, Ø, Ø, Ø>
h1 = <Sunny, Warm, Normal, Strong, Warm, Same>
h2 = h3 = <Sunny, Warm, ?, Strong, Warm, Same>
h4 = <Sunny, Warm, ?, Strong, ?, ?>
Shortcomings of Find-S:
  Can't tell whether it has learned the concept
  Can't tell when the training data are inconsistent
  Picks a maximally specific h (why?)
  Depending on H, there might be several!
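
A direct Python sketch of Find-S on the EnjoySport trace above (attribute values are from the slides; variable names are my own):

```python
def find_s(examples):
    h = None                          # stands in for h0 = <Ø, Ø, Ø, Ø, Ø, Ø>
    for x, label in examples:
        if not label:                 # Find-S ignores negative examples
            continue
        if h is None:
            h = list(x)               # first positive example: copy it verbatim
        else:
            # minimally generalize: keep agreeing values, relax the rest to "?"
            h = [hi if hi == xi else "?" for hi, xi in zip(h, x)]
    return tuple(h) if h else ("Ø",) * 6

D = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), True),
    (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"), True),
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), False),
    (("Sunny", "Warm", "High",   "Strong", "Cool", "Change"), True),
]
print(find_s(D))  # ('Sunny', 'Warm', '?', 'Strong', '?', '?')
```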

Version Spaces
Definition: Consistent Hypotheses
  A hypothesis h is consistent with a set of training examples D of target concept c if and only if h(x) = c(x) for each training example <x, c(x)> in D.
  Consistent(h, D) ≡ ∀ <x, c(x)> ∈ D . h(x) = c(x)
Definition: Version Space
  The version space VS_{H,D}, with respect to hypothesis space H and training examples D, is the subset of hypotheses from H consistent with all training examples in D.
  VS_{H,D} ≡ { h ∈ H | Consistent(h, D) }
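
Consistent(h, D) transcribes almost directly into code (matches is the constraint test from the earlier sketch, repeated so this snippet stands alone):

```python
def matches(h, x):
    return all(c == "?" or c == v for c, v in zip(h, x))

def consistent(h, D):
    """True iff h(x) = c(x) for every <x, c(x)> in D."""
    return all(matches(h, x) == label for x, label in D)
```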

The List-Then-Eliminate Algorithm
1. Initialization: VersionSpace ← a list containing every hypothesis in H
2. For each training example <x, c(x)>:
     Remove from VersionSpace any hypothesis h for which h(x) ≠ c(x)
3. Output the list of hypotheses in VersionSpace
Example version space:
  S: { <Sunny, Warm, ?, Strong, ?, ?> }
     <Sunny, ?, ?, Strong, ?, ?>   <Sunny, Warm, ?, ?, ?, ?>   <?, Warm, ?, Strong, ?, ?>
  G: { <Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?> }
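
A brute-force sketch of List-Then-Eliminate for EnjoySport. It enumerates every conjunctive hypothesis and filters by consistency; hypotheses containing Ø are omitted since they match nothing and so can never be consistent with a positive example. Attribute domains are restricted to the values that appear in these slides (an assumption of this sketch).

```python
from itertools import product

DOMAINS = [["Sunny", "Rainy"], ["Warm", "Cold"], ["Normal", "High"],
           ["Strong", "Light"], ["Warm", "Cool"], ["Same", "Change"]]

def matches(h, x):
    return all(c == "?" or c == v for c, v in zip(h, x))

def list_then_eliminate(examples):
    version_space = list(product(*[d + ["?"] for d in DOMAINS]))  # 3^6 = 729
    for x, label in examples:
        version_space = [h for h in version_space if matches(h, x) == label]
    return version_space

D = [(("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), True),
     (("Sunny", "Warm", "High", "Strong", "Warm", "Same"), True),
     (("Rainy", "Cold", "High", "Strong", "Warm", "Change"), False),
     (("Sunny", "Warm", "High", "Strong", "Cool", "Change"), True)]

for h in list_then_eliminate(D):   # prints the six hypotheses shown above
    print(h)
```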

Representing Version Spaces
Hypothesis space
  A finite meet semilattice (partial ordering Less-Specific-Than; ⊥ ≡ all "?")
  Every pair of hypotheses has a greatest lower bound (GLB)
  VS_{H,D} ≡ the consistent poset (partially-ordered subset of H)
Definition: General Boundary
  General boundary G of version space VS_{H,D}: set of most general members
  Most general ≡ minimal elements of VS_{H,D} ≡ "set of necessary conditions"
Definition: Specific Boundary
  Specific boundary S of version space VS_{H,D}: set of most specific members
  Most specific ≡ maximal elements of VS_{H,D} ≡ "set of sufficient conditions"
Version space
  Every member of the version space lies between S and G
  VS_{H,D} ≡ { h ∈ H | ∃ s ∈ S . ∃ g ∈ G . g ≤P h ≤P s }, where ≤P ≡ Less-Specific-Than

Candidate Elimination Algorithm [1]
1. Initialization
   G ← (singleton) set containing the most general hypothesis in H, denoted {<?, …, ?>}
   S ← set of most specific hypotheses in H, denoted {<Ø, …, Ø>}
2. For each training example d:
   If d is a positive example (Update-S):
     Remove from G any hypotheses inconsistent with d
     For each hypothesis s in S that is not consistent with d:
       Remove s from S
       Add to S all minimal generalizations h of s such that
         1. h is consistent with d
         2. some member of G is more general than h
       (These are the greatest lower bounds, or meets, s ∧ d, in VS_{H,D})
     Remove from S any hypothesis that is more general than another hypothesis in S
     (remove any dominated elements)

Candidate Elimination Algorithm [2]
(continued)
   If d is a negative example (Update-G):
     Remove from S any hypotheses inconsistent with d
     For each hypothesis g in G that is not consistent with d:
       Remove g from G
       Add to G all minimal specializations h of g such that
         1. h is consistent with d
         2. some member of S is more specific than h
       (These are the least upper bounds, or joins, g ∨ d, in VS_{H,D})
     Remove from G any hypothesis that is less general than another hypothesis in G
     (remove any dominating elements)
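
A compact sketch of Candidate Elimination for conjunctive hypotheses over finite attribute domains (an illustration under the same representation as the earlier sketches, not the lecture's reference code; helper definitions are repeated so the snippet runs on its own):

```python
DOMAINS = [["Sunny", "Rainy"], ["Warm", "Cold"], ["Normal", "High"],
           ["Strong", "Light"], ["Warm", "Cool"], ["Same", "Change"]]
NULL = ("Ø",) * 6                          # the most specific hypothesis

def matches(h, x):
    return all(c == "?" or c == v for c, v in zip(h, x))

def more_general_or_equal(hg, hs):
    return all(g == "?" or g == s or s == "Ø" for g, s in zip(hg, hs))

def min_generalizations(s, x):
    """Minimal generalizations of s covering positive x (the meet s ∧ x)."""
    if s == NULL:
        return [tuple(x)]                  # Ø generalizes directly to x itself
    return [tuple(si if si == xi else "?" for si, xi in zip(s, x))]

def min_specializations(g, x):
    """Minimal specializations of g excluding negative x (the joins g ∨ x)."""
    results = []
    for i, (gi, xi) in enumerate(zip(g, x)):
        if gi == "?":
            for v in DOMAINS[i]:
                if v != xi:                # pin attribute i to a value x lacks
                    results.append(g[:i] + (v,) + g[i + 1:])
    return results

def candidate_elimination(examples):
    S, G = {NULL}, {("?",) * 6}
    for x, label in examples:
        if label:                          # positive example: Update-S
            G = {g for g in G if matches(g, x)}
            for s in [s for s in S if not matches(s, x)]:
                S.remove(s)
                S |= {h for h in min_generalizations(s, x)
                      if any(more_general_or_equal(g, h) for g in G)}
            S = {s for s in S if not any(
                 s != t and more_general_or_equal(s, t) for t in S)}
        else:                              # negative example: Update-G
            S = {s for s in S if not matches(s, x)}
            for g in [g for g in G if matches(g, x)]:
                G.remove(g)
                G |= {h for h in min_specializations(g, x)
                      if any(more_general_or_equal(h, s) for s in S)}
            G = {g for g in G if not any(
                 g != t and more_general_or_equal(t, g) for t in G)}
    return S, G

D = [(("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), True),
     (("Sunny", "Warm", "High", "Strong", "Warm", "Same"), True),
     (("Rainy", "Cold", "High", "Strong", "Warm", "Change"), False),
     (("Sunny", "Warm", "High", "Strong", "Cool", "Change"), True)]

S, G = candidate_elimination(D)
print(S)  # {('Sunny', 'Warm', '?', 'Strong', '?', '?')}
print(G)  # {('Sunny','?','?','?','?','?'), ('?','Warm','?','?','?','?')}
```

Running this on the four EnjoySport examples reproduces the boundary sets S4 and G4 of the example trace on the next slide.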

Example Trace
S0 = {<Ø, Ø, Ø, Ø, Ø, Ø>},  G0 = {<?, ?, ?, ?, ?, ?>}
d1 = <Sunny, Warm, Normal, Strong, Warm, Same>, Yes:
  S1 = {<Sunny, Warm, Normal, Strong, Warm, Same>},  G1 = G0
d2 = <Sunny, Warm, High, Strong, Warm, Same>, Yes:
  S2 = {<Sunny, Warm, ?, Strong, Warm, Same>},  G2 = G1
d3 = <Rainy, Cold, High, Strong, Warm, Change>, No:
  S3 = S2,  G3 = {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>, <?, ?, ?, ?, ?, Same>}
d4 = <Sunny, Warm, High, Strong, Cool, Change>, Yes:
  S4 = {<Sunny, Warm, ?, Strong, ?, ?>},  G4 = {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>}
The final version space also contains the hypotheses between S4 and G4:
  <Sunny, ?, ?, Strong, ?, ?>, <Sunny, Warm, ?, ?, ?, ?>, <?, Warm, ?, Strong, ?, ?>

Terminology
Supervised Learning
  Concept – function from observations to categories; so far, boolean-valued (+/-)
  Target (function) – the true function f
  Hypothesis – proposed function h believed to be similar to f
  Hypothesis space – space of all hypotheses that can be generated by the learning system
  Example – tuples of the form <x, f(x)>
  Instance space (aka example space) – space of all possible examples
  Classifier – discrete-valued function whose range is a set of class labels
Inductive Learning
  Inductive generalization – process of generating hypotheses h ∈ H that describe cases not yet observed
  The inductive learning hypothesis – basis for inductive generalization
Analytical Learning
  Domain theory T – set of assertions to explain examples
  Analytical generalization – process of generating h consistent with D and T
  Explanation – proof in terms of T that x satisfies c(x)

Summary Points
Concept Learning as Search through H
  Hypothesis space H as a state space
  Learning: finding the correct hypothesis
Inductive Leaps Possible Only if Learner Is Biased
  Futility of learning without bias
  Strength of inductive bias: proportional to restrictions on hypotheses
Modeling Inductive Learners
  Equivalent inductive learning and deductive inference (theorem proving) problems
  Hypothesis language: syntactic restrictions (aka representation bias)
Views of Learning and Strategies
  Removing uncertainty ("data compression")
  Role of knowledge
Integrated Inductive and Analytical Learning
  Using inductive learning to acquire domain theories for analytical learning
  Roles of integrated learning in KDD
Next Time: Introduction to ANNs