
Chapter 11: Analytical Learning

- Inductive learning: generalizes from training examples.
- Analytical learning: uses prior knowledge plus deductive reasoning.
- Explanation-based learning (EBL):
  - uses prior knowledge to analyze (explain) how each training example satisfies the target concept
  - distinguishes the relevant features, so generalization is based on logical reasoning rather than statistical regularities
  - has been applied to learning search control rules

Introduction

- Inductive learning performs poorly when the available data is insufficient.
- Explanation-based learning:
  (1) accepts explicit prior knowledge
  (2) generalizes more accurately than purely inductive systems
  (3) uses prior knowledge to reduce the complexity of the hypothesis space, which reduces sample complexity and improves generalization accuracy

The task of learning to play chess

- Target concept: chessboard positions in which black will lose its queen within two moves.
- A human can explain/analyze training examples using prior knowledge.
- Here the prior knowledge is simply the legal rules of chess.

Chapter summary

- Learning algorithms that automatically construct explanations and learn from them
- Definition of the analytical learning problem
- The PROLOG-EBG algorithm
- General properties, and the relationship to inductive learning algorithms
- Application to improving performance in large state-space search problems

The Inductive Generalization Problem

- Given: instances, a hypothesis space, a target concept, and training examples of the target concept
- Determine: a hypothesis consistent with the training examples

The Analytical Generalization Problem

- Given: instances, a hypothesis space, a target concept, training examples, and a domain theory (background knowledge)
- Determine: a hypothesis consistent with both the training examples and the domain theory

Example of an analytical learning problem

- Instance space: each instance describes a pair of objects, with attributes Color, Volume, Owner, Material, Density, and On.
- Hypothesis space H: sets of Horn clauses, e.g.
    SafeToStack(x,y) ← Volume(x,vx) ∧ Volume(y,vy) ∧ LessThan(vx,vy)
- Target concept: SafeToStack(x,y), "pairs of physical objects, such that one can be stacked safely on the other."

- Training example:
    On(Obj1,Obj2)         Owner(Obj1,Fred)
    Type(Obj1,Box)        Owner(Obj2,Louise)
    Type(Obj2,Endtable)   Density(Obj1,0.3)
    Color(Obj1,Red)       Material(Obj1,Cardboard)
    Color(Obj2,Blue)      Material(Obj2,Wood)
    Volume(Obj1,2)
- Domain theory B:
    SafeToStack(x,y) ← ¬Fragile(y)
    SafeToStack(x,y) ← Lighter(x,y)
    Lighter(x,y) ← Weight(x,wx) ∧ Weight(y,wy) ∧ LessThan(wx,wy)
    Weight(x,w) ← Volume(x,v) ∧ Density(x,d) ∧ Equal(w, times(v,d))
    Weight(x,5) ← Type(x,Endtable)
    Fragile(x) ← Material(x,Glass)

Domain theory B

- Explains why certain pairs of objects can be safely stacked on one another.
- Is described by a collection of Horn clauses, which enables the system to incorporate any learned hypotheses into subsequent domain theories.
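To make the domain theory concrete, here is a minimal sketch (not Mitchell's code) that encodes the clauses above directly as Python functions over an attribute dictionary, and checks them against the training example. The dictionary layout is an assumption for illustration; the attribute names mirror the slide's literals.

```python
# Sketch: the SafeToStack domain theory as plain Python functions.

def weight(obj, facts):
    """Weight(x,w) <- Volume(x,v) ^ Density(x,d) ^ Equal(w, times(v,d));
    Weight(x,5) <- Type(x, Endtable)."""
    if facts[obj].get("Type") == "Endtable":
        return 5
    return facts[obj]["Volume"] * facts[obj]["Density"]

def lighter(x, y, facts):
    # Lighter(x,y) <- Weight(x,wx) ^ Weight(y,wy) ^ LessThan(wx,wy)
    return weight(x, facts) < weight(y, facts)

def safe_to_stack(x, y, facts):
    # SafeToStack(x,y) <- Lighter(x,y)   (the clause used in the explanation)
    return lighter(x, y, facts)

# The training example from the slide:
facts = {
    "Obj1": {"Type": "Box", "Volume": 2, "Density": 0.3,
             "Color": "Red", "Material": "Cardboard", "Owner": "Fred"},
    "Obj2": {"Type": "Endtable", "Color": "Blue",
             "Material": "Wood", "Owner": "Louise"},
}

print(safe_to_stack("Obj1", "Obj2", facts))  # True: weight 0.6 < 5
```

Chaining these functions by hand traces exactly the proof (explanation) that PROLOG-EBG would construct for this positive example.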

Learning with Perfect Domain Theories: PROLOG-EBG

- Correct: every assertion in the theory is a truthful statement about the world.
- Complete: the theory covers every positive example of the target concept.
- Is a perfect domain theory ever available? Yes, in some domains (e.g., the legal rules of chess).
- But why does the system need to learn at all when a perfect domain theory is given?

PROLOG-EBG

- Operation:
  (1) learn a single Horn clause rule
  (2) remove the positive examples covered by this rule
  (3) iterate this process on the remaining positive examples
- Given a complete and correct domain theory, it outputs a hypothesis that is correct and covers the observed positive training examples.

The PROLOG-EBG Algorithm

PROLOG-EBG(TargetConcept, TrainingExamples, DomainTheory)
  LearnedRules ← {}
  Pos ← the positive examples from TrainingExamples
  for each PositiveExample in Pos that is not covered by LearnedRules, do
    1. Explain:
       Explanation ← an explanation (proof) in terms of DomainTheory
         that PositiveExample satisfies the TargetConcept
    2. Analyze:
       SufficientConditions ← the most general set of features of
         PositiveExample sufficient to satisfy the TargetConcept
         according to the Explanation
    3. Refine:
       LearnedRules ← LearnedRules + NewHornClause, where NewHornClause is
         "TargetConcept ← SufficientConditions"
  Return LearnedRules
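The covering loop above can be sketched in a few lines of Python. This is a hedged skeleton, not a full implementation: the real Explain and Analyze steps (theorem proving and weakest-preimage regression) depend on the domain theory, so here they are passed in as functions, and rules are represented as lists of condition predicates.

```python
# Skeleton of the PROLOG-EBG covering loop (illustrative only).

def prolog_ebg(positives, explain, analyze):
    learned_rules = []  # each rule is (target_concept, list of conditions)
    covers = lambda rule, ex: all(cond(ex) for cond in rule[1])
    for ex in positives:
        if any(covers(r, ex) for r in learned_rules):
            continue                               # already covered
        explanation = explain(ex)                  # 1. Explain
        conditions = analyze(ex, explanation)      # 2. Analyze
        learned_rules.append(("TargetConcept", conditions))  # 3. Refine
    return learned_rules

# Toy usage: the rule learned from the first example covers the others,
# so only one rule is produced.
rules = prolog_ebg([1, 2, 3],
                   explain=lambda ex: None,
                   analyze=lambda ex, expl: [lambda e: e > 0])
print(len(rules))  # 1
```

The toy `explain`/`analyze` stand-ins only demonstrate the control flow; in PROLOG-EBG proper, `analyze` would return the weakest preimage of the target concept with respect to the explanation.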


The Weakest Preimage

- The weakest preimage of a conclusion C with respect to a proof P is the most general set of initial assertions A such that A entails C according to P.
- Regressing the target concept through the explanation yields the most general rule:
    SafeToStack(x,y) ← Volume(x,vx) ∧ Density(x,dx) ∧ Equal(wx, times(vx,dx)) ∧ LessThan(wx,5) ∧ Type(y,Endtable)
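As a sketch, the regressed rule can be written out as a Python predicate over the same kind of attribute dictionary as the training example. This hard-codes the finished rule; it is not a general regression engine. Note which attributes are absent: Color, Owner, and Material play no role, because the explanation showed them to be irrelevant.

```python
# The weakest-preimage rule, hard-coded as a predicate (illustrative).

def learned_rule(x, y, facts):
    """SafeToStack(x,y) <- Volume(x,vx) ^ Density(x,dx)
       ^ Equal(wx, times(vx,dx)) ^ LessThan(wx,5) ^ Type(y,Endtable)"""
    fx = facts[x]
    if "Volume" not in fx or "Density" not in fx:
        return False                       # preconditions on x not satisfied
    wx = fx["Volume"] * fx["Density"]      # Equal(wx, times(vx, dx))
    return wx < 5 and facts[y].get("Type") == "Endtable"

facts = {"Obj1": {"Volume": 2, "Density": 0.3, "Color": "Red"},
         "Obj2": {"Type": "Endtable", "Color": "Blue"}}
print(learned_rule("Obj1", "Obj2", facts))  # True: 0.6 < 5, y is an endtable
```

Unlike a rule memorizing the training example, this predicate applies to any light object stacked on any endtable, which is exactly the generalization the Analyze step extracts.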


Remarks on Explanation-Based Learning

- Properties:
  (1) produces justified general hypotheses by using prior knowledge
  (2) the explanation determines which attributes are relevant
  (3) regressing the target concept allows deriving more general constraints
  (4) each learned Horn clause is a sufficient condition for satisfying the target concept
  (5) implicitly assumes the domain theory is complete and correct
  (6) the generality of the learned Horn clauses depends on the formulation of the domain theory

Perspectives on explanation-based learning

- (1) EBL as theory-guided generalization of examples
- (2) EBL as example-guided reformulation of theories
- (3) EBL as "just" restating what the learner already "knows"
- Knowledge compilation:
  - reformulates the domain theory to produce general rules that classify examples in a single inference step
  - this transformation improves efficiency without altering the correctness of the system's knowledge

Characteristics

- Discovering new features
  - learned features play a role analogous to the features discovered by the hidden units of neural networks
- Deductive learning
  - in ILP, background knowledge enlarges the set of hypotheses
  - in PROLOG-EBG, the domain theory reduces the set of acceptable hypotheses
- Inductive bias
  - the inductive bias of PROLOG-EBG is the domain theory B
  - the approximate inductive bias of PROLOG-EBG is the domain theory B plus a preference for small sets of maximally general Horn clauses

The LEMMA-ENUMERATOR algorithm

- enumerates all proof trees; for each proof tree it calculates the weakest preimage and constructs a Horn clause
- ignores the training data, and outputs a superset of the Horn clauses output by PROLOG-EBG
- Role of the training data: to focus the algorithm on generating rules that cover the distribution of instances that occur in practice
- An observed positive example allows generalizing deductively to other unseen instances, e.g.:
    IF (PlayTennis = Yes) ∧ (Humidity = x)
    THEN (PlayTennis = Yes) for all (Humidity ≤ x)
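A tiny sketch of that deductive generalization step: one observed positive example with Humidity = x licenses a rule covering every instance with lower humidity. The threshold 0.30 below is illustrative, not from the slide.

```python
# Sketch: a single positive example deductively generalizes to a
# threshold rule, PlayTennis = Yes for all Humidity <= x.

def rule_from_positive_example(observed_humidity):
    # Returns a predicate covering all unseen instances with lower humidity.
    return lambda humidity: humidity <= observed_humidity

rule = rule_from_positive_example(0.30)   # the observed positive example
print(rule(0.25), rule(0.40))             # True False
```

The point is that the rule covers instances never observed, yet the generalization is justified deductively (by the background assumption about humidity), not statistically.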

Knowledge-level learning

- The learned hypothesis entails predictions that go beyond those entailed by the domain theory.
- Deductive closure: the set of all predictions entailed by a set of assertions.
- Determinations: some attribute of the instance is fully determined by certain other attributes, without specifying the exact nature of the dependency.
- Example:
  - target concept: "people who speak Portuguese"
  - domain theory: "the language spoken by a person is determined by their nationality"
  - training example: "Joe, a 23-year-old Brazilian, speaks Portuguese"
  - conclusion: "all Brazilians speak Portuguese"
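The determination example above can be sketched in code. The determination asserts that language is a function of nationality, so a single example fixes that function's value for one nationality. The dictionary-based rule representation is an assumption for illustration.

```python
# Sketch: learning from one example plus a determination
# ("the language spoken by a person is determined by their nationality").

def rule_from_determination(example, determining_attr, determined_attr):
    # The determination says determined_attr is a function of determining_attr,
    # so one example fixes the function's value for one argument.
    return {example[determining_attr]: example[determined_attr]}

joe = {"Name": "Joe", "Age": 23, "Nationality": "Brazilian",
       "Language": "Portuguese"}
rule = rule_from_determination(joe, "Nationality", "Language")
print(rule)  # {'Brazilian': 'Portuguese'}  i.e. all Brazilians speak Portuguese
```

This is knowledge-level learning: the conclusion "all Brazilians speak Portuguese" is not in the deductive closure of the domain theory alone; it needed the training example.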

Explanation-Based Learning of Search Control Knowledge

- Goal: speed up complex search programs.
- A complete and correct domain theory for learning search control knowledge is already available: the definitions of the legal search operators, plus the definition of the search objective.
- Problem: find a sequence of operators that will transform an arbitrary initial state S into some final state F that satisfies the goal predicate G.

PRODIGY

- A domain-independent planning system.
- Finds a sequence of operators that leads from the initial state to a state satisfying the goal predicate.
- A means-ends planner: it decomposes problems into subgoals, solves them, and combines their solutions into a solution for the full problem.

The SOAR System

- Supports a broad variety of problem-solving strategies.
- Learns by explaining situations in which its current strategy leads to inefficiencies.

Practical problems in applying EBL to learning search control

- The number of control rules that must be learned is very large, and matching them becomes costly. Responses:
  (1) efficient algorithms for matching rules
  (2) utility analysis: estimating the computational cost and benefit of each rule
  (3) identifying the types of rules that will be costly to match, then re-expressing such rules in more efficient forms or optimizing the rule-matching algorithm
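As an illustration of utility analysis, here is a sketch in the spirit of Minton's measure for PRODIGY: a learned control rule is worth keeping only if the search time it saves outweighs the cost of matching it. The exact bookkeeping and all numbers below are made up for illustration.

```python
# Illustrative utility analysis for a learned search-control rule.

def rule_utility(avg_savings, application_freq, avg_match_cost):
    # Utility = (AvgSavings x ApplicationFreq) - AvgMatchCost
    # Rules with negative utility would be discarded.
    return avg_savings * application_freq - avg_match_cost

# A rule that rarely applies but is expensive to match has negative utility:
print(rule_utility(avg_savings=100.0, application_freq=0.01,
                   avg_match_cost=5.0))   # 1.0 - 5.0 = -4.0
```

Estimating these quantities requires monitoring each rule during subsequent problem solving, which is exactly why utility analysis is listed among the responses above.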

- Constructing explanations for the desired target concept may be intractable.
  (1) Example: explaining "states for which operator A leads toward the optimal solution" requires solving the search problem itself.
  (2) "Lazy" or "incremental" explanation:
    - heuristics are used to produce partial/approximate but tractable explanations
    - the learned rules may therefore be imperfect
    - the performance of each rule is monitored on subsequent cases
    - when an error occurs, the original explanation is elaborated to cover the new case, and a more refined rule is extracted