Slide 1. Deductive (explanation-based) learning
© 2002, G. Tecuci, Learning Agents Laboratory
Learning Agents Laboratory, Computer Science Department, George Mason University
Prof. Gheorghe Tecuci

Slide 2. Overview
- The explanation-based learning problem
- The explanation-based learning method
- The utility problem
- Recommended reading
- Discussion

Slide 3. The explanation-based learning problem
Given:
- A training example: a positive example of the concept to be learned.
- A learning goal: a specification of the desirable features of the concept to be learned from the training example.
- Background knowledge: prior knowledge that allows proving (explaining) that the training example is indeed a positive example of the concept.
Determine:
- A concept definition representing a deductive generalization of the training example that satisfies the learning goal.
Purpose of learning: improve the problem-solving efficiency of the agent.

Slide 4. Explanation-based learning problem: illustration
Given
Training example - the description of a particular cup:
OWNER(OBJ1, EDGAR) & COLOR(OBJ1, RED) & IS(OBJ1, LIGHT) & PART-OF(CONCAVITY1, OBJ1) & ISA(CONCAVITY1, CONCAVITY) & IS(CONCAVITY1, UPWARD-POINTING) & PART-OF(BOTTOM1, OBJ1) & ISA(BOTTOM1, BOTTOM) & IS(BOTTOM1, FLAT) & PART-OF(BODY1, OBJ1) & ISA(BODY1, BODY) & IS(BODY1, SMALL) & PART-OF(HANDLE1, OBJ1) & ISA(HANDLE1, HANDLE) & LENGTH(HANDLE1, 5)
Learning goal
Find a sufficient concept definition for CUP, expressed in terms of the features used in the training example (LIGHT, HANDLE, FLAT, etc.).
Background knowledge
∀x, LIFTABLE(x) & STABLE(x) & OPEN-VESSEL(x) => CUP(x)
∀x ∀y, IS(x, LIGHT) & PART-OF(y, x) & ISA(y, HANDLE) => LIFTABLE(x)
∀x ∀y, PART-OF(y, x) & ISA(y, BOTTOM) & IS(y, FLAT) => STABLE(x)
∀x ∀y, PART-OF(y, x) & ISA(y, CONCAVITY) & IS(y, UPWARD-POINTING) => OPEN-VESSEL(x)
Determine
A deductive generalization of the training example that satisfies the learning goal:
∀x ∀y1 ∀y2 ∀y3, [PART-OF(y1, x) & ISA(y1, CONCAVITY) & IS(y1, UPWARD-POINTING) & PART-OF(y2, x) & ISA(y2, BOTTOM) & IS(y2, FLAT) & IS(x, LIGHT) & PART-OF(y3, x) & ISA(y3, HANDLE) => CUP(x)]
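To make the later method slides concrete, here is a minimal sketch of how this training example and background knowledge might be encoded for a toy EBL implementation (a hypothetical Python representation, not part of the original slides): facts are tuples, rules are (head, body) pairs, and strings beginning with '?' act as variables.

    # Hypothetical encoding of the CUP example: tuples for facts,
    # (head, body) pairs for rules, '?'-prefixed strings for variables.
    training_example = [
        ("OWNER", "OBJ1", "EDGAR"), ("COLOR", "OBJ1", "RED"), ("IS", "OBJ1", "LIGHT"),
        ("PART-OF", "CONCAVITY1", "OBJ1"), ("ISA", "CONCAVITY1", "CONCAVITY"),
        ("IS", "CONCAVITY1", "UPWARD-POINTING"),
        ("PART-OF", "BOTTOM1", "OBJ1"), ("ISA", "BOTTOM1", "BOTTOM"), ("IS", "BOTTOM1", "FLAT"),
        ("PART-OF", "BODY1", "OBJ1"), ("ISA", "BODY1", "BODY"), ("IS", "BODY1", "SMALL"),
        ("PART-OF", "HANDLE1", "OBJ1"), ("ISA", "HANDLE1", "HANDLE"), ("LENGTH", "HANDLE1", 5),
    ]
    background_rules = [
        (("CUP", "?x"), [("LIFTABLE", "?x"), ("STABLE", "?x"), ("OPEN-VESSEL", "?x")]),
        (("LIFTABLE", "?x"), [("IS", "?x", "LIGHT"), ("PART-OF", "?y", "?x"), ("ISA", "?y", "HANDLE")]),
        (("STABLE", "?x"), [("PART-OF", "?y", "?x"), ("ISA", "?y", "BOTTOM"), ("IS", "?y", "FLAT")]),
        (("OPEN-VESSEL", "?x"), [("PART-OF", "?y", "?x"), ("ISA", "?y", "CONCAVITY"),
                                 ("IS", "?y", "UPWARD-POINTING")]),
    ]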

Slide 5. Overview
- The explanation-based learning problem
- The explanation-based learning method
- The utility problem
- Recommended reading
- Discussion

Slide 6. Explanation-based learning method
- Explain: construct an explanation that proves that the training example is an example of the concept to be learned.
- Generalize: generalize the found explanation as much as possible so that the proof still holds, and extract from it a concept definition that satisfies the learning goal.

Slide 7. Explain
Prove that the training example is a cup. The leaves of the proof tree are those features of the training example that allow one to recognize it as a cup. By building the proof, one isolates the relevant features of the training example.
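A minimal sketch of the Explain step, assuming the tuple encoding introduced after the problem illustration above: a naive backward chainer with unification and backtracking that returns a proof tree whose leaves are facts of the training example. This is an illustrative toy, not the authors' implementation.

    import itertools

    _fresh = itertools.count()

    def unify(a, b, s):
        # Extend substitution s so that a and b become equal, or return None.
        if s is None: return None
        if a == b: return s
        if isinstance(a, str) and a.startswith("?"):
            return unify(s[a], b, s) if a in s else {**s, a: b}
        if isinstance(b, str) and b.startswith("?"):
            return unify(a, s[b], s) if b in s else {**s, b: a}
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):
                s = unify(x, y, s)
                if s is None: return None
            return s
        return None

    def subst(t, s):
        # Apply substitution s to term t.
        if isinstance(t, tuple): return tuple(subst(x, s) for x in t)
        while isinstance(t, str) and t.startswith("?") and t in s: t = s[t]
        return t

    def rename(t, i):
        # Give a rule's variables fresh names for each use.
        if isinstance(t, tuple): return tuple(rename(x, i) for x in t)
        return t + "#" + str(i) if isinstance(t, str) and t.startswith("?") else t

    def prove(goal, facts, rules, s):
        # Yield (substitution, proof_tree) pairs; a proof tree is (literal, subproofs).
        g = subst(goal, s)
        for f in facts:                              # leaves: features of the example
            s2 = unify(g, f, dict(s))
            if s2 is not None: yield s2, (f, [])
        for head, body in rules:                     # internal nodes: rule applications
            i = next(_fresh)
            s2 = unify(g, rename(head, i), dict(s))
            if s2 is None: continue
            for s3, subs in prove_all([rename(b, i) for b in body], facts, rules, s2):
                yield s3, (subst(g, s3), subs)

    def prove_all(goals, facts, rules, s):
        # Prove a conjunction of goals, backtracking over alternatives.
        if not goals:
            yield s, []
            return
        for s1, p in prove(goals[0], facts, rules, s):
            for s2, ps in prove_all(goals[1:], facts, rules, s1):
                yield s2, [p] + ps

    # s, proof = next(prove(("CUP", "OBJ1"), training_example, background_rules, {}))
    # The leaves of `proof` are exactly the relevant features mentioned on this slide.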

Slide 8. The semantic network representation of the cup example; the enclosed features are the relevant ones.

Slide 9. Generalize
Generalize the proof tree as much as possible so that the proof still holds:
- replace each rule instance with its general pattern;
- find the most general unification of these patterns.
In the cup proof, OPEN-VESSEL(x1) must unify with OPEN-VESSEL(x2), STABLE(x1) with STABLE(x3), and LIFTABLE(x1) with LIFTABLE(x4); therefore x1 = x2 = x3 = x4 = x.
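A sketch of the Generalize step under the same toy encoding, reusing unify, subst, rename, and _fresh from the previous sketch (an assumption, not the original code). Because each non-operational predicate in this domain theory is concluded by a single rule, the generalized proof can be rebuilt top-down from the generic goal CUP(?x), replacing each rule instance by its general pattern and unifying the patterns; goals that no rule concludes are operational and become the leaves. A full EBL implementation would instead follow the structure of the specific proof tree recorded in the Explain step.

    def generalize(goal, rules, s=None):
        # Expand the generic goal with general rule patterns only (no example facts).
        # Returns (substitution, leaves); the leaves are the operational literals.
        s = {} if s is None else s
        g = subst(goal, s)
        for head, body in rules:
            i = next(_fresh)
            s2 = unify(g, rename(head, i), dict(s))
            if s2 is None: continue
            leaves = []
            for b in body:
                s2, ls = generalize(rename(b, i), rules, s2)
                leaves.extend(ls)
            return s2, leaves
        return s, [g]                    # no rule concludes g: g is operational

    # s, leaves = generalize(("CUP", "?x"), background_rules)
    # [subst(l, s) for l in leaves] gives the nine literals of the operational
    # definition of CUP shown on the next slide (with x, y1, y2, y3 renamed).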

Slide 10. The leaves of this generalized proof tree represent an operational definition of the concept CUP:
∀x1 ∀y1 ∀y2 ∀y3, [PART-OF(y1, x1) & ISA(y1, CONCAVITY) & IS(y1, UPWARD-POINTING) & PART-OF(y2, x1) & ISA(y2, BOTTOM) & IS(y2, FLAT) & IS(x1, LIGHT) & PART-OF(y3, x1) & ISA(y3, HANDLE) => CUP(x1)]

Slide 11. Discussion
How does this learning method improve the efficiency of the problem-solving process?

Slide 12.
The goal of this learning strategy is to improve the efficiency of problem solving. The agent is able to perform some task, but in an inefficient manner, and we would like to teach the agent to perform the task faster.

Consider, for instance, an agent that is able to recognize cups. The agent receives a description of a cup that includes many features. The agent recognizes that this object is a cup by performing a complex reasoning process based on its prior knowledge. This process is illustrated by the proof tree which demonstrates that object o1 is indeed a cup: the object o1 is light and has a handle, therefore it is liftable; and so on. Being liftable, stable, and an open vessel, it is a cup.

However, the agent can learn from this process to recognize a cup faster. The next step in the learning process is to generalize the proof tree. While the initial tree proves that the specific object o1 is a cup, the generalized tree proves that any object x which is light, has a handle, and has some other features is a cup. Therefore, to recognize that an object o2 is a cup, the agent only needs to look for the presence of the features discovered as important; it no longer needs to build a complex proof tree. Cup recognition is therefore done much faster.

Finally, notice that the agent needs only one example to learn from. However, it needs a lot of prior knowledge to prove that this example is a cup, and providing such prior knowledge to the agent is a very complex task.
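As a rough illustration of the speed-up, reusing the toy chainer sketched earlier (the function name below is hypothetical): once the operational definition is available, recognizing a cup reduces to a single conjunctive match of its body against the input description, with no rule chaining at all.

    def matches_operational_definition(description, learned_body):
        # learned_body: the list of operational literals produced by generalize().
        # Matching is pure fact lookup: prove_all is called with an empty rule set.
        return next(prove_all(learned_body, description, [], {}), None) is not None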

Slide 13. Overview
- The explanation-based learning problem
- The explanation-based learning method
- The utility problem
- Recommended reading
- Discussion

Slide 14. The utility problem: discussion
Let us assume that we have learned an operational definition of the concept "cup".
- What happens to the efficiency of recognizing cups covered by the learned rule? Why?
- What happens to the efficiency of recognizing cups when the input is not covered by the learned rule? Why?
- When does the efficiency increase?
- How can we ensure that the efficiency increases?

Slide 15. The utility problem: a solution
A cost/benefit formula estimates the utility of the learned rule for the efficiency of the system:
Utility = (AvrSavings * ApplicFreq) - AvrMatchCost
where
- AvrSavings = the average time savings when the rule is applicable
- ApplicFreq = the probability that the rule is applicable when it is tested
- AvrMatchCost = the average time cost of matching the rule
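The formula translates directly into code; the numbers in the example below are made up purely for illustration.

    def rule_utility(avr_savings, applic_freq, avr_match_cost):
        # Expected saving per problem minus the cost of testing the rule on every problem.
        return avr_savings * applic_freq - avr_match_cost

    # A rule that saves 50 ms when it fires, fires on 20% of the problems, and costs
    # 5 ms to match has utility 50 * 0.2 - 5 = 5 > 0, so it is kept; if it fired on
    # only 5% of the problems, its utility would be 50 * 0.05 - 5 = -2.5 and the
    # rule would be discarded.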

Slide 16. The utility problem: discussion of the solution
- How to estimate AvrMatchCost? Measure the rule's matching cost during subsequent problem solving.
- How to estimate ApplicFreq? Maintain a statistic on the rule's use during subsequent problem solving.
- How to estimate AvrSavings? Measuring the savings requires running the problem solver with and without the rule on each problem. Is this practical? Which would be a good heuristic?
  Heuristic: one possible solution is to use an estimate of the rule's average savings based on the savings that the rule would have produced on the training example from which it was learned.
Conclusion: the system maintains a statistic on the rule's use during subsequent problem solving in order to determine its utility. If the rule has a negative utility, it is discarded.

Slide 17. The utility of the learned rules
Explanation-based learning has been introduced as a method for improving the efficiency of a system. Let us consider again the explanation-based system learning an operational definition of the concept CUP. The system has learned a new rule for recognizing a certain kind of cup. This rule does not contain any new knowledge; it is just a compilation of some other rules from the knowledge base. Adding this new rule to the KB has the following effects on the system's efficiency:
- it increases the efficiency of recognizing cups covered by the learned rule;
- it decreases the efficiency of recognizing cups that are not covered by the learned rule.
Adding the operational definition of cup to the KB will increase the global performance of the system only if the first effect is more important than the second one. Both effects may be combined into a cost/benefit formula that indicates the utility of the rule with respect to the efficiency of the system:
Utility = (AvrSavings * ApplicFreq) - AvrMatchCost
where
- AvrSavings = the average time savings when the rule is applicable
- ApplicFreq = the probability that the rule is applicable when it is tested
- AvrMatchCost = the average time cost of matching the rule
After learning a rule, the system should maintain a statistic on the rule's use during subsequent problem solving in order to determine its utility. If the rule has a negative utility, it is discarded. Unfortunately, although the match cost and the application frequency can be directly measured during subsequent problem solving, it is more difficult to measure the savings: doing so would require running the problem solver with and without the rule on each problem, and this would have to be done for all rules. One possible solution is to use an estimate of the rule's average savings based on the savings that the rule would have produced on the training example from which it was learned.

Slide 18. Overview
- The explanation-based learning problem
- The explanation-based learning method
- The utility problem
- Recommended reading
- Discussion

Slide 19. Exercise
Given
Training example - the following example of "supports":
[book(book1) & material(book1, rigid) & cup(cup1) & material(cup1, rigid) & above(cup1, book1) & touches(cup1, book1)] => supports(book1, cup1)
Learning goal
Find a sufficient concept definition for "supports", expressed in terms of the features used in the training example.
Background knowledge
∀x ∀y [on-top-of(y, x) & material(x, rigid) => supports(x, y)]
∀x ∀y [above(x, y) & touches(x, y) => on-top-of(x, y)]
∀x ∀y ∀z [above(x, y) & above(y, z) => above(x, z)]
Determine
A deductive generalization of the training example that satisfies the learning goal.
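For readers who want to run the toy sketches above on this exercise, here is the same hypothetical tuple encoding of its example and theory; the solution itself is left to the exercise. Note that, because of the recursive rule for above, the naive chainer should only be asked for the first proof (e.g., with next), otherwise it can recurse without bound.

    supports_example = [
        ("book", "book1"), ("material", "book1", "rigid"),
        ("cup", "cup1"), ("material", "cup1", "rigid"),
        ("above", "cup1", "book1"), ("touches", "cup1", "book1"),
    ]
    supports_rules = [
        (("supports", "?x", "?y"), [("on-top-of", "?y", "?x"), ("material", "?x", "rigid")]),
        (("on-top-of", "?x", "?y"), [("above", "?x", "?y"), ("touches", "?x", "?y")]),
        (("above", "?x", "?z"), [("above", "?x", "?y"), ("above", "?y", "?z")]),
    ]
    # s, proof = next(prove(("supports", "book1", "cup1"), supports_example, supports_rules, {}))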

Slide 20. Discussion
Do we need a training example to learn an operational definition of the concept? Why?
Answer: The learner does not need a training example; it can simply build proof trees top-down, starting with an abstract definition of the concept and growing the tree until the leaves are operational features. However, without a training example the learner will learn many operational definitions. The training example focuses the learner on the most typical case.

Slide 21. Discussion
- What is the classification accuracy of deductive learning?
- What is the classification accuracy of an example classified as positive? Why?
- What is the classification accuracy of an example classified as negative? Why?
- How could one improve the classification accuracy?

Slide 22. Learning from several positive examples
- Learn an operational definition from the first example. Consider this as the first term of a disjunctive definition.
- Eliminate all the examples already covered by this definition.
- Learn another operational definition from an uncovered example.
- Eliminate all the examples covered by this new definition and add it as a new term in the disjunctive definition of the concept.
- Continue this process until there is no training example left.
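A sketch of this covering loop; ebl_from_example and covers are hypothetical placeholders for the single-example EBL procedure sketched earlier and for a coverage test.

    def learn_disjunctive_definition(positive_examples, rules):
        # Each pass learns one conjunctive (operational) term by EBL from the first
        # still-uncovered example, then removes every example that term covers.
        definition = []                                    # disjunction of conjunctive terms
        remaining = list(positive_examples)
        while remaining:
            term = ebl_from_example(remaining[0], rules)   # hypothetical helper
            definition.append(term)
            remaining = [e for e in remaining if not covers(term, e)]  # hypothetical helper
        return definition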

Slide 23. Discussion
How to use negative examples? Develop a theory of why something is a negative example of some concept and apply the standard method.
- Does such an approach make sense when we already have a theory that explains positive examples? Why?
- Sometimes it is easier to explain that something is a negative example. Could you provide an example of such a case?

Slide 24. Discussion
- How could we apply explanation-based learning to learn inference rules from facts?
- How could we apply explanation-based learning to learn macro-operators from action plans?

Slide 25. Learning inference rules: illustration
Given
Training example - an input fact: RICE-AREA(VIETNAM)
Learning goal
Learn a general inference rule allowing the direct derivation of the input fact from facts explicitly represented in the background knowledge (e.g., RAINFALL, CLIMATE, SOIL).
Background knowledge
RAINFALL(VIETNAM, HEAVY); CLIMATE(VIETNAM, SUBTROPICAL); SOIL(VIETNAM, RED-SOIL); LOCATION(VIETNAM, SE-ASIA)
∀x, CLIMATE(x, SUBTROPICAL) => TEMPERATURE(x, WARM)
∀x, RAINFALL(x, HEAVY) => WATER-SUPPLY(x, HIGH)
∀x, SOIL(x, RED-SOIL) => SOIL(x, FERTILE-SOIL)
∀x, WATER-SUPPLY(x, HIGH) & TEMPERATURE(x, WARM) & SOIL(x, FERTILE-SOIL) => RICE-AREA(x)
This background knowledge can be used to prove the input fact.
Determine
A general inference rule that allows the direct derivation of the input fact from the facts stored in the knowledge base:
∀x, RAINFALL(x, HEAVY) & CLIMATE(x, SUBTROPICAL) & SOIL(x, RED-SOIL) => RICE-AREA(x)
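The explain/generalize sketches given earlier can be rerun on this example; only the facts and rules change (hypothetical encoding, mirroring the slide).

    rice_facts = [
        ("RAINFALL", "VIETNAM", "HEAVY"), ("CLIMATE", "VIETNAM", "SUBTROPICAL"),
        ("SOIL", "VIETNAM", "RED-SOIL"), ("LOCATION", "VIETNAM", "SE-ASIA"),
    ]
    rice_rules = [
        (("TEMPERATURE", "?x", "WARM"), [("CLIMATE", "?x", "SUBTROPICAL")]),
        (("WATER-SUPPLY", "?x", "HIGH"), [("RAINFALL", "?x", "HEAVY")]),
        (("SOIL", "?x", "FERTILE-SOIL"), [("SOIL", "?x", "RED-SOIL")]),
        (("RICE-AREA", "?x"), [("WATER-SUPPLY", "?x", "HIGH"), ("TEMPERATURE", "?x", "WARM"),
                               ("SOIL", "?x", "FERTILE-SOIL")]),
    ]
    # prove(("RICE-AREA", "VIETNAM"), rice_facts, rice_rules, {}) explains the input fact;
    # generalize(("RICE-AREA", "?x"), rice_rules) yields the leaves
    # RAINFALL(x, HEAVY), CLIMATE(x, SUBTROPICAL), SOIL(x, RED-SOIL), i.e. the learned rule.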

Slide 26. Learning macro-operators: illustration
Consider the following situation, involving a robot that can go from one room to another and can push boxes through the doors:
InRoom(Robot, Room1)
InRoom(Box, Room2)
Connects(Door1, Room1, Room2)
Connects(Door2, Room2, Room3)
Connects(Door3, Room1, Room4)
Apply explanation-based learning to learn a general macro-operator from the following problem-solving episode: to achieve the goal
InRoom(Box, Room1)
perform the actions
GoThru(Robot, Door1, Room1, Room2)
PushThru(Robot, Box, Door1, Room2, Room1)
The knowledge of the system consists of the action models and the inference rule on the next slide.

Slide 27. Learning macro-operators: illustration (cont.)
GoThru(a, d, r1, r2) ; robot a goes through door d from room r1 to room r2
Preconditions: InRoom(a, r1) ; a is in room r1
               Connects(d, r1, r2) ; door d connects room r1 with room r2
Effects: InRoom(a, r2) ; a is in room r2

PushThru(a, o, d, r1, r2) ; a pushes box o through d from r1 to r2
Preconditions: InRoom(a, r1) ; a is in room r1
               InRoom(o, r1) ; o is in room r1
               Connects(d, r1, r2) ; door d connects room r1 with room r2
Effects: InRoom(a, r2) ; a is in room r2
         InRoom(o, r2) ; o is in room r2

Connects(d, r1, r2) => Connects(d, r2, r1) ; if d connects r1 with r2 then it also connects r2 with r1

Slide 28. Learning macro-operators: illustration (cont.)
GoAndPushThru(a, o, d1, d2, r1, r2, r3) ; the robot goes from room r1 into room r2 and pushes the box into room r3
Preconditions: InRoom(a, r1) ; a is in room r1
               InRoom(o, r2) ; o is in room r2
               Connects(d1, r1, r2) ; door d1 connects room r1 with room r2
               Connects(d2, r2, r3) ; door d2 connects room r2 with room r3
Effects: InRoom(a, r3) ; a is in room r3
         InRoom(o, r3) ; o is in room r3
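A rough sketch (an assumption about the mechanics, not the authors' code) of how the macro's preconditions and effects can be composed from the two generalized action models: preconditions of the second step that the first step already achieves drop out, and, in this simplified add-only representation, the macro's effects are taken from the last step.

    def compose_macro(step1, step2):
        # step = (preconditions, effects), sets of literal strings over the shared
        # plan variables. Simplification: effects come from the last step only; a
        # full STRIPS treatment would also keep earlier effects no later step deletes.
        pre1, eff1 = step1
        pre2, eff2 = step2
        macro_pre = pre1 | (pre2 - eff1)   # step2 preconditions achieved by step1 drop out
        return macro_pre, eff2

    go_thru   = ({"InRoom(a,r1)", "Connects(d1,r1,r2)"}, {"InRoom(a,r2)"})
    push_thru = ({"InRoom(a,r2)", "InRoom(o,r2)", "Connects(d2,r2,r3)"},
                 {"InRoom(a,r3)", "InRoom(o,r3)"})
    # compose_macro(go_thru, push_thru) gives the preconditions and effects of the
    # GoAndPushThru operator shown above.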

Slide 29. Discussion
How does deductive learning compare with inductive learning? What comparison criteria should we consider?
- Input examples: empirical inductive learning (EIL) needs many examples, both positive and negative; EBL needs only one positive example.
- Background knowledge: EIL needs very little (e.g., a generalization hierarchy); EBL needs a complete and correct domain theory.
- Type of inference: EIL is inductive; EBL is deductive.
- Result of learning: EIL improves the system's competence; EBL improves the system's efficiency.

Slide 30. General features of explanation-based learning
- Needs only one example.
- Requires complete knowledge about the concept (which makes this learning strategy impractical).
- Improves the agent's efficiency in problem solving.
- Shows the importance of explanations in learning.

Slide 31. Recommended reading
- Mitchell T.M., Machine Learning, Chapter 11: Analytical Learning, McGraw Hill, 1997.
- Mitchell T.M., Keller R.M., Kedar-Cabelli S.T., Explanation-Based Generalization: A Unifying View, Machine Learning 1, 1986. Also in Shavlik J.W. and Dietterich T.G. (eds), Readings in Machine Learning, Morgan Kaufmann, 1990.
- DeJong G., Mooney R., Explanation-Based Learning: An Alternative View, Machine Learning 1, 1986. Also in Shavlik J.W. and Dietterich T.G. (eds), Readings in Machine Learning, Morgan Kaufmann, 1990.
- Tecuci G. and Kodratoff Y., Apprenticeship Learning in Imperfect Domain Theories, in Kodratoff Y. and Michalski R. (eds), Machine Learning: An Artificial Intelligence Approach, vol. 3, Morgan Kaufmann, 1990.
- Minton S., Quantitative Results Concerning the Utility of Explanation-Based Learning, Artificial Intelligence, vol. 42, 1990. Also in Shavlik J.W. and Dietterich T.G. (eds), Readings in Machine Learning, Morgan Kaufmann, 1990.