© 2003, G. Tecuci, Learning Agents Laboratory
Learning Agents Laboratory, Computer Science Department, George Mason University
Prof. Gheorghe Tecuci

10. Multistrategy Learning

Overview

- Introduction
- Combining EBL with Version Spaces
- Induction over Unexplained
- Guiding Induction by Domain Theory
- Plausible Justification Trees
- Research Issues
- Basic references

Multistrategy learning

Multistrategy learning is concerned with developing learning agents that synergistically integrate two or more learning strategies, in order to solve learning tasks that are beyond the capabilities of the individual strategies being integrated.

Case Study: Inductive Learning vs. Explanation-Based Learning

Complementariness of learning strategies:

                             Learning from examples | Explanation-based learning | Multistrategy learning
Examples needed              many                   | one                        | several
Knowledge needed             very little            | complete                   | incomplete
Effect on agent's behavior   improves competence    | improves efficiency        | improves competence and/or efficiency
Type of inference            induction              | deduction                  | induction and/or deduction

Multistrategy concept learning

The Learning Problem:

Input: one or more positive and/or negative examples of a concept.
Background Knowledge (Domain Theory): weak, incomplete, partially incorrect, or complete.
Goal: learn a concept description characterizing the example(s) and consistent with the background knowledge, by combining several learning strategies.

Multistrategy knowledge base refinement

The Learning Problem: improve the knowledge base so that the inference engine correctly solves (classifies) the training examples.

Similar names: background knowledge – domain theory – knowledge base; knowledge base refinement – theory revision.

[Diagram: Training Examples + Knowledge Base (DT) + Inference Engine → Multistrategy Knowledge Base Refinement → Improved Knowledge Base (DT) + Inference Engine]

Types of theory errors (in a rule-based system)

An incorrect KB (theory) can be:
- Overly Specific, due to a Missing Rule or an Additional Premise. Proofs for some positive examples cannot be built (e.g. positive examples that are not round, or that have a handle).
- Overly General, due to an Extra Rule or a Missing Premise. Proofs for some negative examples can be built (e.g. negative examples that are round, or that are insulating but not small).

Example rules for graspable(x):
has-handle(x) → graspable(x)
width(x, small) & insulating(x) → graspable(x)
shape(x, round) → graspable(x)
insulating(x) → graspable(x)
width(x, small) & insulating(x) & shape(x, round) → graspable(x)

What is the effect of each error on the system's ability to classify graspable objects, or other objects that need to be graspable, such as cups?
How would you call a KB where some positive examples are not explained (classified as positive)?
How would you call a KB where some negative examples are wrongly explained (classified as positive)?
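The two error directions can be checked mechanically. A minimal sketch, assuming a propositional encoding in which an example is a set of ground facts and a rule fires when all its premises are present (the feature names are illustrative, not from the slides):

```python
# A minimal sketch (assumed propositional encoding): an example is a
# set of ground facts; a rule fires when all its premises are present.
def fires(premises, example):
    return premises <= example

# Suppose the correct theory is: graspable if has-handle, or if both
# small and insulating.
overly_general = ({"insulating"}, "graspable")                           # missing premise
overly_specific = ({"width-small", "insulating", "round"}, "graspable")  # additional premise

pos = {"width-small", "insulating"}   # graspable, but not round
neg = {"insulating", "width-large"}   # insulating but not small

# The overly general rule wrongly proves the negative example...
assert fires(overly_general[0], neg)
# ...while the overly specific rule fails to prove the positive one.
assert not fires(overly_specific[0], pos)
```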


EBL-VS: Combining EBL with Version Spaces

1. Apply explanation-based learning to generalize the positive and the negative examples.
2. Replace each example that has been generalized with its generalization.
3. Apply the version space method to the new set of examples.

Exercise: produce an abstract illustration of this algorithm.
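The three steps might be sketched as follows. The `RELEVANT` set stands in for the attributes an actual EBL explanation would retain (all names here are illustrative assumptions), and only the specific boundary of the version space is tracked:

```python
# Sketch of the EBL-VS pipeline (hypothetical encoding): steps 1-2
# replace each example by its EBL generalization -- approximated here by
# keeping only the attributes the domain theory marks as relevant -- and
# step 3 induces over the result (specific boundary only).
RELEVANT = {"light", "up-concave"}     # assumed domain-theory output

def ebl_generalize(example):
    # Keep only the attributes that appear in an explanation.
    return {a: v for a, v in example.items() if a in RELEVANT}

def least_general(h1, h2):
    # Most specific hypothesis covering both: drop disagreeing attributes.
    return {a: v for a, v in h1.items() if h2.get(a) == v}

def covers(h, example):
    return all(example.get(a) == v for a, v in h.items())

positives = [
    {"light": True, "up-concave": True, "color": "red"},
    {"light": True, "up-concave": True, "color": "blue"},
]
negatives = [{"light": False, "up-concave": True}]

generalized = [ebl_generalize(e) for e in positives]
s = generalized[0]
for g in generalized[1:]:
    s = least_general(s, g)

# The irrelevant 'color' attribute never enters the version space.
assert s == {"light": True, "up-concave": True}
assert all(not covers(s, n) for n in negatives)
```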

EBL-VS features

Justify the following feature, considering several cases: learns from positive and negative examples.

EBL-VS features

Justify the following feature: can learn with an incomplete background knowledge.

EBL-VS features

Justify the following feature: can learn with different amounts of knowledge, from knowledge-free to knowledge-rich.

EBL-VS features summary and references

- Learns from positive and negative examples
- Can learn with an incomplete background knowledge
- Can learn with different amounts of knowledge, from knowledge-free to knowledge-rich

References:
Hirsh, H., "Combining Empirical and Analytical Learning with Version Spaces," in Proceedings of the Sixth International Workshop on Machine Learning, A.M. Segre (Ed.), Cornell University, Ithaca, New York, June 26-27, 1989.
Hirsh, H., "Incremental Version-Space Merging," in Proceedings of the 7th International Machine Learning Conference, B.W. Porter and R.J. Mooney (Eds.), Austin, TX, 1990.


IOU: Induction Over Unexplained

Justify the following limitation of EBL-VS: it assumes that at least one generalization of an example is correct and complete.

IOU assumes that the knowledge base may be incomplete but correct:
- the explanation-based generalization of an example may be incomplete;
- the knowledge base may explain negative examples.

IOU learns concepts with both explainable and conventional aspects.

IOU method

1. Apply explanation-based learning to generalize each positive example.
2. Disjunctively combine these generalizations (this is the explanatory component Ce).
3. Disregard the negative examples not satisfying Ce, and remove the features mentioned in Ce from all the examples.
4. Apply empirical inductive learning to determine a generalization of the reduced set of simplified examples (this is the non-explanatory component Cn).
5. The learned concept is Ce & Cn.

IOU: illustration

Positive examples of cups: Cup1, Cup2. Negative examples: Shot-Glass1, Mug1, Can1.

Domain theory: incomplete; it contains a definition of a generalization of the concept to be learned (e.g. a definition of drinking vessels, but no definition of cups).

Ce = has-flat-bottom(x) & light(x) & up-concave(x) & {[width(x, small) & insulating(x)] ∨ has-handle(x)}
Ce covers Cup1, Cup2, Shot-Glass1, and Mug1, but not Can1.

Cn = volume(x, small)
Cn covers Cup1 and Cup2, but not Shot-Glass1 or Mug1.

C = Ce & Cn

Mooney, R.J. and Ourston, D., "Induction Over Unexplained: Integrated Learning of Concepts with Both Explainable and Conventional Aspects," in Proceedings of the Sixth International Workshop on Machine Learning, A.M. Segre (Ed.), Cornell University, Ithaca, New York, June 26-27, 1989.
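The illustration can be replayed in code. The per-object feature sets below are an assumption (the slides do not give full feature vectors), chosen so that Ce and Cn cover exactly the objects stated above:

```python
# The cup example encoded as feature sets (feature assignments assumed).
def Ce(x):
    # Explanatory component from EBL over the positives.
    return ({"has-flat-bottom", "light", "up-concave"} <= x and
            ({"width-small", "insulating"} <= x or "has-handle" in x))

examples = {
    "Cup1":        ({"has-flat-bottom", "light", "up-concave", "width-small",
                     "insulating", "volume-small"}, True),
    "Cup2":        ({"has-flat-bottom", "light", "up-concave", "has-handle",
                     "volume-small"}, True),
    "Shot-Glass1": ({"has-flat-bottom", "light", "up-concave", "width-small",
                     "insulating", "volume-tiny"}, False),
    "Mug1":        ({"has-flat-bottom", "light", "up-concave", "has-handle",
                     "volume-large"}, False),
    "Can1":        ({"has-flat-bottom", "light", "volume-small"}, False),
}

# Negatives not covered by Ce (Can1) are disregarded; induction over the
# remaining, Ce-simplified examples yields the non-explanatory part Cn.
def Cn(x):
    return "volume-small" in x

def learned(x):
    return Ce(x) and Cn(x)

# Ce & Cn classifies every training example correctly.
assert all(learned(f) == label for f, label in examples.values())
```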


Enigma: Guiding Induction by Domain Theory

Justify the following limitations of IOU: the knowledge base rules have to be correct, and the examples have to be noise-free.

ENIGMA removes these limitations: the knowledge base rules may be partially incorrect, and the examples may be noisy.

Enigma: method

Enigma trades off the use of knowledge base rules against the coverage of examples:
- Successively specialize the abstract definition D of the concept to be learned by applying KB rules.
- Whenever a specialization of D contains operational predicates, compare it with the examples to identify the covered and the uncovered ones.
- Decide between performing:
  - a KB-based deductive specialization of D, or
  - an example-based inductive modification of D.

The learned concept is a disjunction of the leaves of the specialization tree built.

Enigma: illustration

Examples (4 positive, 4 negative). Positive example 4 (p4):
light(o4) & support(o4, b) & body(o4, a) & above(a, b) & up-concave(o4) → Cup(o4)

Background knowledge:
Liftable(x) & Stable(x) & Open-vessel(x) → Cup(x)
light(x) & has-handle(x) → Liftable(x)
has-flat-bottom(x) → Stable(x)
body(x, y) & support(x, z) & above(y, z) → Stable(x)
up-concave(x) → Open-vessel(x)

The KB is partly overly specific (it explains only p1 and p2) and partly overly general (it explains n3). Operational predicates start with a lower-case letter.
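Enigma's deductive specialization step can be sketched by expanding the abstract definition Cup(x) into purely operational conjunctions. For simplicity the sketch is propositional, so the relational premise body(x, y) & support(x, z) & above(y, z) is collapsed into a single assumed token:

```python
# Sketch of deductive specialization (assumed propositional encoding;
# lower-case names are operational, capitalized names are abstract).
RULES = {
    "Cup":         [["Liftable", "Stable", "Open-vessel"]],
    "Liftable":    [["light", "has-handle"]],
    "Stable":      [["has-flat-bottom"], ["body-above-support"]],
    "Open-vessel": [["up-concave"]],
}

def expand(goal):
    """All fully operational specializations of `goal` under RULES."""
    if goal not in RULES:                       # operational predicate
        return [{goal}]
    leaves = []
    for body in RULES[goal]:
        partials = [set()]
        for literal in body:
            partials = [p | q for p in partials for q in expand(literal)]
        leaves.extend(partials)
    return leaves

leaves = expand("Cup")
p4 = {"light", "body-above-support", "up-concave"}   # positive example p4

# The purely deductive leaves are overly specific: none covers p4 (each
# demands has-handle), so Enigma would switch to an inductive
# modification, e.g. dropping the has-handle premise.
assert all(not (leaf <= p4) for leaf in leaves)
assert any((leaf - {"has-handle"}) <= p4 for leaf in leaves)
```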

Enigma: illustration (cont.)

[Specialization tree figure: inductive modifications are made to cover p3 and p4, and to uncover n2 and n3.]

Classification is based only on operational features.

Learned concept

light(x) & has-flat-bottom(x) & has-small-bottom(x) → Cup(x)   (covers p1, p3)

light(x) & body(x, y) & support(x, z) & above(y, z) & up-concave(x) → Cup(x)   (covers p2, p4)

Application

Diagnosis of faults in electro-mechanical devices through an analysis of their vibrations: 209 examples and 6 classes. A typical example consists of 20 to 60 noisy measurements taken at different points and under different conditions of the device.

A learned rule:
IF   the shaft rotating frequency is w0, and
     the harmonic at w0 has high intensity, and
     the harmonic at 2w0 has high intensity in at least two measurements
THEN the example is an instance of C1 (problems in the joint), C4 (basement distortion), or C5 (unbalance)

Application (cont.)

Comparison between the KB learned by ENIGMA and the hand-coded KB of the expert system MEPS.

References:
Bergadano, F., Giordana, A. and Saitta, L., "Automated Concept Acquisition in Noisy Environments," IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(4).
Bergadano, F., Giordana, A., Saitta, L., De Marchi, D. and Brancadori, F., "Integrated Learning in a Real Domain," in Proceedings of the 7th International Machine Learning Conference, B.W. Porter and R.J. Mooney (Eds.), Austin, TX, 1990.
Bergadano, F. and Giordana, A., "Guiding Induction with Domain Theories," in Machine Learning: An Artificial Intelligence Approach, Volume 3, Y. Kodratoff and R.S. Michalski (Eds.), San Mateo, CA, Morgan Kaufmann, 1990.


MTL-JT: Multistrategy Task-adaptive Learning based on Plausible Justification Trees

Deep integration of learning strategies: integration of the elementary inferences that are employed by the single-strategy learning methods (e.g. deduction, analogy, empirical inductive prediction, abduction, deductive generalization, inductive generalization, inductive specialization, analogy-based generalization).

Dynamic integration of learning strategies: the order and the type of the integrated strategies depend on the relationship between the input information, the background knowledge, and the learning goal.

Handles different types of input (e.g. facts, concept examples, problem solving episodes) and different types of knowledge pieces in the knowledge base (e.g. facts, examples, implicative relationships, plausible determinations).

MTL-JT: assumptions

Input: correct (noise-free); one or several examples, facts, or problem solving episodes.
Knowledge base: incomplete and/or partially incorrect; may include a variety of knowledge types (facts, examples, implicative or causal relationships, hierarchies, etc.).
Learning goal: extend, update, and/or improve the knowledge base so as to integrate the new input information.

Plausible justification tree

A plausible justification tree is like a proof tree, except that some of the individual inference steps are deductive, while others are non-deductive, or only plausible (e.g. analogical, abductive, inductive).

Learning method

For the first positive example I1:
- build a plausible justification tree T of I1;
- build the plausible generalization Tu of T;
- generalize the KB to entail Tu.

For each new positive example Ii:
- generalize Tu so as to cover a plausible justification tree of Ii;
- generalize the KB to entail the new Tu.

For each new negative example Ii:
- specialize Tu so as not to cover any plausible justification of Ii;
- specialize the KB to entail the new Tu without entailing the previous Tu.

Learn different concept definitions:
- extract different concept definitions from the general justification tree Tu.

 2003, G.Tecuci, Learning Agents Laboratory 31 Facts: terrain(Philippines, flat), rainfall(Philippines, heavy), water-in-soil(Philippines, high) Examples (of fertile soil): soil(Greece, red-soil)  soil(Greece, fertile-soil) terrain(Egypt, flat) & soil(Egypt, red-soil)  soil(Egypt, fertile-soil) Plausible determination: rainfall(x, y) >= water-in-soil(x, z) Deductive rules: soil(x, loamy)  soil(x, fertile-soil) climate(x, subtropical)  temperature(x, warm) climate(x, tropical)  temperature(x, warm) water-in-soil(x, high) & temperature(x, warm) & soil(x, fertile-soil)  grows(x, rice) MTL-JT: illustration from Geography Knowledge Base

Positive and negative examples of "grows(x, rice)"

Positive Example 1:
rainfall(Thailand, heavy) & climate(Thailand, tropical) & soil(Thailand, red-soil) & terrain(Thailand, flat) & location(Thailand, SE-Asia) → grows(Thailand, rice)

Positive Example 2:
rainfall(Pakistan, heavy) & climate(Pakistan, subtropical) & soil(Pakistan, loamy) & terrain(Pakistan, flat) & location(Pakistan, SW-Asia) → grows(Pakistan, rice)

Negative Example 3:
rainfall(Jamaica, heavy) & climate(Jamaica, tropical) & soil(Jamaica, loamy) & terrain(Jamaica, abrupt) & location(Jamaica, Central-America) → ¬grows(Jamaica, rice)

Build a plausible justification tree of the first example

Example 1:
rainfall(Thailand, heavy) & soil(Thailand, red-soil) & terrain(Thailand, flat) & location(Thailand, SE-Asia) & climate(Thailand, tropical) → grows(Thailand, rice)

Build a plausible justification tree of the first example (cont.)

Justify the inferences from the above tree: analogy.

Facts: terrain(Philippines, flat), rainfall(Philippines, heavy), water-in-soil(Philippines, high)
Plausible determination: rainfall(x, y) >= water-in-soil(x, z)
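The analogical step infers water-in-soil(Thailand, high) from the determination and the Philippines facts. A toy sketch (the encoding is an assumption; only attribute-value facts are modeled):

```python
# Sketch of analogy under a determination (assumed encoding): since
# rainfall determines water-in-soil, a source entity with the same
# rainfall value plausibly has the same water-in-soil value.
facts = {("rainfall", "Philippines"): "heavy",
         ("water-in-soil", "Philippines"): "high",
         ("rainfall", "Thailand"): "heavy"}

def analogize(target, attr="water-in-soil", det="rainfall"):
    """Copy `attr` from a source that matches the target on `det`."""
    for (pred, entity), value in facts.items():
        if (pred == det and entity != target and
                value == facts.get((det, target)) and
                (attr, entity) in facts):
            return facts[(attr, entity)]
    return None

# Thailand and the Philippines share heavy rainfall, so by analogy:
assert analogize("Thailand") == "high"
```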

Build a plausible justification tree of the first example (cont.)

Justify the inferences from the above tree: deduction.

Deductive rules:
soil(x, loamy) → soil(x, fertile-soil)
climate(x, subtropical) → temperature(x, warm)
climate(x, tropical) → temperature(x, warm)
water-in-soil(x, high) & temperature(x, warm) & soil(x, fertile-soil) → grows(x, rice)

Build a plausible justification tree of the first example (cont.)

Justify the inferences from the above tree: inductive prediction and abduction.

Examples (of fertile soil):
soil(Greece, red-soil) → soil(Greece, fertile-soil)
terrain(Egypt, flat) & soil(Egypt, red-soil) → soil(Egypt, fertile-soil)

Multitype generalization

Multitype generalization (cont.)

Justify the generalizations from the above tree: generalization based on analogy.

Facts: terrain(Philippines, flat), rainfall(Philippines, heavy), water-in-soil(Philippines, high)
Plausible determination: rainfall(x, y) >= water-in-soil(x, z)

Multitype generalization (cont.)

Justify the generalizations from the above tree: inductive generalization.

Examples (of fertile soil):
soil(Greece, red-soil) → soil(Greece, fertile-soil)
terrain(Egypt, flat) & soil(Egypt, red-soil) → soil(Egypt, fertile-soil)

Build the plausible generalization Tu of T

Positive example 2

Instance of the current Tu corresponding to Example 2. Plausible justification tree T2 of Example 2:

Positive example 2 (cont.)

The explanation structure S2. The new Tu:

Negative example 3

Instance of Tu corresponding to the Negative Example 3. The new Tu:

The plausible generalization tree corresponding to the three input examples

Learned knowledge

New facts: water-in-soil(Thailand, high), water-in-soil(Pakistan, high)

Why is it reasonable to consider these facts to be true?

Learned knowledge (cont.)

From the examples (of fertile soil):
soil(Greece, red-soil) → soil(Greece, fertile-soil)
terrain(Egypt, flat) & soil(Egypt, red-soil) → soil(Egypt, fertile-soil)

New plausible rule: soil(x, red-soil) → soil(x, fertile-soil)

 2003, G.Tecuci, Learning Agents Laboratory 47 Facts: terrain(Philippines, flat), rainfall(Philippines, heavy), water-in-soil(Philippines, high) Learned knowledge Specialized plausible determination: rainfall(x, y) & terrain(x, flat) >= water-in-soil(x, z) Positive Example 1: rainfall(Thailand, heavy) & climate(Thailand, tropical) & soil(Thailand, red-soil) & terrain(Thailand, flat) & location(Thailand, SE-Asia)  grows(Thailand, rice) Positive Example 2: rainfall(Pakistan, heavy) & climate(Pakistan, subtropical) & soil(Pakistan, loamy) & terrain(Pakistan, flat) & location(Pakistan, SW-Asia)  grows(Pakistan, rice) Negative Example 3: rainfall(Jamaica, heavy) & climate(Jamaica, tropical) & soil(Jamaica, loamy) & terrain(Jamaica, abrupt) & location(Jamaica, Central-America)  ¬ grows(Jamaica, rice)

Learned knowledge: concept definitions

Operational definition of "grows(x, rice)":
rainfall(x, heavy) & terrain(x, flat) & [climate(x, tropical) ∨ climate(x, subtropical)] & [soil(x, red-soil) ∨ soil(x, loamy)] → grows(x, rice)

Abstract definition of "grows(x, rice)":
water-in-soil(x, high) & temperature(x, warm) & soil(x, fertile-soil) → grows(x, rice)
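The operational definition can be replayed against the three training examples (dict encoding assumed):

```python
# The learned operational definition, checked against the training data.
def grows_rice(x):
    return (x.get("rainfall") == "heavy"
            and x.get("terrain") == "flat"
            and x.get("climate") in ("tropical", "subtropical")
            and x.get("soil") in ("red-soil", "loamy"))

thailand = {"rainfall": "heavy", "climate": "tropical",
            "soil": "red-soil", "terrain": "flat"}
pakistan = {"rainfall": "heavy", "climate": "subtropical",
            "soil": "loamy", "terrain": "flat"}
jamaica  = {"rainfall": "heavy", "climate": "tropical",
            "soil": "loamy", "terrain": "abrupt"}

# Both positives are covered; the negative fails on terrain(x, flat).
assert grows_rice(thailand) and grows_rice(pakistan)
assert not grows_rice(jamaica)
```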

Learned knowledge: example abstraction

Abstraction of Example 1:
water-in-soil(Thailand, high) & temperature(Thailand, warm) & soil(Thailand, fertile-soil) → grows(Thailand, rice)

Features of the MTL-JT method and reference

- Is general and extensible
- Integrates dynamically different elementary inferences
- Uses different types of generalizations
- Is able to learn from different types of input
- Is able to learn different types of knowledge
- Exhibits synergistic behavior
- May behave as any of the integrated strategies

Tecuci, G., "An Inference-Based Framework for Multistrategy Learning," in Machine Learning: A Multistrategy Approach, Volume 4, R.S. Michalski and G. Tecuci (Eds.), San Mateo, CA, Morgan Kaufmann, 1994.

Features of the MTL-JT method

Justify the following feature: integrates dynamically different elementary inferences.

Features of the MTL-JT method (cont.)

Justify the following feature: may behave as any of the integrated strategies:
- Explanation-based learning
- Multiple-example explanation-based learning
- Learning by abduction
- Learning by analogy
- Inductive learning from examples

Which strategies should we consider for the presented illustration of MTL-JT?

MTL-JT as explanation-based learning

Assume the KB would contain the knowledge:
∀x, rainfall(x, heavy) → water-in-soil(x, high)
∀x, soil(x, red-soil) → soil(x, fertile-soil)

MTL-JT as abductive learning

Assume the KB would contain the knowledge:
∀x, rainfall(x, heavy) → water-in-soil(x, high)

MTL-JT as inductive learning from examples

MTL-JT as analogical learning

Suppose that the KB contains only the following knowledge related to Example 1:
Facts: rainfall(Philippines, heavy), water-in-soil(Philippines, high)
Determination: rainfall(x, y) --> water-in-soil(x, z)

Then the system can only infer "water-in-soil(Thailand, high)", by analogy with "water-in-soil(Philippines, high)". In this case, the MTL-JT method reduces to analogical learning.


Research issues in multistrategy learning

- Comparisons of learning strategies
- New ways of integrating learning strategies
- Synergistic integration of a wide range of learning strategies
- The representation and use of learning goals in multistrategy systems
- Dealing with incomplete or noisy examples
- Evaluation of the certainty of the learned knowledge
- General frameworks for multistrategy learning
- More comprehensive theories of learning
- Investigation of human learning as multistrategy learning
- Integration of multistrategy learning and knowledge acquisition
- Integration of multistrategy learning and problem solving

Exercise

Compare the following learning strategies from the point of view of their input, background knowledge, type of inferences performed, and effect on the system's performance:
- Rote learning
- Inductive learning from examples
- Explanation-based learning
- Abductive learning
- Analogical learning
- Instance-based learning
- Case-based learning

Exercise

Identify general frameworks for multistrategy learning, based on the multistrategy learning methods presented.

Basic references

Proceedings of the International Conferences on Machine Learning (ICML-87 through ICML-04), Morgan Kaufmann, San Mateo.
Proceedings of the International Workshops on Multistrategy Learning: MSL-91, MSL-93, MSL-96, MSL-98.
Special Issue on Multistrategy Learning, Machine Learning Journal.
Special Issue on Multistrategy Learning, Informatica, vol. 17, no. 4.
Machine Learning: A Multistrategy Approach, Volume IV, Michalski R.S. and Tecuci G. (Eds.), Morgan Kaufmann, San Mateo, 1994.