1
Learning Agents Prof. Gheorghe Tecuci Learning Agents Laboratory
Computer Science Department George Mason University
2
Instructable agents: the Disciple approach
Overview Learning strategies Instructable agents: the Disciple approach Basic bibliography
3
Learning strategies Introduction Inductive learning from examples
Deductive (explanation-based) learning Analogical learning Abductive learning Multistrategy learning
4
What is Machine Learning?
Machine Learning is the domain of Artificial Intelligence which is concerned with building adaptive computer systems that are able to improve their competence and/or efficiency through learning from input data or from their own problem solving experience. What does it mean to improve competence? What does it mean to improve efficiency?
5
Two complementary dimensions for learning
Competence. A system is improving its competence if it learns to solve a broader class of problems and to make fewer mistakes in problem solving. Efficiency. A system is improving its efficiency if it learns to solve the problems from its area of competence faster or by using fewer resources.
6
The architecture of a learning agent
[Diagram: a learning agent receives input from the user/environment through sensors and produces output through effectors; it contains a problem solving engine, a learning engine, and a knowledge base consisting of an ontology and rules/cases/methods.]
Problem solving engine: implements a general problem solving method that uses the knowledge from the knowledge base to interpret the input and provide an appropriate output.
Learning engine: implements learning methods for extending and refining the knowledge base, to improve the agent’s competence and/or efficiency in problem solving.
Knowledge base: data structures that represent the objects from the application domain, general laws governing them, actions that can be performed with them, etc.
7
Learning strategies. A learning strategy is a basic form of learning characterized by the employment of a certain type of inference (like deduction, induction or analogy) and a certain type of computational or representational mechanism (like rules, trees, neural networks, etc.). Strategies include: rote learning, learning from instruction, learning from examples, explanation-based learning, conceptual clustering, quantitative discovery, abductive learning, learning by analogy, instance-based learning, case-based learning, neural networks, genetic algorithms and evolutionary computation, reinforcement learning, Bayesian learning, and multistrategy learning.
8
Successful applications of Machine Learning
Learning to recognize spoken words (all of the most successful systems use machine learning); Learning to drive an autonomous vehicle on public highways; Learning to classify new astronomical structures (by learning regularities in a very large database of image data); Learning to play games; Automation of knowledge acquisition from domain experts; Instructable agents.
9
Basic ontological elements: instances and concepts. An instance is a representation of a particular entity from the application domain. A concept is a representation of a set of instances.
[Diagram: government_of_US_1943 and government_of_Britain_1943 are each linked by instance_of to state_government.]
“instance_of” is the relationship between an instance and the concept to which it belongs. “state_government” represents the set of all entities that are governments of states. This set includes “government_of_US_1943” and “government_of_Britain_1943”, which are called positive examples. An entity which is not an instance of a concept is called a negative example of that concept.
10
Concept generality. A concept P is more general than another concept Q if and only if the set of instances represented by P includes the set of instances represented by Q.
[Diagram example: state_government is more general than democratic_government and totalitarian_government; democratic_government is more general than representative_democracy and parliamentary_democracy.]
“subconcept_of” is the relationship between a concept and a more general concept (e.g., democratic_government subconcept_of state_government).
11
A generalization hierarchy
[Diagram: a generalization hierarchy rooted at governing_body, with subconcepts such as ad_hoc_governing_body, established_governing_body, other_type_of_governing_body, state_government and group_governing_body. state_government specializes into democratic_government (representative_democracy, parliamentary_democracy, democratic_council_or_board), monarchy, theocratic_government, autocratic_leader, totalitarian_government (police_state, military_dictatorship, fascist_state, religious_dictatorship, theocratic_democracy, communist_dictatorship), feudal_god_king_government, dictator, deity_figure, chief_and_tribal_council and other_state_government. Leaf instances include government_of_US_1943, government_of_Britain_1943, government_of_Italy_1943, government_of_Germany_1943 and government_of_USSR_1943.]
12
Learning strategies Introduction Inductive learning from Examples
Deductive (explanation-based) learning Analogical learning Abductive learning Multistrategy learning
13
Empirical inductive concept learning from examples
Illustration. Given positive examples of cups and negative examples of cups, learn a description of the cup concept: has-handle(x), ...
Approach: compare the positive and the negative examples of a concept, in terms of their similarities and differences, and learn the concept as a generalized description of the similarities of the positive examples. This allows the agent to recognize other entities as being instances of the learned concept.
14
The learning problem
Given: a language of instances; a language of generalizations; a set of positive examples (E1, ..., En) of a concept; a set of negative examples (C1, ..., Cm) of the same concept; a learning bias; other background knowledge.
Determine: a concept description which is a generalization of the positive examples and does not cover any of the negative examples.
Purpose of concept learning: predict whether an instance is an example of the learned concept.
15
Generalization and specialization rules
Learning a concept from examples is based on generalization and specialization rules: a generalization rule transforms an expression into a more general expression; a specialization rule transforms an expression into a less general expression.
Exercise: indicate various generalizations of the following sentence: “Students who have lived in Fairfax for 3 years.”
16
Generalization (and specialization) rules: turning constants into variables; climbing the generalization hierarchy; dropping a condition; generalizing numbers; adding alternatives.
17
Climbing/descending the generalization hierarchy. Generalizes an expression by replacing a concept with a more general one (descending replaces it with a less general one), e.g., replacing representative_democracy with democratic_government.
The set of single state forces governed by representative democracies:
?O1 is single_state_force, has_as_governing_body ?O2
?O2 is representative_democracy
generalizes (by climbing from representative_democracy to democratic_government) to the set of single state forces governed by democracies:
?O1 is single_state_force, has_as_governing_body ?O2
?O2 is democratic_government
Specialization reverses the step.
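The slides contain no code, but the climbing rule is easy to illustrate. The following is a minimal Python sketch, assuming a hand-written parent table; the table, the function names and the example expression are illustrative, not taken from Disciple.

```python
# Illustrative sketch of "climbing the generalization hierarchy".
# PARENT is a hand-written fragment of the hierarchy shown on the earlier slide.

PARENT = {
    "parliamentary_democracy": "representative_democracy",
    "representative_democracy": "democratic_government",
    "democratic_government": "state_government",
}

def generalize_concept(concept: str) -> str:
    """Replace a concept with its direct parent (one generalization step)."""
    return PARENT.get(concept, concept)

def specialize_concept(concept: str, child: str) -> str:
    """Descend one step, provided `child` really is a direct sub-concept."""
    return child if PARENT.get(child) == concept else concept

# Generalizing the expression "?O2 is representative_democracy"
expr = {"?O2": "representative_democracy"}
expr["?O2"] = generalize_concept(expr["?O2"])
print(expr)   # {'?O2': 'democratic_government'}
```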
18
Basic idea of version space concept learning
Consider the examples E1, …, En in sequence.
Initialize the lower bound to the first positive example (LB = E1) and the upper bound (UB) to the most general generalization of E1.
If the next example is a positive one, generalize LB as little as possible to cover it.
If the next example is a negative one, specialize UB as little as possible to uncover it while remaining more general than LB.
Repeat the above two steps with the rest of the examples until UB = LB. This is the learned concept.
19
The candidate elimination algorithm
1. Initialize S to the first positive example and G to its most general generalization.
2. Accept a new training instance I.
If I is a positive example: remove from G all the concepts that do not cover I; generalize the elements in S as little as possible to cover I but remain less general than some concept in G; keep in S the minimally general concepts.
If I is a negative example: remove from S all the concepts that cover I; specialize the elements in G as little as possible to uncover I while remaining more general than at least one element of S; keep in G the maximally general concepts.
3. Repeat step 2 until G = S and they contain a single concept C (this is the learned concept).
20
Illustration of the candidate elimination algorithm
Language of instances: (shape, size), with shape in {ball, brick, cube} and size in {large, small}.
Language of generalizations: (shape, size), with shape in {ball, brick, cube, any-shape} and size in {large, small, any-size}.
Input examples (shape, size, class): (ball, large, +), (brick, small, −), (cube, large, −), (ball, small, +).
Learning process:
1. +(ball, large): S = {(ball, large)}, G = {(any-shape, any-size)}
2. −(brick, small): G = {(ball, any-size), (any-shape, large)}
3. −(cube, large): G = {(ball, any-size)}
4. +(ball, small): S = {(ball, any-size)} = G
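This trace can be reproduced with a few lines of Python. The sketch below is a simplified variant that keeps a single specific bound S (as on the slide) and a set G of maximally general hypotheses; the function names and data layout are mine, not from any particular library.

```python
# A compact sketch of candidate elimination on this slide's toy (shape, size) domain.

ANY = "any"

def covers(h, x):
    """h covers x if every attribute of h is 'any' or equals x's value.
    The same test doubles as a more-general-or-equal check between hypotheses."""
    return all(hv == ANY or hv == xv for hv, xv in zip(h, x))

def min_generalize(h, x):
    """Generalize h just enough to cover the positive example x."""
    return tuple(hv if hv == xv else ANY for hv, xv in zip(h, x))

def min_specializations(g, x, values):
    """All one-step specializations of g that exclude the negative example x."""
    return [g[:i] + (v,) + g[i + 1:]
            for i, gv in enumerate(g) if gv == ANY
            for v in values[i] if v != x[i]]

values = [("ball", "brick", "cube"), ("large", "small")]
examples = [(("ball", "large"), True), (("brick", "small"), False),
            (("cube", "large"), False), (("ball", "small"), True)]

S = examples[0][0]            # lower bound: the first positive example
G = [(ANY, ANY)]              # upper bound: the most general hypothesis
for x, positive in examples[1:]:
    if positive:
        S = min_generalize(S, x)
        G = [g for g in G if covers(g, S)]
    else:
        G = [h for g in G
             for h in ([g] if not covers(g, x)
                       else [s for s in min_specializations(g, x, values)
                             if covers(s, S)])]
        G = [g for g in G if not any(h != g and covers(h, g) for h in G)]

print("S =", S, " G =", G)    # S = ('ball', 'any')  G = [('ball', 'any')]
```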
21
General features of the empirical inductive methods
Require many examples. Do not need much domain knowledge. Improve the competence of the agent. The version space method relies on an exhaustive bi-directional search and is computationally intensive, which limits its practical applicability. Practical empirical inductive learning methods (such as ID3) rely on heuristic search to hypothesize the concept. Learning a good concept description through this strategy requires a very large set of positive and negative examples. On the other hand, this is the only information the agent needs: it does not require prior knowledge to perform this type of learning. The result of this learning strategy is an increase in the problem solving competence of the agent. Indeed, the agent will learn to do things it was not able to do before; for instance, it will learn to recognize cups. Most empirical inductive learning algorithms use heuristic methods to search a very large space of possible learned concepts. The version space method, which performs an exhaustive rather than heuristic search and assumes that the representation language is complete and that a single target concept exists, has great theoretical value but more limited practical relevance.
22
Learning strategies Introduction Inductive learning from Examples
Deductive (explanation-based) learning Analogical learning Abductive learning Multistrategy learning
23
Explanation-based learning problem
Given
A training example: cup(o1) ← color(o1, white), made-of(o1, plastic), light-mat(plastic), has-handle(o1), has-flat-bottom(o1), up-concave(o1), ...
Learning goal: a specification of the desirable features of the concept to be learned (e.g., the learned concept should have only features from the example).
Background knowledge: complete knowledge that allows proving that the training example represents the concept:
cup(x) ← liftable(x), stable(x), open-vessel(x).
liftable(x) ← light(x), graspable(x).
stable(x) ← has-flat-bottom(x).
…
Determine
A concept definition representing a deductive generalization of the training example that satisfies the learning goal:
cup(x) ← made-of(x, y), light-mat(y), has-handle(x), has-flat-bottom(x), up-concave(x).
Purpose of learning: improve the problem solving efficiency of the agent.
24
Explanation-based learning method
1. Construct an explanation that proves that the training example is an example of the concept to be learned.
2. Generalize the explanation as much as possible so that the proof still holds, and extract from it a concept definition that satisfies the learning goal.
An example of a cup, cup(o1): color(o1, white), made-of(o1, plastic), light-mat(plastic), has-handle(o1), has-flat-bottom(o1), up-concave(o1), ...
[Proof tree: cup(o1) is proved from stable(o1) and liftable(o1); liftable(o1) from light(o1) and graspable(o1); light(o1) from made-of(o1, plastic) and light-mat(plastic); graspable(o1) from has-handle(o1). The generalized proof has the same structure, with o1 replaced by x and plastic by y.]
The proof identifies the characteristic features: has-handle(o1) is needed to prove cup(o1); color(o1, white) is not needed to prove cup(o1); made-of(o1, plastic) is needed to prove cup(o1). Proof generalization generalizes them: made-of(o1, plastic) is generalized to made-of(x, y); the material need not be plastic.
The goal of this learning strategy is to improve the efficiency of problem solving. The agent is able to perform some task, but in an inefficient way. We would like to teach the agent to perform the task faster. Consider, for instance, an agent that is able to recognize cups. The agent receives a description of a cup that includes many features. The agent will recognize that this object is a cup by performing a complex reasoning process, based on its prior knowledge. This process is illustrated by the proof tree on the left-hand side of this slide. The object o1 is made of plastic, which is a light material; therefore o1 is light. o1 has a handle and therefore it is graspable. Being light and graspable, it is liftable. And so on: being liftable, stable and an open vessel, it is a cup. However, the agent can learn from this process to recognize a cup faster. Notice that the agent used the fact that o1 has a handle in order to prove that o1 is a cup. This means that having a handle is an important feature. On the other hand, the agent did not use the color of o1 to prove that o1 is a cup. This means that color is not important. Notice how the agent reaches the same conclusions as in learning from examples, but through a different line of reasoning and based on a different type of information. The next step in the learning process is to generalize the tree on the left-hand side into the tree on the right-hand side. While the tree on the left proves that the specific object o1 is a cup, the tree on the right shows that any object x that is made of some light material y, has a handle and has some other features is a cup. Therefore, to recognize that an object o2 is a cup, the agent only needs to look for the presence of the features discovered as important. It no longer needs to build a complex proof tree, so cup recognition is done much faster. Finally, notice that the agent needs only one example to learn from. However, it needs a lot of prior knowledge to prove that this example is a cup. Providing such prior knowledge to the agent is a very complex task.
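To make the efficiency argument concrete, here is a toy before/after sketch in Python. The feature names follow the slide; the functions, the example dictionary and the set of light materials are illustrative assumptions, and in an example this small the speed-up is only symbolic — the point is that the generalized proof is compiled into a single operational rule.

```python
# Before/after sketch of explanation-based learning for cup recognition.
# All names below are illustrative; the deck itself contains no code.

example_o1 = {"color": "white", "made_of": "plastic", "has_handle": True,
              "has_flat_bottom": True, "up_concave": True}
LIGHT_MATERIALS = {"plastic", "paper"}          # assumed light-mat facts

# Before learning: recognize a cup by chaining the domain-theory rules.
def liftable(obj):    return obj["made_of"] in LIGHT_MATERIALS and obj["has_handle"]
def stable(obj):      return obj["has_flat_bottom"]
def open_vessel(obj): return obj["up_concave"]
def cup_by_proof(obj):
    return liftable(obj) and stable(obj) and open_vessel(obj)

# After EBL: one operational rule that tests only the features the
# generalized proof actually used; color never appears.
def cup_learned(obj):
    return (obj["made_of"] in LIGHT_MATERIALS and obj["has_handle"]
            and obj["has_flat_bottom"] and obj["up_concave"])

assert cup_by_proof(example_o1) == cup_learned(example_o1) == True
```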
25
Discussion
How does this learning method improve the efficiency of the problem solving process? A cup is recognized by using a single rule rather than by building a proof tree.
Do we need a training example to learn an operational definition of the concept? Why? The learner does not need a training example: it could simply build proof trees top-down, starting with an abstract definition of the concept and growing the tree until the leaves are operational features. However, without a training example the learner would learn many operational definitions. The training example focuses the learner on the most typical one.
26
General features of explanation-based learning
• Needs only one example • Requires complete knowledge about the concept (which makes this learning strategy impractical). • Improves agent's efficiency in problem solving • Shows the importance of explanations in learning
27
Learning strategies Introduction Inductive learning from Examples
Deductive (explanation-based) learning Analogical learning Abductive learning Multistrategy learning
28
Learning by analogy Learning by analogy means acquiring new knowledge about an input entity by transferring it from a known similar entity. The hydrogen atom is like our solar system. The Sun has a greater mass than the Earth and attracts it, causing the Earth to revolve around the Sun. The nucleus also has a greater mass than the electron and attracts it. Therefore it is plausible that the electron also revolves around the nucleus.
29
Discussion Examples of analogies: Pressure Drop is like Voltage Drop
A variable in a programming language is like a box. Provide other examples of analogies. What is the central intuition supporting learning by analogy? If two entities are similar in some respects, then they could be similar in other respects as well.
30
Learning by analogy: the learning problem
Given:
• A partially known target entity T and a goal concerning it (e.g., the partially understood structure of the hydrogen atom under study).
• Background knowledge containing known entities (knowledge from different domains, including astronomy, geography, etc.).
Find:
• New knowledge about T obtained from a source entity S belonging to the background knowledge (e.g., in a hydrogen atom the electron revolves around the nucleus, in a way similar to that in which a planet revolves around the sun).
31
Learning by analogy: the learning method
• ACCESS: find a known entity that is analogous with the input entity (based on what is known about the hydrogen atom and the solar system, identify the solar system as a source for the hydrogen atom).
• MATCHING: match the two entities and hypothesize knowledge (map the nucleus to the sun and the electron to the planet, allowing one to infer that the electron revolves around the nucleus because the nucleus attracts the electron and the mass of the nucleus is greater than the mass of the electron).
• EVALUATION: test the hypotheses (a specially designed experiment shows that indeed the electron revolves around the nucleus).
• LEARNING: store or generalize the new knowledge (store that, in a hydrogen atom, the electron revolves around the nucleus; by generalization from the solar system and the hydrogen atom, learn the abstract concept that a central force can cause revolution).
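The ACCESS/MATCHING/transfer steps for the solar-system analogy can be sketched with a few relation tuples. The relations, the fixed mapping and the variable names below are hand-built illustrations, not output of an actual analogy engine.

```python
# A hand-built sketch of analogical transfer for the hydrogen atom example.

source = {                              # known entity: the solar system
    ("greater_mass", "sun", "earth"),
    ("attracts", "sun", "earth"),
    ("revolves_around", "earth", "sun"),
}
target = {                              # partially known entity: the hydrogen atom
    ("greater_mass", "nucleus", "electron"),
    ("attracts", "nucleus", "electron"),
}
mapping = {"sun": "nucleus", "earth": "electron"}   # produced by MATCHING

# Transfer every source relation whose mapped form is not yet known about the target.
hypotheses = {(rel, mapping.get(a, a), mapping.get(b, b))
              for rel, a, b in source} - target
print(hypotheses)   # {('revolves_around', 'electron', 'nucleus')}
# EVALUATION (e.g. an experiment) would then test the hypothesis before LEARNING stores it.
```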
32
Discussion How does analogy help?
Why not just study the structure of the hydrogen atom to discover that new knowledge? We need in any case to perform an experiment to test that the electron revolves around the nucleus. How could we define problem solving by analogy? How does analogy help to solve new problems?
33
General features of analogical learning
• Requires a huge amount of background knowledge from a wide variety of domains. • Improves the agent's competence through knowledge base refinement. • Reduces solution finding to solution verification. • Is a very powerful reasoning method when knowledge is incomplete.
34
Learning strategies Introduction Inductive learning from Examples
Deductive (explanation-based) learning Analogical learning Abductive learning Multistrategy learning
35
Abductive learning
The learning problem: find the hypothesis that best explains an observation (or collection of data) and assume it to be true.
Illustration: There is smoke in the East building. Fire causes smoke. Hypothesize that there is a fire in the East building.
The learning method:
• Let D be a collection of data.
• Find all the hypotheses that (causally) explain D.
• Find the hypothesis H that explains D better than the other hypotheses.
• Assert that H is true.
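The abductive step itself is small enough to sketch directly. In the sketch below, the causal rules, the prior scores and the function name are made-up illustrations of "explains D better than other hypotheses"; a real system would use its background knowledge to rank explanations.

```python
# Minimal sketch of abduction: pick the hypothesis that best explains an observation.

causes = {                       # hypothesis -> observations it (causally) explains
    "fire in the East building": {"smoke in the East building"},
    "smoke machine test":        {"smoke in the East building"},
}
prior = {"fire in the East building": 0.7, "smoke machine test": 0.1}   # assumed scores

def abduce(observation):
    """Return the best-scoring hypothesis that explains the observation, if any."""
    candidates = [h for h, effects in causes.items() if observation in effects]
    return max(candidates, key=lambda h: prior.get(h, 0.0)) if candidates else None

print(abduce("smoke in the East building"))   # fire in the East building
```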
36
Abduction (cont.) Consider the observation: “University Dr. is wet”
Use abductive learning. Raining causes the streets to be wet. I hypothesize that it has rained on University Dr. What are other potential explanations? Provide other examples of abductive reasoning. What real world applications of abductive reasoning can you imagine?
37
Learning strategies Introduction Inductive learning from Examples
Deductive (explanation-based) learning Analogical learning Abductive learning Multistrategy learning
38
Multistrategy learning
Multistrategy learning is concerned with developing learning agents that synergistically integrate two or more learning strategies in order to solve learning tasks that are beyond the capabilities of the individual learning strategies that are integrated.
39
Complementariness of learning strategies

                             Learning from examples | Explanation-based learning | Multistrategy learning
Examples needed              many                   | one                        | several
Knowledge needed             very little            | complete knowledge         | incomplete knowledge
Type of inference            induction              | deduction                  | induction and/or deduction
Effect on agent's behavior   improves competence    | improves efficiency        | improves competence and/or efficiency
40
Instructable agents: the Disciple approach
Overview Learning strategies Instructable agents: the Disciple approach Basic bibliography
41
Instructable agents: the Disciple approach
Introduction Modeling expert’s reasoning Rule learning Integrated modeling, learning, solving Application and evaluation Multi-agent problem solving and learning
42
How are agents built and why it is hard
The knowledge engineer attempts to understand how the subject matter expert reasons and solves problems and then encodes the acquired expertise into the system's knowledge base. This modeling and representation of expert knowledge is long, painful and inefficient. Why?
43
The Disciple approach: Problem statement
Elaborate a theory and methodology for the mixed-initiative, end-to-end development of knowledge bases and agents by subject matter experts, with limited assistance from knowledge engineers.
Approach: mixed-initiative reasoning between an expert who has the knowledge to be formalized and a learning agent shell that knows how to formalize it. The expert teaches the agent to perform various tasks in a way that resembles how the expert would teach a person. The agent learns from the expert, building, verifying and improving its knowledge base.
[Diagram: the learning agent shell consists of an interface, problem solving and learning modules, and a knowledge base of ontology + rules.]
How does this approach address the knowledge acquisition bottleneck?
44
Vision on the evolution of computer systems
Mainframe computers: software systems developed and used by computer experts.
Personal computers: software systems developed by computer experts and used by persons who are not computer experts.
Learning agents: software systems developed and used by persons who are not computer experts.
45
General architecture of Disciple-RKF
[Diagram: modules of Disciple-RKF, including an Intelligent User Interface (Modeling, Scenario Elicitation, Mixed-Initiative Manager, Ontology Editors and Browsers, Natural Language Generation), a Teaching, Learning and Problem Solving component (Task Learning, Rule Learning, Rule Refinement, Mixed-Initiative Problem Solving, Autonomous Problem Solving), and a Knowledge Base Management component (Ontology Import and a knowledge base of ontology, instances and rules).]
Disciple-RKF is the most current shell from the Disciple family. Its architecture is organized into three main components: 1) the intelligent user interface, 2) the teaching, learning and problem solving component, and 3) the knowledge base management component. The intelligent user interface component allows the experts to communicate with Disciple in a manner as close as possible to the way they communicate in their environment. The teaching, learning and problem solving component is responsible for knowledge formation and problem solving, and contains specific modules for domain modeling, rule learning and refinement, as well as for mixed-initiative and autonomous problem solving. The knowledge base management component contains modules for managing the knowledge base of Disciple and for importing knowledge from external knowledge repositories.
46
Knowledge base: Object ontology + reasoning rules
A hierarchical representation of the types of objects from the COG domain. A hierarchical representation of the types of features of the objects from the COG domain.
47
Task reduction rule
IF: Test whether the will of the people can make a state accept the strategic goal of an opposing force
  The will of the people is ?O1; The state is ?O2; The opposing force is ?O3; The goal is ?O4
Plausible Upper Bound Condition:
  ?O1 is will_of_agent
  ?O2 is force, has_as_people ?O6, has_as_governing_body ?O5
  ?O3 is (strategic_COG_relevant_factor agent)
  ?O4 is force_goal
  ?O5 is representative_democracy, has_as_will ?O7
  ?O6 is people, has_as_will ?O1
  ?O7 is will_of_agent, reflects ?O1
Plausible Lower Bound Condition:
  ?O1 is will_of_the_people_of_US_1943
  ?O2 is US_1943, has_as_people ?O6, has_as_governing_body ?O5
  ?O3 is European_Axis_1943
  ?O4 is Dominance_of_Europe_by_European_Axis
  ?O5 is government_of_US_1943, has_as_will ?O7
  ?O6 is people_of_US_1943, has_as_will ?O1
  ?O7 is will_of_the_government_of_US_1943, reflects ?O1
THEN: Test whether the will of the people, that controls the government, can make a state accept the strategic goal of an opposing force
  The will of the people is ?O1; The government is ?O5; The state is ?O2; The opposing force is ?O3; The goal is ?O4
48
The developed Disciple approach: modeling the problem solving process of the subject matter expert and development of the object ontology of the agent, followed by teaching of the agent by the subject matter expert.
49
Instructable agents: the Disciple approach
Introduction Modeling expert’s reasoning Rule learning Integrated modeling, learning, solving Application and evaluation Multi-agent problem solving and learning
50
Application domain Identification of strategic Center of Gravity candidates in war scenarios The center of gravity of an entity (state, alliance, coalition, or group) is the foundation of capability, the hub of all power and movement, upon which everything depends, the point against which all the energies should be directed. Carl Von Clausewitz, “On War,” 1832. If a combatant eliminates or influences the enemy’s strategic center of gravity, then the enemy will lose control of its power and resources and will eventually fall to defeat. If the combatant fails to adequately protect his own strategic center of gravity, he invites disaster.
51
… Modeling the identification of center of gravity candidates
I need to Identify and test a strategic COG candidate for the Sicily_1943 scenario.
What kind of scenario is Sicily_1943? Sicily_1943 is a war scenario.
Therefore I need to Identify and test a strategic COG candidate for Sicily_1943, which is a war scenario.
Which is an opposing force in the Sicily_1943 scenario? Allied_Forces_1943.
Therefore I need to Identify and test a strategic COG candidate for Allied_Forces_1943.
Is Allied_Forces_1943 a single-member force or a multi-member force? Allied_Forces_1943 is a multi-member force.
Therefore I need to Identify and test a strategic COG candidate for Allied_Forces_1943, which is a multi-member force. …
(Similarly, European_Axis_1943 is the other opposing force; therefore I need to Identify and test a strategic COG candidate for European_Axis_1943. …)
52
… I need to Test the will of the people of US_1943, which is a strategic COG candidate with respect to the people of US_1943.
What is the strategic goal of European_Axis_1943? ‘Dominance of Europe by European Axis’.
Therefore I need to Test whether the will of the people of US_1943 can make US_1943 accept the strategic goal of European_Axis_1943, which is ‘Dominance of Europe by European Axis’.
Let us assume that the people of US_1943 would accept ‘Dominance of Europe by European Axis’. Could the people of US_1943 make the government of US_1943 accept ‘Dominance of Europe by European Axis’? Yes, because US_1943 is a representative democracy and the government of US_1943 reflects the will of the people of US_1943.
Therefore I need to Test whether the will of the people of US_1943, that controls the government of US_1943, can make US_1943 accept the strategic goal of European_Axis_1943, which is ‘Dominance of Europe by European Axis’.
Let us assume that the people of US_1943 would accept ‘Dominance of Europe by European Axis’. Could the people of US_1943 make the military of US_1943 accept ‘Dominance of Europe by European Axis’? Yes, because US_1943 is a representative democracy and the will of the military of US_1943 reflects the will of the people of US_1943.
Therefore conclude that the will of the people of US_1943 is a strategic COG candidate that cannot be eliminated.
53
Modeling advisor: helping the expert to express his reasoning
54
Instructable agents: the Disciple approach
Introduction Modeling expert’s reasoning Rule learning Integrated modeling, learning, solving Application and evaluation Multi-agent problem solving and learning
55
Rule learning: from natural language to logic
Natural language example:
I need to Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943.
Which is a member of Allied_Forces_1943? US_1943.
Therefore I need to Identify and test a strategic COG candidate for US_1943.
Informal structure of the learned rule:
IF Identify and test a strategic COG candidate corresponding to a member of the ?O1
Question: Which is a member of ?O1? Answer: ?O2
THEN Identify and test a strategic COG candidate for ?O2
Formal structure of the learned rule:
IF Identify and test a strategic COG candidate corresponding to a member of a force; The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition: ?O1 is multi_member_force, has_as_member ?O2; ?O2 is force
Plausible Lower Bound Condition: ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force
THEN Identify and test a strategic COG candidate for a force; The force is ?O2
56
1. Formalize the tasks
Natural language: I need to Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943. Formalized: Identify and test a strategic COG candidate corresponding to a member of a force; The force is Allied_Forces_1943.
Natural language: Therefore I need to Identify and test a strategic COG candidate for US_1943. Formalized: Identify and test a strategic COG candidate for a force; The force is US_1943.
57
Task formalization interface
58
2. Find an explanation of why the example is correct
I need to Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943. Which is a member of Allied_Forces_1943? US_1943. Therefore I need to Identify and test a strategic COG candidate for US_1943.
The explanation is the best possible approximation of the question and the answer in the object ontology: Allied_Forces_1943 has_as_member US_1943.
59
Explanation generation and selection interface
60
3. Generate the plausible version space rule
The explanation Allied_Forces_1943 has_as_member US_1943 is rewritten as ?O1 has_as_member ?O2, with the specific condition: ?O1 is Allied_Forces_1943, has_as_member ?O2; ?O2 is US_1943.
IF Identify and test a strategic COG candidate corresponding to a member of a force; The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition (most general generalization of the specific condition): ?O1 is multi_member_force, has_as_member ?O2; ?O2 is force
Plausible Lower Bound Condition (most specific generalization): ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force
THEN Identify and test a strategic COG candidate for a force; The force is ?O2
What is the justification for these generalizations?
61
Analogical reasoning
The explanation of the initial example (Allied_Forces_1943 has_as_member US_1943) is generalized into an analogy criterion: ?O1 is a multi_member_force that has_as_member ?O2, which is a force. The explanation is less general than the analogy criterion, which also covers similar explanations such as European_Axis_1943 has_as_member Germany_1943.
Initial example: I need to Identify and test a strategic COG candidate corresponding to a member of a force, The force is Allied_Forces_1943; Therefore I need to Identify and test a strategic COG candidate for a force, The force is US_1943 (explained by Allied_Forces_1943 has_as_member US_1943).
Similar example: I need to Identify and test a strategic COG candidate corresponding to a member of a force, The force is European_Axis_1943; Therefore I need to Identify and test a strategic COG candidate for a force, The force is Germany_1943 (possibly explained by European_Axis_1943 has_as_member Germany_1943).
62
Generalization by analogy
From the explanation ?O1 has_as_member ?O2 (an instance of multi_member_force having as member a force), the initial example
I need to Identify and test a strategic COG candidate corresponding to a member of a force, The force is Allied_Forces_1943; Therefore I need to Identify and test a strategic COG candidate for a force, The force is US_1943
is generalized by analogy to
I need to Identify and test a strategic COG candidate corresponding to a member of a force, The force is ?O1; Therefore I need to Identify and test a strategic COG candidate for a force, The force is ?O2.
Knowledge-base constraints on the generalization:
Any value of ?O1 should be an instance of DOMAIN(has_as_member) ∩ RANGE(The force is) = multi_member_force ∩ force = multi_member_force.
Any value of ?O2 should be an instance of RANGE(has_as_member) ∩ RANGE(The force is) = force ∩ force = force.
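The intersection of two concepts in the ontology can be sketched as "the more specific of the two when one subsumes the other". The tiny parent table and function names below are illustrative assumptions consistent with the forces hierarchy shown later, not Disciple's actual implementation.

```python
# Sketch of the knowledge-base constraint: the plausible upper bound for a variable
# is the intersection of the feature's domain/range with the task argument's range.

PARENT = {"multi_member_force": "force", "single_state_force": "force"}   # assumed fragment

def is_subconcept(a, b):
    """True if a equals b or lies below b in the hierarchy."""
    while a:
        if a == b:
            return True
        a = PARENT.get(a)
    return False

def intersect(a, b):
    """Return the more specific concept when one subsumes the other, else None."""
    if is_subconcept(a, b):
        return a
    if is_subconcept(b, a):
        return b
    return None

# Upper bound for ?O1: DOMAIN(has_as_member) intersected with RANGE(The force is)
print(intersect("multi_member_force", "force"))   # multi_member_force
# Upper bound for ?O2: RANGE(has_as_member) intersected with RANGE(The force is)
print(intersect("force", "force"))                # force
```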
63
Instructable agents: the Disciple approach
Introduction Modeling expert’s reasoning Rule learning Integrated modeling, learning, solving Application and evaluation Multi-agent problem solving and learning
64
Mixed-Initiative Problem Solving
[Diagram: control of modeling, learning and solving. An input task enters the mixed-initiative problem solver, which uses the ontology + rules to generate reductions; each generated reduction is either accepted (leading to rule and task refinement) or rejected (leading to rule refinement and a new reduction obtained through modeling, formalization and learning), until a solution is produced.]
This slide shows the interaction between the expert and the agent when the agent has already learned some rules. This interaction is governed by the mixed-initiative problem solver. The expert formulates the initial task. Then the agent attempts to reduce this task by using the previously learned rules. Let us assume that the agent succeeds in proposing a reduction of the current task. The expert has to accept it if it is correct, or reject it if it is incorrect. If the reduction proposed by the agent is accepted by the expert, the rule that generated it and its component tasks are generalized. Then the process resumes, with the agent attempting to reduce the new task. If the reduction proposed by the agent is rejected, the agent will have to specialize the rule, and possibly its component tasks. In this case the expert will have to indicate the correct reduction, going through the normal steps of modeling, formalization, and learning. Similarly, when the agent cannot propose a reduction of the current task, the expert will have to indicate it, again going through the steps of modeling, formalization and learning. The control of this interaction is done by the mixed-initiative problem solving tool.
65
The learning and refinement loop:
1. The expert provides an example: I need to Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943; Which is a member of Allied_Forces_1943? US_1943; Therefore I need to Identify and test a strategic COG candidate for US_1943.
2. The agent learns Rule_15 from it.
3. The agent applies Rule_15 to a new situation: I need to Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943; Which is a member of European_Axis_1943? Germany_1943; Therefore I need to Identify and test a strategic COG candidate for Germany_1943.
4. The expert accepts the example.
5. The agent refines Rule_15.
66
Rule refinement with a positive example
Positive example that satisfies the upper bound:
I need to Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943; Therefore I need to Identify and test a strategic COG candidate for Germany_1943; explanation: European_Axis_1943 has_as_member Germany_1943.
The rule:
IF Identify and test a strategic COG candidate corresponding to a member of a force; The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition: ?O1 is multi_member_force, has_as_member ?O2; ?O2 is force
Plausible Lower Bound Condition (less general): ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force
THEN Identify and test a strategic COG candidate for a force; The force is ?O2
Condition satisfied by the positive example: ?O1 is European_Axis_1943, has_as_member ?O2; ?O2 is Germany_1943.
67
Minimal generalization of the plausible lower bound
Plausible Lower Bound Condition (from the rule): ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force
Condition satisfied by the positive example: ?O1 is European_Axis_1943, has_as_member ?O2; ?O2 is Germany_1943
Their minimal generalization becomes the New Plausible Lower Bound Condition: ?O1 is multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force
which is less general than (or at most as general as) the Plausible Upper Bound Condition: ?O1 is multi_member_force, has_as_member ?O2; ?O2 is force.
68
Forces hierarchy
[Diagram: composition_of_forces hierarchy. force specializes into single_member_force and multi_member_force; single_member_force into single_state_force (e.g., US_1943, Germany_1943) and single_group_force; multi_member_force into multi_state_force and multi_group_force; multi_state_force into multi_state_alliance and multi_state_coalition; multi_state_alliance into dominant_partner_multi_state_alliance and equal_partners_multi_state_alliance, and similarly for coalitions. Instances shown include Allied_Forces_1943 and European_Axis_1943.]
multi_state_alliance is the minimal generalization of equal_partners_multi_state_alliance that covers European_Axis_1943.
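This minimal-generalization step can be sketched as climbing the hierarchy as little as possible. The parent table is a hand-copied fragment of the hierarchy above, and the assumption that European_Axis_1943 is a dominant-partner alliance (so that the equal-partners concept does not cover it) is mine, made to reproduce the slide's result; the function names are also illustrative.

```python
# Sketch of minimal generalization of the plausible lower bound over the forces hierarchy.

PARENT = {
    "equal_partners_multi_state_alliance": "multi_state_alliance",
    "dominant_partner_multi_state_alliance": "multi_state_alliance",
    "multi_state_alliance": "multi_state_force",
    "multi_state_force": "multi_member_force",
    "multi_member_force": "force",
}
INSTANCE_OF = {                         # assumed classifications of the two instances
    "Allied_Forces_1943": "equal_partners_multi_state_alliance",
    "European_Axis_1943": "dominant_partner_multi_state_alliance",
}

def ancestors(concept):
    """Yield the concept and all of its ancestors, bottom-up."""
    while concept:
        yield concept
        concept = PARENT.get(concept)

def covers(concept, instance):
    return concept in ancestors(INSTANCE_OF[instance])

def minimal_generalization(lower_bound, new_instance):
    """Climb from the lower bound as little as possible until the new instance is covered."""
    for concept in ancestors(lower_bound):
        if covers(concept, new_instance):
            return concept
    return None

print(minimal_generalization("equal_partners_multi_state_alliance",
                             "European_Axis_1943"))   # multi_state_alliance
```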
69
Refined rule
The original rule and the refined rule differ only in the plausible lower bound condition:
IF Identify and test a strategic COG candidate corresponding to a member of a force; The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition: ?O1 is multi_member_force, has_as_member ?O2; ?O2 is force
Plausible Lower Bound Condition: before refinement, ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2, ?O2 is single_state_force; after refinement, ?O1 is multi_state_alliance, has_as_member ?O2, ?O2 is single_state_force (still less general than the upper bound)
THEN Identify and test a strategic COG candidate for a force; The force is ?O2
70
What are some learning strategies used by Disciple?
Rule learning Rule refinement
71
Instructable agents: the Disciple approach
Introduction Modeling expert’s reasoning Rule learning Integrated modeling, learning, solving Application and evaluation Multi-agent problem solving and learning
72
Use of Disciple in Army War College courses
319jw Case Studies in Center of Gravity Analysis (COG), Term II and Term III: students develop input scenarios and use Disciple as an intelligent assistant that supports them in developing a Center of Gravity analysis report for a war scenario.
589 Military Applications of Artificial Intelligence (MAAI), Term III: students develop agents; they act as subject matter experts who teach personal Disciple agents their own reasoning in Center of Gravity analysis.
73
COG Winter 2002: Expert assessments
The use of Disciple is an assignment that is well suited to the course's learning objectives Disciple helped me to learn to perform a strategic center of gravity analysis of a scenario The use of Disciple was a useful learning experience Disciple should be used in future versions of this course
74
589 MAAI Spring 2002: SME assessment
I think that a subject matter expert can use Disciple to build an agent, with limited assistance from a knowledge engineer
75
Instructable agents: the Disciple approach
Introduction Modeling expert’s reasoning Rule learning Integrated modeling, learning, solving Application and evaluation Multi-agent problem solving and learning
76
Modeling, problem solving, and learning as collaborative agents
[Diagram: collaborating agents for modeling, learning and mixed-initiative problem solving, including a Rule Analogy Engine, Explanation Generator, Rule Analyzer, Example Composer, Example Analyzer, Reasoning Tree Analyzer and Solution Analyzer. They exchange items such as rules similar to the current example, new correct examples for learning, partially learned rules with informal and formal descriptions, new positive and negative examples for rule refinement, analyses of the instantiations of a rule, informal descriptions of the reasoning, new situations that need to be modeled, and reasoning trees similar to the current model.]
77
Question-answering based task reduction
A complex problem solving task is performed by: successively reducing it to simpler tasks; finding the solutions of the simplest tasks; successively composing these solutions until the solution to the initial task is obtained.
[Diagram: a task reduction tree in which each task Ti is reduced, guided by a question Qi and its answer Ai, to subtasks whose solutions Si are composed bottom-up.]
Let T1 be the problem solving task to be performed. Finding a solution is an iterative process where, at each step, we consider some relevant information that leads us to reduce the current task to a simpler task or to several simpler tasks. The question Q associated with the current task identifies the type of information to be considered. The answer A identifies that piece of information and leads us to the reduction of the current task.
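A generic reduce-and-compose loop of this kind can be sketched as a small recursive procedure. The tree below, the task names and the composition rule are illustrative only; they are not Disciple's internal representation.

```python
# Sketch of question-answering based task reduction: reduce a task via a
# question/answer, solve the simplest tasks, and compose the sub-solutions.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    name: str
    question: str = ""
    answer: str = ""
    subtasks: List["Task"] = field(default_factory=list)
    leaf_solution: str = ""

def solve(task: Task, compose: Callable[[List[str]], str]) -> str:
    if not task.subtasks:                          # simplest task: solve directly
        return task.leaf_solution
    sub_solutions = [solve(t, compose) for t in task.subtasks]
    return compose(sub_solutions)                  # compose solutions bottom-up

t = Task("Identify and test a strategic COG candidate for Sicily_1943",
         question="What kind of scenario is Sicily_1943?",
         answer="a war scenario",
         subtasks=[Task("… for Allied_Forces_1943", leaf_solution="candidate A"),
                   Task("… for European_Axis_1943", leaf_solution="candidate B")])

print(solve(t, compose=lambda xs: "; ".join(xs)))   # candidate A; candidate B
```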
78
Intelligence Analysis
79
A possible multi-agent architecture for Disciple
[Diagram: each expert interacts with a personalized learning assistant through an information-interaction agent. The assistant includes support agents with their own knowledge bases, a local shared KB, a model of the expert, an ontology + rules knowledge base, a KB-awareness agent and an external-expertise agent; the assistants of different experts share a global KB.]
80
Bibliography
Mitchell T.M., Machine Learning, McGraw Hill, 1997.
Shavlik J.W. and Dietterich T. (Eds.), Readings in Machine Learning, Morgan Kaufmann, 1990.
Buchanan B. and Wilkins D. (Eds.), Readings in Knowledge Acquisition and Learning: Automating the Construction and the Improvement of Programs, Morgan Kaufmann, 1992.
Langley P., Elements of Machine Learning, Morgan Kaufmann, 1996.
Michalski R.S., Carbonell J.G. and Mitchell T.M. (Eds.), Machine Learning: An Artificial Intelligence Approach, Morgan Kaufmann, 1983 (Vol. 1), 1986 (Vol. 2).
Kodratoff Y. and Michalski R.S. (Eds.), Machine Learning: An Artificial Intelligence Approach (Vol. 3), Morgan Kaufmann, 1990.
Michalski R.S. and Tecuci G. (Eds.), Machine Learning: A Multistrategy Approach (Vol. 4), Morgan Kaufmann, San Mateo, CA, 1994.
Tecuci G. and Kodratoff Y. (Eds.), Machine Learning and Knowledge Acquisition: Integrated Approaches, Academic Press, 1995.
Tecuci G., Building Intelligent Agents: An Apprenticeship Multistrategy Learning Theory, Methodology, Tool and Case Studies, Academic Press, 1998.