
1  G.Tecuci, Learning Agents Laboratory, Department of Computer Science, George Mason University. Gheorghe Tecuci, tecuci@cs.gmu.edu, http://lalab.gmu.edu/ CS 785, Fall 2001

2  G.Tecuci, Learning Agents Laboratory Overview Rule learning Agent teaching: Hands-on experience Illustration of rule learning in other domains Required reading A gentle introduction to Machine Learning

3  G.Tecuci, Learning Agents Laboratory A gentle introduction to Machine Learning Empirical inductive learning from examples Explanation-based learning Learning by analogy Abductive learning What is Learning? Multistrategy learning

4  G.Tecuci, Learning Agents Laboratory What is Learning? Learning is a very general term denoting the way in which people and computers: (1) acquire and organize knowledge (by building, modifying and organizing internal representations of some external reality); (2) discover new knowledge and theories (by creating hypotheses that explain some data or phenomena); or (3) acquire skills (by gradually improving their motor or cognitive skills through repeated practice, sometimes involving little or no conscious thought). Learning results in changes in the agent (or mind) that improve its competence and/or efficiency.

5  G.Tecuci, Learning Agents Laboratory Representative learning strategies: Rote learning; Learning from instruction; Learning from examples; Explanation-based learning; Conceptual clustering; Quantitative discovery; Abductive learning; Learning by analogy; Instance-based learning; Reinforcement learning; Neural networks; Genetic algorithms and evolutionary computation; Bayesian learning; Multistrategy learning.

6  G.Tecuci, Learning Agents Laboratory Empirical inductive learning from examples The learning problem Given a set of positive examples (E1,..., En) of a concept a set of negative examples (C1,..., Cm) of the same concept a learning bias other background knowledge Determine a concept description which is a generalization of the positive examples that does not cover any of the negative examples Purpose of concept learning: Predict if an instance is an example of the learned concept.
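A minimal sketch of this problem formulation, assuming examples are attribute-value dictionaries and the target is a conjunctive concept (the function name and toy data are illustrative, not part of the lecture):

```python
def learn_conjunctive_concept(positives, negatives):
    """Learn a conjunctive concept: the attribute-value pairs shared by
    all positive examples (their maximally specific generalization)."""
    # Start from the first positive example and keep only the
    # attribute-value pairs that every other positive example shares.
    concept = dict(positives[0])
    for example in positives[1:]:
        concept = {a: v for a, v in concept.items() if example.get(a) == v}
    # The generalization must not cover any negative example.
    for example in negatives:
        if all(example.get(a) == v for a, v in concept.items()):
            raise ValueError("no consistent conjunctive concept exists")
    return concept

# Toy cup data: the positives share has_handle and flat_bottom.
positives = [
    {"has_handle": True, "flat_bottom": True, "color": "white"},
    {"has_handle": True, "flat_bottom": True, "color": "red"},
]
negatives = [{"has_handle": False, "flat_bottom": True, "color": "white"}]
print(learn_conjunctive_concept(positives, negatives))
# {'has_handle': True, 'flat_bottom': True}
```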

7  G.Tecuci, Learning Agents Laboratory General approach: compare the positive and the negative examples of a concept, in terms of their similarities and differences, and learn the concept as a generalized description of the similarities of the positive examples. This allows the agent to recognize other entities as being instances of the learned concept. Illustration: Positive examples of cups: P1, P2, ... Negative examples of cups: N1, ... Description of the cup concept: has-handle(x), ...

8  G.Tecuci, Learning Agents Laboratory The concept learning problem (illustration): cautious learner. Positive examples: COA511 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION 10; COA51 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION 5. Negative examples: COA31 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION 0; COA61 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION 30. Concept1: ?O1 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION ?N1, ?N1 IS-IN [5 … 10].

9  G.Tecuci, Learning Agents Laboratory The concept learning problem (illustration): aggressive learner. Positive examples: COA511 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION 10; COA51 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION 5. Negative examples: COA31 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION 0; COA61 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION 30. Concept2: ?O1 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION ?N1, ?N1 IS-IN [1 … 29].

10  G.Tecuci, Learning Agents Laboratory Basic idea of version space concept learning. Initialize the lower bound to the first positive example (LB = E1) and the upper bound (UB) to the most general generalization of E1. Consider the remaining examples E2, …, En in sequence. If the next example is a positive one, then generalize LB as little as possible to cover it. If the next example is a negative one, then specialize UB as little as possible to uncover it and to remain more general than LB. Repeat the above two steps with the rest of the examples until UB = LB. This is the learned concept.

11  G.Tecuci, Learning Agents Laboratory Generalization with a positive example. A difficulty in learning is that there are many ways in which a concept can be generalized to cover a new positive example. Illustration: concept ((?x yellow) (2 green)); new positive example ((1 green) (2 yellow)).

12  G.Tecuci, Learning Agents Laboratory Specialization with a negative example. Another difficulty in learning is that there are many ways in which a concept can be specialized to uncover a negative example. Illustration: concept ((?x yellow) (2 green)); new negative example ((1 green) (2 yellow)).

13  G.Tecuci, Learning Agents Laboratory The Version Space learning method (Mitchell, 1978). Because there are many ways to generalize or to specialize concepts based on the examples, LB and UB are not single elements but sets of elements (UB toward the more general end of the space, LB toward the more specific end). The sets LB and UB are very large for complex problems. The method requires an exhaustive set of examples.

14  G.Tecuci, Learning Agents Laboratory The version space method (cont.) The learning method: 1. Initialize S to the first positive example and G to its most general generalization. 2. Accept a new training instance I. If I is a positive example, then remove from G all the concepts that do not cover I, and generalize the elements in S as little as possible to cover I. If I is a negative example, then remove from S all the concepts that cover I, and specialize the elements in G as little as possible to uncover I while remaining more general than at least one element from S. 3. Repeat step 2 until G = S and this set contains a single concept description.
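A compact sketch of this candidate elimination loop for conjunctive hypotheses over attribute tuples, with "?" as the "any value" wildcard (a simplification of Mitchell's method; following step 1, the first example is assumed positive, and names are illustrative):

```python
# Hypotheses are tuples over attribute values; "?" matches anything.
WILD = "?"

def covers(h, x):
    return all(hv == WILD or hv == xv for hv, xv in zip(h, x))

def min_generalize(s, x):
    """Generalize S as little as possible to cover positive example x."""
    return tuple(sv if sv == xv else WILD for sv, xv in zip(s, x))

def min_specializations(g, s, x):
    """Specialize g as little as possible to exclude negative example x
    while staying more general than S."""
    return [g[:i] + (s[i],) + g[i + 1:]
            for i, gv in enumerate(g)
            if gv == WILD and s[i] != WILD and s[i] != x[i]]

def candidate_elimination(examples, n_attrs):
    (first, _), rest = examples[0], examples[1:]  # first example: positive
    S = first                      # lower bound (maximally specific)
    G = [(WILD,) * n_attrs]        # upper bound (maximally general)
    for x, positive in rest:
        if positive:
            S = min_generalize(S, x)
            G = [g for g in G if covers(g, S)]
        else:
            G = [h for g in G if covers(g, x)
                 for h in min_specializations(g, S, x)] + \
                [g for g in G if not covers(g, x)]
    return S, G

examples = [(("sunny", "warm", "high"), True),
            (("rainy", "cold", "high"), False),
            (("sunny", "warm", "low"), True)]
S, G = candidate_elimination(examples, 3)
print("S =", S)   # S = ('sunny', 'warm', '?')
print("G =", G)   # G = [('sunny', '?', '?'), ('?', 'warm', '?')]
```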

15  G.Tecuci, Learning Agents Laboratory Illustration of the version space learning method

16  G.Tecuci, Learning Agents Laboratory General features of the empirical inductive methods: they require many examples; they do not need much domain knowledge; they improve the competence of the agent. The version space method is too computationally intensive to have practical applications. Practical empirical inductive learning methods (such as ID3) rely on heuristic search to hypothesize the concept.
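A sketch of the heuristic at the core of ID3's search: the information-gain measure used to pick the attribute to split on (toy data only, not the full tree-building algorithm):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr):
    """Entropy reduction from splitting (attributes, label) pairs on attr."""
    labels = [label for _, label in examples]
    remainder = 0.0
    for v in {attrs[attr] for attrs, _ in examples}:
        subset = [label for attrs, label in examples if attrs[attr] == v]
        remainder += len(subset) / len(examples) * entropy(subset)
    return entropy(labels) - remainder

data = [
    ({"outlook": "sunny", "wind": "weak"}, "+"),
    ({"outlook": "sunny", "wind": "strong"}, "+"),
    ({"outlook": "rain", "wind": "strong"}, "-"),
    ({"outlook": "rain", "wind": "weak"}, "+"),
]
# ID3 greedily picks the attribute with the highest gain at each node.
best = max(["outlook", "wind"], key=lambda a: information_gain(data, a))
print(best, information_gain(data, best))
```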

17  G.Tecuci, Learning Agents Laboratory Explanation-based learning (EBL) The EBL problem. Given: A concept example: cup(o1) ← color(o1, white), made-of(o1, plastic), light-mat(plastic), has-handle(o1), has-flat-bottom(o1), up-concave(o1), ... Background knowledge: cup(x) ← liftable(x), stable(x), open-vessel(x). liftable(x) ← light(x), graspable(x). stable(x) ← has-flat-bottom(x). ... Goal: the learned concept should have only operational features (e.g. features present in the examples). Determine: An operational concept definition: cup(x) ← made-of(x, y), light-mat(y), has-handle(x), has-flat-bottom(x), up-concave(x).

18  G.Tecuci, Learning Agents Laboratory Explanation-based learning (cont.) EBL learns to recognize more efficiently the examples of a concept by proving that a specific instance is an example of it, and thus identifying the characteristic features of the concept. An example of a cup: cup(o1): color(o1, white), made-of(o1, plastic), light-mat(plastic), has-handle(o1), has-flat-bottom(o1), up-concave(o1), ... The proof (cup(o1) from stable(o1) and liftable(o1); liftable(o1) from light(o1) and graspable(o1); light(o1) from made-of(o1, plastic) and light-mat(plastic); graspable(o1) from has-handle(o1); ...) identifies the characteristic features: has-handle(o1) is needed to prove cup(o1); made-of(o1, plastic) is needed to prove cup(o1); color(o1, white) is not needed to prove cup(o1). Proof generalization then generalizes these features (cup(x) from stable(x) and liftable(x); light(x) from made-of(x, y) and light-mat(y); graspable(x) from has-handle(x); ...): made-of(o1, plastic) is generalized to made-of(x, y); the material need not be plastic.
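A toy sketch of this proof-then-generalize idea on the cup example, using a propositional rendering of the rules (real EBL also generalizes the arguments, e.g. plastic to a variable y; the function names are illustrative):

```python
# Horn rules: head predicate -> list of body predicates.
RULES = {
    "cup": ["liftable", "stable", "open-vessel"],
    "liftable": ["light", "graspable"],
    "stable": ["has-flat-bottom"],
    "light": ["made-of", "light-mat"],
    "graspable": ["has-handle"],
    "open-vessel": ["up-concave"],
}

def explain(goal, facts):
    """Prove goal from facts via RULES; return the operational leaves
    (features of the example) actually used in the proof."""
    if goal in facts:            # operational feature observed in the example
        return [goal]
    leaves = []
    for sub in RULES[goal]:      # expand the goal through its rule
        leaves += explain(sub, facts)
    return leaves

example = {"color", "made-of", "light-mat", "has-handle",
           "has-flat-bottom", "up-concave"}
# The operational definition keeps only the features the proof needs;
# 'color' is in the example but never used, so it is dropped.
print(explain("cup", example))
# ['made-of', 'light-mat', 'has-handle', 'has-flat-bottom', 'up-concave']
```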

19  G.Tecuci, Learning Agents Laboratory General features of Explanation-based learning: it needs only one example; it requires complete knowledge about the concept (which makes this learning strategy impractical); it improves the agent's efficiency in problem solving.

20  G.Tecuci, Learning Agents Laboratory Learning by analogy. The learning problem: learn new knowledge about an input entity by transferring it from a known similar entity. The learning method: ACCESS: find a known entity that is analogous with the input entity. MATCHING: match the two entities and hypothesize knowledge. EVALUATION: test the hypotheses. LEARNING: store or generalize the new knowledge.

21  G.Tecuci, Learning Agents Laboratory Learning by analogy: illustration. The hydrogen atom is like our solar system.

22  G.Tecuci, Learning Agents Laboratory Learning by analogy: illustration. The hydrogen atom is like our solar system. The Sun has a greater mass than the Earth and attracts it, causing the Earth to revolve around the Sun. The nucleus also has a greater mass than the electron and attracts it. Therefore it is plausible that the electron also revolves around the nucleus. General idea of analogical transfer: similar causes have similar effects.
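A minimal sketch of the MATCHING and EVALUATION steps on this example, assuming facts are predicate triples and the entity correspondence has already been found by the ACCESS step (all data and names here are illustrative):

```python
# Causal knowledge about the source analog (the solar system).
source_facts = {("greater-mass", "sun", "earth"), ("attracts", "sun", "earth")}
source_causes = {
    frozenset(source_facts): ("revolves-around", "earth", "sun"),
}

# Correspondence hypothesized by the MATCHING step.
mapping = {"sun": "nucleus", "earth": "electron"}

def translate(fact, mapping):
    pred, a, b = fact
    return (pred, mapping.get(a, a), mapping.get(b, b))

def transfer(target_facts):
    """If the translated causes hold in the target, hypothesize the
    translated effect ('similar causes have similar effects')."""
    hypotheses = []
    for causes, effect in source_causes.items():
        if all(translate(f, mapping) in target_facts for f in causes):
            hypotheses.append(translate(effect, mapping))
    return hypotheses

target_facts = {("greater-mass", "nucleus", "electron"),
                ("attracts", "nucleus", "electron")}
print(transfer(target_facts))  # [('revolves-around', 'electron', 'nucleus')]
```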

23  G.Tecuci, Learning Agents Laboratory Abductive learning. The learning problem: find the hypothesis that best explains an observation (or collection of data) and assume it to be true. The learning method: Let D be a collection of data. Find all the hypotheses that (causally) explain D. Find the hypothesis H that explains D better than the other hypotheses. Assert that H is true. Illustration: There is smoke in the East building. Fire causes smoke. Hypothesize that there is a fire in the East building.
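A minimal sketch of this method, assuming each candidate hypothesis comes with the set of observations it explains and an illustrative plausibility score (abduction is plausible inference, not deduction; the data here is invented for the example):

```python
# Causal rules: hypothesis -> (observations it explains, plausibility).
CAUSES = {
    "fire-in-east-building": ({"smoke-in-east-building"}, 0.6),
    "smoke-machine-test": ({"smoke-in-east-building"}, 0.3),
}

def abduce(observations):
    """Return the highest-scoring hypothesis that explains all the data.
    The result is assumed true, though the inference is not sound."""
    candidates = [(h, score) for h, (effects, score) in CAUSES.items()
                  if observations <= effects]
    if not candidates:
        return None
    return max(candidates, key=lambda hs: hs[1])[0]

print(abduce({"smoke-in-east-building"}))  # fire-in-east-building
```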

24  G.Tecuci, Learning Agents Laboratory Multistrategy learning Multistrategy learning is concerned with developing learning agents that synergistically integrate two or more learning strategies in order to solve learning tasks that are beyond the capabilities of the individual learning strategies that are integrated.

25  G.Tecuci, Learning Agents Laboratory Complementariness of learning strategies:
                            Learning from examples   Explanation-based learning   Multistrategy learning
Examples needed             many                     one                          several
Knowledge needed            very little              complete knowledge           incomplete knowledge
Effect on agent's behavior  improves competence      improves efficiency          improves competence and/or efficiency
Type of inference           induction                deduction                    induction and/or deduction

26  G.Tecuci, Learning Agents Laboratory Overview Rule learning Agent teaching: Hands-on experience Illustration of rule learning in other domains Required reading A gentle introduction to Machine Learning

27  G.Tecuci, Learning Agents Laboratory Explanation of an example Rule generalization by analogy General presentation of the rule learning method Task formalization Rule learning Characterization of the learned rule More on explanation generation by analogy Hint-based explanation generation

28  G.Tecuci, Learning Agents Laboratory The rule learning problem: definition. GIVEN: an example of a problem solving episode; a knowledge base that includes an object ontology and a set of problem solving rules; an expert that understands why the given example is correct and may answer the agent's questions. DETERMINE: a plausible version space rule that is an analogy-based generalization of the specific problem solving episode.

29  G.Tecuci, Learning Agents Laboratory The rule learning problem: input example. IF the task to accomplish is: Identify the strategic COG candidates with respect to the industrial civilization of US_1943. Question: Who or what is a strategically critical industrial civilization element in US_1943? Answer: Industrial_capacity_of_US_1943. THEN: industrial_capacity_of_US_1943 is a strategic COG candidate for US_1943.

30  G.Tecuci, Learning Agents Laboratory The rule learning problem: learned PVS rule.
INFORMAL STRUCTURE OF THE RULE. IF: Identify the strategic COG candidates with respect to the industrial civilization of ?O1. Question: Who or what is a strategically critical industrial civilization element in ?O1? Answer: ?O2. THEN: ?O2 is a strategic COG candidate for ?O1.
FORMAL STRUCTURE OF THE RULE. IF: Identify the strategic COG candidates with respect to the industrial civilization of a force. The force is ?O1. explanation: ?O1 has_as_industrial_factor ?O2; ?O2 is_a_major_generator_of ?O3. Plausible Upper Bound Condition: ?O1 IS Force, has_as_industrial_factor ?O2; ?O2 IS Industrial_factor, is_a_major_generator_of ?O3; ?O3 IS Product. Plausible Lower Bound Condition: ?O1 IS US_1943, has_as_industrial_factor ?O2; ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3; ?O3 IS War_materiel_and_transports_of_US_1943. THEN: A strategic COG relevant factor is strategic COG candidate for a force. The force is ?O1. The strategic COG relevant factor is ?O2.

31  G.Tecuci, Learning Agents Laboratory Basic steps of the rule learning method: 1. Formalize and learn the tasks. 2. Find a formal explanation of why the example is correct. This explanation is the best possible approximation of the question and the answer, in the object ontology. 3. Rewrite the example and the explanation into a very specific rule. 4. Generalize the condition of the specific rule and generate a plausible version space rule. 5. Refine the rule by learning from additional positive and negative examples (according to the rule refinement method).

32  G.Tecuci, Learning Agents Laboratory 1. Formalize the tasks. Informal example: IF the task to accomplish is: Identify the strategic COG candidates with respect to the industrial civilization of US_1943. Question: Who or what is a strategically critical industrial civilization element in US_1943? Answer: Industrial_capacity_of_US_1943. THEN: industrial_capacity_of_US_1943 is a strategic COG candidate for US_1943. Formalized: IF the task to accomplish is: Identify the strategic COG candidates with respect to the industrial civilization of a force. The force is US_1943. THEN: A strategic COG relevant factor is strategic COG candidate for a force. The force is US_1943. The strategic COG relevant factor is Industrial_capacity_of_US_1943.

33  G.Tecuci, Learning Agents Laboratory Task learning. INFORMAL STRUCTURE OF THE TASK: Identify the strategic COG candidates with respect to the industrial civilization of US_1943. FORMAL STRUCTURE OF THE TASK: Identify the strategic COG candidates with respect to the industrial civilization of a force. The force is ?O1. Plausible upper bound condition: ?O1 IS Force. Plausible lower bound condition: ?O1 IS US_1943. (Ontology fragment: Force has subconcepts Opposing_force, Single_state_force, Multi_state_force, and Multi_state_alliance with its subconcept Equal_partners_multi_state_alliance; Anglo_allies_1943 is an instance with component states US_1943 and Britain_1943. A related task: Identify the strategic COG candidates with respect to the civilization of ?O2 which is a member of ?O1.)

34  G.Tecuci, Learning Agents Laboratory 2. Find an explanation of why the example is correct. IF the task to accomplish is: Identify the strategic COG candidates with respect to the industrial civilization of US_1943. Question: Who or what is a strategically critical industrial civilization element in US_1943? Answer: Industrial_capacity_of_US_1943. THEN: industrial_capacity_of_US_1943 is a strategic COG candidate for US_1943. Explanation: US_1943 has_as_industrial_factor Industrial_capacity_of_US_1943; Industrial_capacity_of_US_1943 is_a_major_generator_of War_materiel_and_transports_of_US_1943. These facts justify why the industrial capacity is a strategically critical element. The explanation is the best possible approximation of the question and the answer, in the object ontology.

35  G.Tecuci, Learning Agents Laboratory 3. Rewrite the example and the explanation as a specific rule. The example: IF the task to accomplish is: Identify the strategic COG candidates with respect to the industrial civilization of a force. The force is US_1943. THEN: A strategic COG relevant factor is strategic COG candidate for a force. The force is US_1943. The strategic COG relevant factor is Industrial_capacity_of_US_1943. Because: US_1943 has_as_industrial_factor Industrial_capacity_of_US_1943; Industrial_capacity_of_US_1943 is_a_major_generator_of War_materiel_and_transports_of_US_1943. The specific rule: IF: Identify the strategic COG candidates with respect to the industrial civilization of a force. The force is ?O1. explanation: ?O1 has_as_industrial_factor ?O2; ?O2 is_a_major_generator_of ?O3. Condition: ?O1 IS US_1943, has_as_industrial_factor ?O2; ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3; ?O3 IS War_materiel_and_transports_of_US_1943. THEN: A strategic COG relevant factor is strategic COG candidate for a force. The force is ?O1. The strategic COG relevant factor is ?O2.

36  G.Tecuci, Learning Agents Laboratory 4. Generalize the condition and generate the PVS rule. The specific Condition (?O1 IS US_1943, has_as_industrial_factor ?O2; ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3; ?O3 IS War_materiel_and_transports_of_US_1943) is kept as the Plausible Lower Bound Condition, and its generalization becomes the Plausible Upper Bound Condition: ?O1 IS Force, has_as_industrial_factor ?O2; ?O2 IS Industrial_factor, is_a_major_generator_of ?O3; ?O3 IS Product. The resulting rule: IF: Identify the strategic COG candidates with respect to the industrial civilization of a force. The force is ?O1. explanation: ?O1 has_as_industrial_factor ?O2; ?O2 is_a_major_generator_of ?O3. THEN: A strategic COG relevant factor is strategic COG candidate for a force. The force is ?O1. The strategic COG relevant factor is ?O2.

37  G.Tecuci, Learning Agents Laboratory 5. Refine the rule Refine the rule by learning from additional positive and negative examples (according to the rule refinement method to be presented in the next session). During rule refinement the two conditions will converge toward one another.

38  G.Tecuci, Learning Agents Laboratory Explanation of an example Rule generalization by analogy General presentation of the rule learning method Task formalization Rule learning Characterization of the learned rule More on explanation generation by analogy Hint-based explanation generation

39  G.Tecuci, Learning Agents Laboratory The task formalization method. Informal task: Identify the strategic COG candidates with respect to the controlling element of Britain_1943 which is a member of Anglo_allies_1943. Task name: Identify the strategic COG candidates with respect to the controlling element of a state which is a member of an opposing force. Task features: The opposing force is Anglo_allies_1943. The state is Britain_1943. Sample formalization rule: obtain the task name by replacing each specific instance with a more general concept; for each replaced instance, define a task feature of the form "The concept is instance".

40  G.Tecuci, Learning Agents Laboratory The task formalization method (cont.) Another formalization of the same task: Task name: Identify the strategic COG candidates corresponding to the controlling element of a member of an opposing force. Task features: The force is Anglo_allies_1943. The member is Britain_1943. Any other formalization is acceptable if: the task name does not contain any instance or constant; and each instance from the informal task appears in a feature of the formalized task.
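A minimal sketch of the sample formalization rule above, assuming a small instance-to-concept map stands in for the object ontology (the map, feature names, and function are illustrative):

```python
# Instance -> concept map (a tiny stand-in for the object ontology).
CONCEPT_OF = {
    "Britain_1943": "a state",
    "Anglo_allies_1943": "an opposing force",
}
FEATURE_NAME = {"Britain_1943": "state", "Anglo_allies_1943": "opposing force"}

def formalize(informal_task):
    """Replace each instance with a more general concept and emit one
    'The <concept> is <instance>' feature per replaced instance."""
    name, features = informal_task, []
    for instance, concept in CONCEPT_OF.items():
        if instance in name:
            name = name.replace(instance, concept)
            features.append(f"The {FEATURE_NAME[instance]} is {instance}")
    return name, features

task = ("Identify the strategic COG candidates with respect to the "
        "controlling element of Britain_1943 which is a member of "
        "Anglo_allies_1943")
name, feats = formalize(task)
print(name)
for f in feats:
    print(" ", f)
```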

41  G.Tecuci, Learning Agents Laboratory Sample task formalizations. I need to: Identify the strategic COG candidates for the Sicily_1943 scenario. Formalized: Identify the strategic COG candidates for a scenario. The scenario is Sicily_1943. Question: Which is an opposing force in the Sicily_1943 scenario? Answer: Anglo_allies_1943. Therefore I need to: Identify the strategic COG candidates for Anglo_allies_1943. Formalized: Identify the strategic COG candidates for an opposing force. The opposing force is Anglo_allies_1943. Question: Is Anglo_allies_1943 a single member force or a multi-member force? Answer: Anglo_allies_1943 is a multi-member force. Therefore I need to: Identify the strategic COG candidates for the Anglo_allies_1943 which is a multi-member force. Formalized: Identify the strategic COG candidates for an opposing force which is a multi-member force. The opposing force is Anglo_allies_1943.

42  G.Tecuci, Learning Agents Laboratory Explanation of an example Rule generalization by analogy General presentation of the rule learning method Task formalization Rule learning Characterization of the learned rule More on explanation generation by analogy Hint-based explanation generation

43  G.Tecuci, Learning Agents Laboratory Explanation of the example. Natural language: Identify the strategic COG candidates with respect to the industrial civilization of US_1943. Question: Who or what is a strategically critical industrial civilization element in US_1943? Answer: Industrial_capacity_of_US_1943. Conclusion: industrial_capacity_of_US_1943 is a strategic COG candidate for US_1943. Logic: Identify the strategic COG candidates with respect to the industrial civilization of a force. The force is US_1943. A strategic COG relevant factor is strategic COG candidate for a force. The force is US_1943. The strategic COG relevant factor is Industrial_capacity_of_US_1943. explanation: US_1943 has_as_industrial_factor Industrial_capacity_of_US_1943; Industrial_capacity_of_US_1943 is_a_major_generator_of War_materiel_and_transports_of_US_1943.

44  G.Tecuci, Learning Agents Laboratory What is the form of the explanation? The explanation is a sequence of object relationships that correspond to fragments of the object ontology: US_1943 --- has_as_industrial_factor --> Industrial_capacity_of_US_1943 --- is_a_major_generator_of --> War_materiel_and_transports_of_US_1943.

45  G.Tecuci, Learning Agents Laboratory Automatic generation of plausible explanations. IF: Identify the strategic COG candidates with respect to the industrial civilization of US_1943. Question: Who or what is a strategically critical industrial civilization element in US_1943? Answer: Industrial_capacity_of_US_1943. THEN: industrial_capacity_of_US_1943 is a strategic COG candidate for US_1943. The agent uses analogical reasoning heuristics and general heuristics to generate a list of plausible explanations from which the expert has to select the correct ones: US_1943 --- has_as_industrial_factor --> Industrial_capacity_of_US_1943; US_1943 <-- component_state --- Anglo_Allies_1943; ...

46  G.Tecuci, Learning Agents Laboratory Sample strategies for explanation generation. Analogical reasoning heuristic: Look for a rule Rk that reduces the current task T1. Extract the explanations Eg from the rule Rk. Look for explanations of the current task reduction that are similar to Eg. General heuristics: Look for the relationships between the objects from the question and the answer. Look for the relationships between an object from the IF task and an object from the question or the answer.
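A sketch of the two general heuristics, assuming the object ontology is stored as subject-relation-object triples (a simplification of the actual knowledge base; data and names are illustrative):

```python
# The object ontology as (subject, relation, object) triples.
ONTOLOGY = [
    ("US_1943", "has_as_industrial_factor", "Industrial_capacity_of_US_1943"),
    ("Industrial_capacity_of_US_1943", "is_a_major_generator_of",
     "War_materiel_and_transports_of_US_1943"),
    ("Anglo_Allies_1943", "component_state", "US_1943"),
]

def plausible_explanations(task_objects, qa_objects):
    """General heuristics: relationships among the question/answer
    objects, and between a task object and a question/answer object."""
    interesting = task_objects | qa_objects
    return [(s, r, o) for s, r, o in ONTOLOGY
            if s in interesting and o in interesting]

candidates = plausible_explanations(
    task_objects={"US_1943"},
    qa_objects={"US_1943", "Industrial_capacity_of_US_1943"})
for triple in candidates:
    print(triple)   # the expert selects the correct explanation pieces
```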

47  G.Tecuci, Learning Agents Laboratory Explanation of an example Rule generalization by analogy General presentation of the rule learning method Task formalization Rule learning Characterization of the learned rule More on explanation generation by analogy Hint-based explanation generation

48  G.Tecuci, Learning Agents Laboratory User hint: selecting an object from the example. IF: Identify the strategic COG candidates with respect to the industrial civilization of US_1943. Question: Who or what is a strategically critical industrial civilization element in US_1943? Answer: Industrial_capacity_of_US_1943. THEN: industrial_capacity_of_US_1943 is a strategic COG candidate for US_1943. The expert selects an object from the example (Hint: Industrial_capacity_of_US_1943). The agent generates a list of plausible explanations containing that object: Industrial_capacity_of_US_1943 IS …; Industrial_capacity_of_US_1943 <-- has_as_industrial_factor --- US_1943 …; Industrial_capacity_of_US_1943 --- is_a_major_generator_of --> War_materiel_and_transports_of_US_1943 …. The expert selects the correct explanation(s).

49  G.Tecuci, Learning Agents Laboratory Hint refinement. IF: Identify the strategic COG candidates for Anglo_allies_1943. Question: Is Anglo_allies_1943 a single member force or a multi-member force? Answer: Anglo_allies_1943 is a multi-member force. THEN: Identify the strategic COG candidates for the Anglo_allies_1943 which is a multi-member force. The expert provides a hint: Anglo_allies_1943. The agent generates a list of plausible explanations containing that object: Anglo_allies_1943 IS … [EXPAND]; Anglo_allies_1943 --- component_state --> US_1943; …. The expert selects one expression that ends in "…" and clicks on EXPAND. The agent generates the list of possible expansions of the expression and the expert selects the explanation: Anglo_allies_1943 IS Strategic_COG_relevant_factor; Anglo_allies_1943 IS Force; Anglo_allies_1943 IS Military_factor; Anglo_allies_1943 IS Multi_member_force; Anglo_allies_1943 IS Multi_state_force; Anglo_allies_1943 IS Multi_state_alliance; Anglo_allies_1943 IS Equal_partners_multi_state_alliance.

50  G.Tecuci, Learning Agents Laboratory Hint refinement (another example). Hint: Britain_1943. Generate: Britain_1943 has_as_civilization Civilization_of_Britain_1943; Britain_1943 <-- component_state --- Anglo_allies_1943; Britain_1943 has_as_governing_body Governing_body_of_Britain_1943; …. Select and expand: Britain_1943 --- has_as_governing_body --> Governing_body_of_Britain_1943 --- …. Select explanation: Britain_1943 --- has_as_governing_body --> Governing_body_of_Britain_1943 --- has_as_dominant_psychosocial_factor --> "the will of the people". (Other expansions: has_as_political_leader PM_Churchill; has_as_ruling_political_party Conservative_party.)

51  G.Tecuci, Learning Agents Laboratory No explanation necessary. IF: Identify the strategic COG candidates for Britain_1943, a member of Anglo_allies_1943. Question: What type of strategic COG candidates should I consider for Britain_1943? Answer: I consider strategic COG candidates with respect to the governing element of Britain_1943. THEN: Identify the strategic COG candidates with respect to the governing element of Britain_1943, a member of Anglo_allies_1943. Sometimes no formal explanation is necessary. In this example, for instance, each time I want to identify the strategic COG candidates for a state, such as Britain, I would like to also consider the candidates with respect to the governing element of this state. We need to invoke Rule Learning, but then quit it without selecting any explanation. The agent will generalize this example to a rule.

52  G.Tecuci, Learning Agents Laboratory Explanation of an example Rule generalization by analogy General presentation of the rule learning method Task formalization Rule learning Characterization of the learned rule More on explanation generation by analogy Hint-based explanation generation

53  G.Tecuci, Learning Agents Laboratory Analogical reasoning. Initial example: IF the task to accomplish is: Identify the strategic COG candidates with respect to the industrial civilization of a force. The force is US_1943. THEN: A strategic COG relevant factor is strategic COG candidate for a force. The force is US_1943. The strategic COG relevant factor is Industrial_capacity_of_US_1943. This is explained by: US_1943 has_as_industrial_factor Industrial_capacity_of_US_1943; Industrial_capacity_of_US_1943 is_a_major_generator_of War_materiel_and_transports_of_US_1943. Analogy criterion (both specific explanations are less general than it): ?O1 (Force) has_as_industrial_factor ?O2 (Industrial_factor); ?O2 is_a_major_generator_of ?O3 (Product). Similar explanation: Germany_1943 has_as_industrial_factor Industrial_capacity_of_Germany_1943; Industrial_capacity_of_Germany_1943 is_a_major_generator_of War_materiel_and_fuel_of_Germany_1943. Does it explain the similar example? IF the task to accomplish is: Identify the strategic COG candidates with respect to the industrial civilization of a force. The force is Germany_1943. THEN: A strategic COG relevant factor is strategic COG candidate for a force. The force is Germany_1943. The strategic COG relevant factor is Industrial_capacity_of_Germany_1943.

54  G.Tecuci, Learning Agents Laboratory Generalization by analogy. The specific example and its explanation (US_1943 has_as_industrial_factor Industrial_capacity_of_US_1943; Industrial_capacity_of_US_1943 is_a_major_generator_of War_materiel_and_transports_of_US_1943) are generalized to: IF the task to accomplish is: Identify the strategic COG candidates with respect to the industrial civilization of a force. The force is ?O1. THEN: A strategic COG relevant factor is strategic COG candidate for a force. The force is ?O1. The strategic COG relevant factor is ?O2. Explained by: ?O1 has_as_industrial_factor ?O2; ?O2 is_a_major_generator_of ?O3, with ?O1 IS Force, ?O2 IS Industrial_factor, ?O3 IS Product. Knowledge-base constraints on the generalization: Any value of ?O1 should be an instance of: DOMAIN(has_as_industrial_factor) ∩ RANGE(The force is ?O1) = Force. Any value of ?O2 should be an instance of: RANGE(has_as_industrial_factor) ∩ DOMAIN(is_a_major_generator_of) = Industrial_factor ∩ Economic_factor = Industrial_factor. Any value of ?O3 should be an instance of: RANGE(is_a_major_generator_of) = Product.
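A sketch of these domain/range intersections, assuming each concept is represented extensionally by the set of its instances so that concept intersection reduces to set intersection (a toy stand-in for the ontology's subsumption computation; the extensions listed are invented for the example):

```python
# Each concept maps to the set of its (transitive) instances, so a
# sub-concept intersected with its super-concept yields the sub-concept.
EXTENSION = {
    "Force": {"US_1943", "Germany_1943"},
    "Economic_factor": {"Industrial_capacity_of_US_1943",
                        "Farming_of_US_1943"},
    "Industrial_factor": {"Industrial_capacity_of_US_1943"},
    "Product": {"War_materiel_and_transports_of_US_1943"},
}
# DOMAIN / RANGE signatures of the ontology features.
DOMAIN = {"has_as_industrial_factor": "Force",
          "is_a_major_generator_of": "Economic_factor"}
RANGE = {"has_as_industrial_factor": "Industrial_factor",
         "is_a_major_generator_of": "Product"}

def upper_bound(concepts):
    """Intersect concept extensions to get the most specific bound,
    then report which named concepts exactly match the result."""
    result = EXTENSION[concepts[0]]
    for c in concepts[1:]:
        result = result & EXTENSION[c]
    return [c for c, ext in EXTENSION.items() if ext == result]

# ?O2 is the value of has_as_industrial_factor and the subject of
# is_a_major_generator_of: RANGE ∩ DOMAIN = Industrial_factor.
print(upper_bound([RANGE["has_as_industrial_factor"],
                   DOMAIN["is_a_major_generator_of"]]))
```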

55  G.Tecuci, Learning Agents Laboratory Explanation of an example Rule generalization by analogy General presentation of the rule learning method Task formalization Rule learning Characterization of the learned rule More on explanation generation by analogy Hint-based explanation generation

56  G.Tecuci, Learning Agents Laboratory Learned plausible version space rule.
INFORMAL STRUCTURE OF THE RULE. IF: Identify the strategic COG candidates with respect to the industrial civilization of ?O1. Question: Who or what is a strategically critical industrial civilization element in ?O1? Answer: ?O2. THEN: ?O2 is a strategic COG candidate for ?O1.
FORMAL STRUCTURE OF THE RULE. IF: Identify the strategic COG candidates with respect to the industrial civilization of a force. The force is ?O1. explanation: ?O1 has_as_industrial_factor ?O2; ?O2 is_a_major_generator_of ?O3. Plausible Upper Bound Condition: ?O1 IS Force, has_as_industrial_factor ?O2; ?O2 IS Industrial_factor, is_a_major_generator_of ?O3; ?O3 IS Product. Plausible Lower Bound Condition: ?O1 IS US_1943, has_as_industrial_factor ?O2; ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3; ?O3 IS War_materiel_and_transports_of_US_1943. THEN: A strategic COG relevant factor is strategic COG candidate for a force. The force is ?O1. The strategic COG relevant factor is ?O2.

57  G.Tecuci, Learning Agents Laboratory Characterization of the learned rule. (Diagram: within the universe of instances, the plausible lower bound condition is contained in the plausible upper bound condition, and the example Eh is covered by the lower bound.)

58  G.Tecuci, Learning Agents Laboratory Explanation of an example Rule generalization by analogy General presentation of the rule learning method Task formalization Rule learning Characterization of the learned rule More on explanation generation by analogy Hint-based explanation generation

59  G.Tecuci, Learning Agents Laboratory Analogical reasoning heuristic. 1. Look for a rule Rk that reduces the current task T1. 2. Extract the explanations Eg from the rule Rk. 3. Look for explanations of the current task reduction that are similar to Eg. Example to be explained: IF the task to accomplish is T1, THEN accomplish T1a, …, T1d. Previously learned rule Rk: IF the task to accomplish is T1g (with explanation Eg, PUB condition, PLB condition), THEN accomplish T11g, …, T1ng. Look for explanations that are similar to Eg.

60  G.Tecuci, Learning Agents Laboratory Justification of the heuristic. This heuristic is based on the observation that the explanations of the alternative reductions of a task tend to have similar structures. The same factors are considered, but the relationships between them are different. (Diagram: a task T1 with question Q is reduced, via answers Aa, Ab, …, Ae, to tasks T1a, T1b, …, T1e, with corresponding explanations Ea, Eb, …, Ee.)

61  G.Tecuci, Learning Agents Laboratory Illustration of the heuristic in the COA domain. Task: ASSESS-MASS-FOR-COA-WITH-DECISIVE-POINT, FOR-COA ?O1, FOR-DECISIVE-POINT ?O2. Question: Does the main effort act on ?O2 with an adequate force ratio? Answer "Yes, it acts with a force ratio of ?N1" leads to ASSESS-MASS-WHEN-MAIN-EFFORT-ACTS-ON-DECISIVE-POINT-WITH-GOOD-FORCE-RATIO (FOR-COA ?O1, FOR-DECISIVE-POINT ?O2, FOR-UNIT ?O3, FOR-EFFORT ?O4, FOR-FORCE-RATIO ?N1, FOR-RECOMMENDED-FORCE-RATIO ?N2), with explanations: ?O3 ASSIGNMENT ?O4; ?O3 TASK ?O5, OBJECT-ACTED-ON ?O2; ?O5 RECOMMENDED-FORCE-RATIO ?N2; ?O5 FORCE-RATIO ?N1 >= ?N2. Answer "No, it acts with a force ratio of ?N1" leads to ASSESS-MASS-WHEN-MAIN-EFFORT-ACTS-ON-DECISIVE-POINT-WITH-POOR-FORCE-RATIO (same task features), with explanations: ?O3 ASSIGNMENT ?O4; ?O3 TASK ?O5, OBJECT-ACTED-ON ?O2; ?O5 RECOMMENDED-FORCE-RATIO ?N2; ?O5 FORCE-RATIO ?N1 < ?N2.

62  G.Tecuci, Learning Agents Laboratory Illustration (cont.) New example: IF the task to accomplish is: ASSESS-MASS-FOR-COA-WITH-DECISIVE-POINT, FOR-COA COA411, FOR-DECISIVE-POINT RED-MECH-COMPANY4, THEN accomplish the task: ASSESS-MASS-WHEN-MAIN-EFFORT-ACTS-ON-DECISIVE-POINT-WITH-POOR-FORCE-RATIO, FOR-COA COA411, FOR-DECISIVE-POINT RED-MECH-COMPANY4, FOR-UNIT BLUE-TASK-FORCE1, FOR-EFFORT MAIN-EFFORT1, FOR-FORCE-RATIO 2.0, FOR-RECOMMENDED-FORCE-RATIO 3.0. Previously learned rule R$AMFCWDP-001: IF the task to accomplish is ASSESS-MASS-FOR-COA-WITH-DECISIVE-POINT, FOR-COA ?O1, FOR-DECISIVE-POINT ?O2 (Question: …; Answer: …; Explanations: ?O3 ASSIGNMENT ?O4; ?O3 TASK ?O5, OBJECT-ACTED-ON ?O2; ?O5 RECOMMENDED-FORCE-RATIO ?N2; ?O5 FORCE-RATIO ?N1 >= ?N2; Condition: …), THEN accomplish the task ASSESS-MASS-WHEN-MAIN-EFFORT-ACTS-ON-DECISIVE-POINT-WITH-GOOD-FORCE-RATIO, FOR-COA ?O1, FOR-DECISIVE-POINT ?O2, FOR-UNIT ?O3, FOR-EFFORT ?O4, FOR-FORCE-RATIO ?N1, FOR-RECOMMENDED-FORCE-RATIO ?N2. Analogous explanations found for the new example: BLUE-TASK-FORCE1 ASSIGNMENT MAIN-EFFORT1; BLUE-TASK-FORCE1 TASK PENETRATE1, OBJECT-ACTED-ON RED-MECH-COMPANY4; PENETRATE1 RECOMMENDED-FORCE-RATIO 3.0; PENETRATE1 FORCE-RATIO 2 < 3.0.

63  G.Tecuci, Learning Agents Laboratory Illustration from the workaround domain. Task: WORKAROUND UNMINED DESTROYED BRIDGE WITH FIXED BRIDGE OVER GAP. Question: What engineering technique to use? Minor preparations: USE FIXED BRIDGE WITH MINOR PREPARATION OVER GAP; explanations: SITE203 HAS-WIDTH 12, AVLB-EQ CAN-BUILD AVLB70 MAX-GAP 17 >= 12; UNIT91010 MAX-WHEELED-MLC 20, AVLB-EQ CAN-BUILD AVLB70 MLC-RATING 70 >= 20; UNIT91010 MAX-TRACKED-MLC 40, AVLB-EQ CAN-BUILD AVLB70 MLC-RATING 70 >= 40. Gap reduction: USE FIXED BRIDGE WITH GAP REDUCTION OVER GAP; explanations: SITE103 HAS-WIDTH 25, AVLB-EQ CAN-BUILD AVLB70 MAX-GAP 17 < 25; SITE103 HAS-WIDTH 25, AVLB-EQ CAN-BUILD AVLB70 MAX-REDUCIBLE-GAP 26 >= 25; UNIT91010 MAX-WHEELED-MLC 20, AVLB-EQ CAN-BUILD AVLB70 MLC-RATING 70 >= 20; UNIT91010 MAX-TRACKED-MLC 40, AVLB-EQ CAN-BUILD AVLB70 MLC-RATING 70 >= 40. Slope reduction: USE FIXED BRIDGE WITH SLOPE REDUCTION OVER GAP; explanations: SITE103 HAS-WIDTH 25, AVLB-EQ CAN-BUILD AVLB70 MAX-GAP 17 < 25; SITE103 BED SITE106 HAS-WIDTH 12, AVLB-EQ CAN-BUILD AVLB70 MAX-GAP 17 > 12; UNIT91010 MAX-WHEELED-MLC 20, AVLB-EQ CAN-BUILD AVLB70 MLC-RATING 70 >= 20; UNIT91010 MAX-TRACKED-MLC 40, AVLB-EQ CAN-BUILD AVLB70 MLC-RATING 70 >= 40; SITE103 BANK1 SITE105 MAX-SLOPE 190, UNIT91010 DEFAULT-NEGOCIABLE-SLOPE 25 < 190; SITE103 BANK2 SITE104 MAX-SLOPE 200, UNIT91010 DEFAULT-NEGOCIABLE-SLOPE 25 < 200.

64  G.Tecuci, Learning Agents Laboratory Another heuristic. 1. Look for a rule Rk that reduces a similar task to similar subtasks. 2. Extract the explanations Eg from the rule Rk. 3. Look for explanations of the current task reduction that are similar to Eg.

65  G.Tecuci, Learning Agents Laboratory Justification of the heuristic. This heuristic is based on the observation that similar problem solving episodes tend to have similar explanations. (Diagram: explanation E explains task reduction TR; for a similar reduction TR', does a similar explanation E' explain it?)

66  G.Tecuci, Learning Agents Laboratory Illustration from the workaround domain. The reduction of WORKAROUND UNMINED DESTROYED BRIDGE WITH FIXED BRIDGE OVER GAP (question: What engineering technique to use?) into USE FIXED BRIDGE WITH MINOR PREPARATION / GAP REDUCTION / SLOPE REDUCTION OVER GAP, and the reduction of WORKAROUND UNMINED DESTROYED BRIDGE WITH FLOATING BRIDGE OVER GAP into USE FLOATING BRIDGE WITH MINOR PREPARATION / SLOPE REDUCTION OVER GAP, are similar reductions, and their explanations are similar.

67  G.Tecuci, Learning Agents Laboratory Yet another heuristic. 1. Look for a rule Rk that reduces a task that is similar to the current task, even if the subtasks are not similar. 2. Extract the explanations Eg from the rule Rk. 3. Look for explanations of the current task reduction that are similar to Eg. The plausible explanations found by the agent can be ordered by their plausibility (based on the heuristics used).

68  G.Tecuci, Learning Agents Laboratory Overview Rule learning Agent teaching: Hands-on experience Illustration of rule learning in other domains Required reading A gentle introduction to Machine Learning

69  G.Tecuci, Learning Agents Laboratory Agent teaching: hands-on experience Modeling Task Formalization Rule Learning

70-78  G.Tecuci, Learning Agents Laboratory (hands-on session slides; no transcript text available)

79 Overview Rule learning Agent teaching: Hands-on experience Illustration of rule learning in other domains Required reading A gentle introduction to Machine Learning

80  G.Tecuci, Learning Agents Laboratory Illustration of rule learning in other domains Illustration in the workaround generation domain Illustration in the manufacturing domain Illustration in the assessment and tutoring domain Illustration in the course of action critiquing domain

81  G.Tecuci, Learning Agents Laboratory The COA critiquing domain. The input example: IF the task to accomplish is: Assess security wrt countering enemy reconnaissance, for-coa COA411. THEN accomplish the task: Assess security when enemy recon is present, for-coa COA411, for-unit RED-CSOP1, for-recon-action SCREEN1.

82  G.Tecuci, Learning Agents Laboratory R$ASWCER-001 IF the task to accomplish is: ASSESS-SECURITY-WRT-COUNTERING-ENEMY-RECONNAISSANCE FOR-COA ?O1 Question: Is an enemy recon unit present in ?O1 ? Answer: Yes, the enemy unit ?O2 is performing the action ?O3 which is a reconnaissance action. Then accomplish the task: ASSESS-SECURITY-WHEN-ENEMY-RECON-IS-PRESENT FOR-COA ?O1 FOR-UNIT ?O2 FOR-RECON-ACTION ?O3 Plausible Upper Bound Condition: ?O1 IS COA-SPECIFICATION-MICROTHEORY ?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE SOVEREIGN-ALLEGIANCE-OF-ORG ?O4 TASK ?O3 ?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK ?O4 IS ALLEGIANCE-OF-UNIT Plausible Lower Bound Condition: ?O1 IS COA411 ?O2 IS RED-CSOP1 SOVEREIGN-ALLEGIANCE-OF-ORG ?O4 TASK ?O3 ?O3 IS SCREEN1 ?O4 IS RED--SIDE Explanation: ?O2 SOVEREIGN-ALLEGIANCE-OF-ORG ?O4 IS RED--SIDE ?O2 TASK ?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK The learned rule

83  G.Tecuci, Learning Agents Laboratory Language to logic translation. Natural language: Assess security for COA411 wrt countering enemy reconnaissance. Question: Is an enemy reconnaissance unit present? Answer: Yes, RED-CSOP1, which is performing the reconnaissance action SCREEN1. Conclusion: Assess security for COA411 when enemy recon RED-CSOP1 is present and performs SCREEN1. Logic: Assess security wrt countering enemy reconnaissance, for-coa COA411, is reduced to: Assess security when enemy recon is present, for-coa COA411, for-unit RED-CSOP1, for-recon-action SCREEN1. Explanation: RED-CSOP1 SOVEREIGN-ALLEGIANCE-OF-ORG RED--SIDE; RED-CSOP1 TASK SCREEN1; SCREEN1 IS INTELLIGENCE-COLLECTION--MILITARY-TASK. The translation relies on analogical reasoning, limited natural language processing, and hint-based interactions; the agent then learns the rule.

84  G.Tecuci, Learning Agents Laboratory What is the form of the explanation? The explanation is a sequence of object relationships that correspond to a fragment of the object ontology: RED-CSOP1 (an instance of MODERN-MILITARY-UNIT--DEPLOYABLE) SOVEREIGN-ALLEGIANCE-OF-ORG RED--SIDE; RED-CSOP1 TASK SCREEN1; SCREEN1 INSTANCE-OF SCREEN-MILITARY-TASK, which is a SUBCLASS-OF INTELLIGENCE-COLLECTION--MILITARY-TASK, hence SCREEN1 IS INTELLIGENCE-COLLECTION--MILITARY-TASK.

85  G.Tecuci, Learning Agents Laboratory User guided explanation generation. IF the task to accomplish is: Assess security wrt countering enemy reconnaissance, for-coa COA411 (Question: Is an enemy reconnaissance unit present? Answer: Yes, RED-CSOP1, which is performing the reconnaissance action SCREEN1). THEN accomplish the task: Assess security when enemy recon is present, for-coa COA411, for-unit RED-CSOP1, for-recon-action SCREEN1. The expert provides a hint, such as: Look for explanations related to RED-CSOP1. Disciple generates plausible explanation pieces that correspond to the hint: RED-CSOP1 ECHELON-OF-UNIT PLTOON-UNIT-DESIGNATION; RED-CSOP1 EQUIPMENT-OF-UNIT ARMORED-PERSONNEL-CARRIER-BTR60; RED-CSOP1 SOVEREIGN-ALLEGIANCE-OF-ORG RED--SIDE; RED-CSOP1 OBJECT-FOUND-IN-LOCATION LOC5. The user selects the good explanation piece: RED-CSOP1 SOVEREIGN-ALLEGIANCE-OF-ORG RED--SIDE.

86  G.Tecuci, Learning Agents Laboratory Types of hints: the BETWEEN hint. BETWEEN hint: Look for the relations between RED-CSOP1 and SCREEN1. (For the example: IF Assess security wrt countering enemy reconnaissance, for-coa COA411; THEN Assess security when enemy recon is present, for-coa COA411, for-unit RED-CSOP1, for-recon-action SCREEN1.) The agent searches for relations linking the two objects, e.g. RED-CSOP1 TASK SCREEN1, among the other candidate paths between RED-CSOP1 and SCREEN1.

87  G.Tecuci, Learning Agents Laboratory Types of hints: the FROM hint. FROM hint: Look for a path that starts from RED-CSOP1. The agent searches for relation paths starting at RED-CSOP1, e.g. RED-CSOP1 SOVEREIGN-ALLEGIANCE-OF-ORG RED--SIDE, among the other candidate paths.

88  G.Tecuci, Learning Agents Laboratory Types of hints: the IS hint. IS hint: Look for the types of SCREEN1. The agent enumerates the types of SCREEN1: SCREEN1 INSTANCE-OF SCREEN-MILITARY-TASK; SCREEN1 INSTANCE-OF INTELLIGENCE-COLLECTION--MILITARY-TASK; SCREEN1 INSTANCE-OF MILITARY-ENABLING-TASK.

89  G.Tecuci, Learning Agents Laboratory Types of hints (summary). BETWEEN hint: Look for the relations between RED-CSOP1 and SCREEN1. Explanation: RED-CSOP1 TASK SCREEN1. FROM hint: Look for a path that starts from RED-CSOP1. Explanation: RED-CSOP1 SOVEREIGN-ALLEGIANCE-OF-ORG RED--SIDE. IS hint: Look for the types of SCREEN1. Explanation: SCREEN1 IS INTELLIGENCE-COLLECTION--MILITARY-TASK.
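A sketch of the three hint types as queries over a triple store (a toy approximation of the ontology; the real agent generates richer multi-step paths and partial expressions ending in "…"):

```python
# Ontology fragment as (subject, relation, object) triples.
TRIPLES = [
    ("RED-CSOP1", "TASK", "SCREEN1"),
    ("RED-CSOP1", "SOVEREIGN-ALLEGIANCE-OF-ORG", "RED--SIDE"),
    ("RED-CSOP1", "ECHELON-OF-UNIT", "PLTOON-UNIT-DESIGNATION"),
    ("SCREEN1", "INSTANCE-OF", "SCREEN-MILITARY-TASK"),
    ("SCREEN-MILITARY-TASK", "SUBCLASS-OF",
     "INTELLIGENCE-COLLECTION--MILITARY-TASK"),
]

def between_hint(a, b):
    """BETWEEN hint: relations directly linking the two objects."""
    return [t for t in TRIPLES if {t[0], t[2]} == {a, b}]

def from_hint(a):
    """FROM hint: relation paths (here, single steps) starting at a."""
    return [t for t in TRIPLES if t[0] == a]

def is_hint(a):
    """IS hint: types of a, following INSTANCE-OF then SUBCLASS-OF."""
    types = [o for s, r, o in TRIPLES if s == a and r == "INSTANCE-OF"]
    for t in types:  # climb the class hierarchy as the list grows
        types += [o for s, r, o in TRIPLES if s == t and r == "SUBCLASS-OF"]
    return types

print(between_hint("RED-CSOP1", "SCREEN1"))
print(from_hint("RED-CSOP1"))
print(is_hint("SCREEN1"))
```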

90  G.Tecuci, Learning Agents Laboratory Analogical reasoning. Initial example: IF the task to accomplish is: Assess security wrt countering enemy reconnaissance, for-coa COA411; THEN accomplish the task: Assess security when enemy recon is present, for-coa COA411, for-unit RED-CSOP1, for-recon-action SCREEN1. Its explanation: RED-CSOP1 SOVEREIGN-ALLEGIANCE-OF-ORG RED--SIDE; RED-CSOP1 TASK SCREEN1; SCREEN1 INSTANCE-OF INTELLIGENCE-COLLECTION--MILITARY-TASK. Analogy criterion (both specific explanations are less general than it): ?O2 SOVEREIGN-ALLEGIANCE-OF-ORG ?O4; ?O2 TASK ?O3; ?O3 INSTANCE-OF INTELLIGENCE-COLLECTION--MILITARY-TASK. Similar explanation: RED-CSOP2 SOVEREIGN-ALLEGIANCE-OF-ORG RED--SIDE; RED-CSOP2 TASK SCREEN2; SCREEN2 INSTANCE-OF INTELLIGENCE-COLLECTION--MILITARY-TASK. Does it explain the similar example? IF the task to accomplish is: Assess security wrt countering enemy reconnaissance, for-coa COA61; THEN accomplish the task: Assess security when enemy recon is present, for-coa COA61, for-unit RED-CSOP2, for-recon-action SCREEN2.

91  G.Tecuci, Learning Agents Laboratory Generalization by analogy. The example (IF the task to accomplish is: Assess security wrt countering enemy reconnaissance, for-coa COA411; THEN accomplish the task: Assess security when enemy recon is present, for-coa COA411, for-unit RED-CSOP1, for-recon-action SCREEN1), explained by RED-CSOP1 SOVEREIGN-ALLEGIANCE-OF-ORG RED--SIDE; RED-CSOP1 TASK SCREEN1; SCREEN1 INSTANCE-OF INTELLIGENCE-COLLECTION--MILITARY-TASK, is generalized to: IF the task to accomplish is: Assess security wrt countering enemy reconnaissance, for-coa ?O1; THEN accomplish the task: Assess security when enemy recon is present, for-coa ?O1, for-unit ?O2, for-recon-action ?O3; explained by ?O2 SOVEREIGN-ALLEGIANCE-OF-ORG ?O4; ?O2 TASK ?O3; ?O3 INSTANCE-OF INTELLIGENCE-COLLECTION--MILITARY-TASK. Knowledge-base constraints on the generalization: Any value of ?O1 should be an instance of: RANGE(FOR-COA) = COA-SPECIFICATION-MICROTHEORY. Any value of ?O2 should be an instance of: DOMAIN(TASK) ∩ DOMAIN(SOVEREIGN-ALLEGIANCE-OF-ORG) ∩ RANGE(FOR-UNIT) = MODERN-MILITARY-UNIT--DEPLOYABLE. Any value of ?O3 should be an instance of: RANGE(TASK) ∩ INTELLIGENCE-COLLECTION--MILITARY-TASK = INTELLIGENCE-COLLECTION--MILITARY-TASK. Any value of ?O4 should be an instance of: RANGE(SOVEREIGN-ALLEGIANCE-OF-ORG) = ALLEGIANCE-OF-UNIT.

92  G.Tecuci, Learning Agents Laboratory R$ASWCER-001 IF the task to accomplish is: ASSESS-SECURITY-WRT-COUNTERING-ENEMY-RECONNAISSANCE FOR-COA ?O1 Question: Is an enemy recon unit present in ?O1 ? Answer: Yes, the enemy unit ?O2 is performing the action ?O3 which is a reconnaissance action. THEN accomplish the task: ASSESS-SECURITY-WHEN-ENEMY-RECON-IS-PRESENT FOR-COA ?O1 FOR-UNIT ?O2 FOR-RECON-ACTION ?O3 Plausible Upper Bound Condition: ?O1 IS COA-SPECIFICATION-MICROTHEORY ?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE SOVEREIGN-ALLEGIANCE-OF-ORG ?O4 TASK ?O3 ?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK ?O4 IS ALLEGIANCE-OF-UNIT Plausible Lower Bound Condition: ?O1 IS COA411 ?O2 IS RED-CSOP1 SOVEREIGN-ALLEGIANCE-OF-ORG ?O4 TASK ?O3 ?O3 IS SCREEN1 ?O4 IS RED--SIDE Justification: ?O2 SOVEREIGN-ALLEGIANCE-OF-ORG ?O4 IS RED--SIDE ?O2 TASK ?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK Learned plausible version space rule

93  G.Tecuci, Learning Agents Laboratory Illustration of rule learning in other domains Illustration in the workaround generation domain Illustration in the manufacturing domain Illustration in the assessment and tutoring domain Illustration in the course of action critiquing domain

94  G.Tecuci, Learning Agents Laboratory Task reduction example. Task: Prepare river banks at site103 using bulldozer-unit201 to allow fording by unit10. Question: What bank needs to be prepared? Answer: Both site107 and site105 need to be reduced because their slopes are too steep for unit10. Subtasks: Reduce the slope of site107 by direct cut using bulldozer-unit201 to allow the fording by unit10. Report that bulldozer-unit201 has been obtained by unit10. Ford bulldozer-unit201 at site103. Reduce the slope of site105 by direct cut using bulldozer-unit201 to allow the fording by unit10.

95  G.Tecuci, Learning Agents Laboratory Task formalization and learning. NL description: Reduce the slope of site107, by direct cut, using bulldozer-unit201, to allow the fording of unit10. Learned task: REDUCE-THE-SLOPE OF ?O1 (PLB: SITE107; PUB: GEOGRAPHICAL-REGION) BY-DIRECT-CUT USING ?O2 (PLB: BULLDOZER-UNIT201; PUB: EQUIPMENT) TO-ALLOW-FORDING-BY ?O3 (PLB: UNIT10; PUB: MODERN-MILITARY-ORGANIZATION). NL description of the learned task: Reduce the slope of ?O1, by direct cut, using ?O2, to allow the fording of ?O3.

96  G.Tecuci, Learning Agents Laboratory Explaining the example. Task: Prepare river banks at site103 using bulldozer-unit201 to allow fording by unit10. Question: What bank needs to be prepared? Answer: Both site107 and site105 need to be reduced because their slopes are too steep for unit10. Explanations: Unit10 located-at site108, and Site103 right-approach site108, and Site103 right-bank site107. Unit10 has a default-negotiable-slope of 25 and Site107 has a max-slope of 200 > 25. Site107 is on the opposite-side of site105, and Site105 is a bank. Unit10 has a default-negotiable-slope of 25 and Site105 has a max-slope of 200 > 25. Subtasks: Reduce the slope of site107 by direct cut using bulldozer-unit201 to allow the fording by unit10. Report that bulldozer-unit201 has been obtained by unit10. Ford bulldozer-unit201 at site103. Reduce the slope of site105 by direct cut using bulldozer-unit201 to allow the fording by unit10. Then learn the rule.

97  G.Tecuci, Learning Agents Laboratory Learned Rule. IF the task to accomplish is: Prepare river banks at ?O1 using ?O2 to allow fording by ?O3, and the question "What bank needs to be prepared?" has the answer "Both ?O5 and ?O6 need to be reduced because their slopes are too steep for ?O3" because: ?O3 located-at ?O4, and ?O1 right-approach ?O4, and ?O1 right-bank ?O5; ?O3 has a default-negotiable-slope of ?N1 and ?O5 has a max-slope of ?N2 > ?N1; ?O5 is on the opposite-side of ?O6 and ?O6 is a bank; ?O3 has a default-negotiable-slope of ?N1 and ?O6 has a max-slope of ?N3 > ?N1, then decompose the task into the subtasks (?T1, ?T2, ?T3, with ordering constraints among them): Reduce the slope of ?O5 by direct cut using ?O2 to allow the fording by ?O3. Report that ?O2 has been obtained by ?O3. Ford ?O2 at ?O1. Reduce the slope of ?O6 by direct cut using ?O2 to allow the fording by ?O3. Plausible Version Space Condition. Plausible Upper Bound: ?O1 is site, right-approach ?O4, right-bank ?O5; ?O2 is military-equipment; ?O3 is military-unit, located-at ?O4, default-negotiable-slope ?N1; ?O4 is approach; ?O5 is geographical-bank, max-slope ?N2, opposite-site ?O6; ?O6 is geographical-bank, max-slope ?N3; ?N1 in [0.0, 200.0]; ?N2 in (0.0, 1000.0], ?N2 > ?N1; ?N3 in (0.0, 1000.0], ?N3 > ?N1. Plausible Lower Bound: ?O1 is site103, right-approach ?O4, right-bank ?O5; ?O2 is bulldozer-unit201; ?O3 is unit10, located-at ?O4, default-negotiable-slope ?N1; ?O4 is site108; ?O5 is site107, max-slope ?N2, opposite-site ?O6; ?O6 is site105, max-slope ?N3; ?N1 in {25.0}; ?N2 in {200.0}, ?N2 > ?N1; ?N3 in {200.0}, ?N3 > ?N1.

98  G.Tecuci, Learning Agents Laboratory Illustration of rule learning in other domains Illustration in the workaround generation domain Illustration in the manufacturing domain Illustration in the assessment and tutoring domain Illustration in the course of action critiquing domain

99  G.Tecuci, Learning Agents Laboratory Illustration in the manufacturing domain. See: G. Tecuci, Building Intelligent Agents, Academic Press, 1998, pp. 13-23, pp. 79-101 (required reading).

100  G.Tecuci, Learning Agents Laboratory Illustration of rule learning in other domains Illustration in the workaround generation domain Illustration in the manufacturing domain Illustration in the assessment and tutoring domain Illustration in the course of action critiquing domain

101  G.Tecuci, Learning Agents Laboratory Illustration in the assessment and tutoring domain. See: G. Tecuci, Building Intelligent Agents, Academic Press, 1998, pp. 23-32, pp. 179-228 (required reading).

102  G.Tecuci, Learning Agents Laboratory Required reading. G. Tecuci, Building Intelligent Agents, Academic Press, 1998, pp. 13-32, pp. 79-101, pp. 179-228 (required). Tecuci G., Boicu M., Bowman M., and Marcu D., with a commentary by Murray Burke, "An Innovative Application from the DARPA Knowledge Bases Programs: Rapid Development of a High Performance Knowledge Base for Course of Action Critiquing," invited paper for the special IAAI issue of AI Magazine, Volume 22, No. 2, Summer 2001, pp. 43-61. http://lalab.gmu.edu/publications/data/2001/COA-critiquer.pdf (required). Tecuci G., Boicu M., Wright K., Lee S.W., Marcu D., and Bowman M., "A Tutoring Based Approach to the Development of Intelligent Agents," to appear in Teodorescu H.N., Mlynek D., Kandel A., and Zimmermann H.J. (eds.), Intelligent Systems and Interfaces, Kluwer Academic Press, 2000. http://lalab.gmu.edu/cs785/Chapter-1-book-2000.doc (required).

