Knowledge Acquisition and Problem Solving


1 Knowledge Acquisition and Problem Solving
CS 785 Fall 2004
Knowledge Acquisition and Problem Solving
Mixed-initiative Problem Solving and Knowledge Base Refinement
Gheorghe Tecuci
Learning Agents Center and Computer Science Department, George Mason University

2 Overview
The Rule Refinement Problem and Method: Illustration
General Presentation of the Rule Refinement Method
Another Illustration of the Rule Refinement Method
Integrated Modeling, Learning, and Problem Solving
Characterization of the Disciple Rule Learning Method
Demo: Problem Solving and Rule Refinement
Recommended Reading

3 The rule refinement problem (definition)
GIVEN:
• a plausible version space rule;
• a positive or a negative example of the rule (i.e. a correct or an incorrect problem solving episode);
• a knowledge base that includes an object ontology and a set of problem solving rules;
• an expert who understands why the example is positive or negative and can answer the agent's questions.
DETERMINE:
• an improved rule that covers the example if it is positive, or does not cover it if it is negative;
• an extended object ontology (if needed for rule refinement).

4 Initial example from which the rule was learned
I need to
Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943
Which is a member of Allied_Forces_1943? US_1943
Therefore I need to
Identify and test a strategic COG candidate for US_1943
This is an example of a problem solving step from which the agent will learn a general problem solving rule.

5 Learned rule to be refined
INFORMAL STRUCTURE OF THE RULE
IF: Identify and test a strategic COG candidate corresponding to a member of the ?O1
Question: Which is a member of ?O1?
Answer: ?O2
THEN: Identify and test a strategic COG candidate for ?O2

FORMAL STRUCTURE OF THE RULE
IF: Identify and test a strategic COG candidate corresponding to a member of a force
    The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition:
    ?O1 is multi_member_force, has_as_member ?O2
    ?O2 is force
Plausible Lower Bound Condition:
    ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
    ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force
    The force is ?O2
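To make this structure concrete, the following is a minimal sketch of how such a plausible version space rule might be represented as data. The class and field names (PVSRule, Condition, and so on) are illustrative assumptions, not Disciple's internal representation.

```python
# A minimal, illustrative sketch (not Disciple's actual data structures) of how
# the rule above could be represented as data. All names here are assumptions.
from dataclasses import dataclass


@dataclass
class Condition:
    # variable -> (concept, {feature: value-variable}), e.g.
    # {"?O1": ("multi_member_force", {"has_as_member": "?O2"})}
    constraints: dict


@dataclass
class PVSRule:
    if_task: str
    then_task: str
    explanation: list
    upper_bound: Condition   # plausible upper bound condition
    lower_bound: Condition   # plausible lower bound condition


# The rule learned from the initial example, transcribed from this slide:
rule = PVSRule(
    if_task="Identify and test a strategic COG candidate corresponding to a member of ?O1",
    then_task="Identify and test a strategic COG candidate for ?O2",
    explanation=["?O1 has_as_member ?O2"],
    upper_bound=Condition({
        "?O1": ("multi_member_force", {"has_as_member": "?O2"}),
        "?O2": ("force", {}),
    }),
    lower_bound=Condition({
        "?O1": ("equal_partners_multi_state_alliance", {"has_as_member": "?O2"}),
        "?O2": ("single_state_force", {}),
    }),
)
```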

6 The rule refinement process
The agent uses the partially learned rules in problem solving. The solutions generated by the agent when it uses the plausible upper bound condition have to be confirmed or rejected by the expert. We will now present how the agent improves (refines) its rules based on these examples. In essence, the plausible lower bound condition is generalized and the plausible upper bound condition is specialized, the two conditions converging toward one another.
The next slide illustrates the rule refinement process. Initially the agent does not contain any task or rule in its knowledge base. The expert teaches the agent to reduce the task
Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943
to the task
Identify and test a strategic COG candidate for US_1943
From this task reduction the agent learns a plausible version space task reduction rule, as illustrated before. Now the agent can use the rule in problem solving, proposing to reduce the task
Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943
to the task
Identify and test a strategic COG candidate for Germany_1943
The expert accepts this reduction as correct, and the agent refines the rule. In the following we show the internal reasoning of the agent that corresponds to this behavior.

7 Rule refinement method
[Diagram: examples of task reductions generated by the agent (correct or incorrect, with failure explanations) drive the refinement of the PVS rule in the knowledge base, through learning from examples, learning from explanations, and learning by analogy and experimentation.]

8 Version space rule learning and refinement
Let E1 be the first task reduction from which the rule is learned. The agent learns a rule with a very specific lower bound condition (LB) and a very general upper bound condition (UB).
Let E2 be a new task reduction generated by the agent and accepted as correct by the expert. Then the agent generalizes LB as little as possible to cover it.
Let E3 be a new task reduction generated by the agent which is rejected by the expert. Then the agent specializes UB as little as possible to uncover it while remaining more general than LB.
After several iterations of this process LB may become identical with UB, and a rule with an exact condition is learned (UB = LB).

9 Learning and refining Rule_15
1. The expert provides an example:
I need to Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943
Which is a member of Allied_Forces_1943? US_1943
Therefore I need to Identify and test a strategic COG candidate for US_1943
2. The agent learns Rule_15 from this example.
3. The agent applies Rule_15 to a new task:
I need to Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943
Which is a member of European_Axis_1943? Germany_1943
Therefore I need to Identify and test a strategic COG candidate for Germany_1943
4. The expert accepts the example.
5. The agent refines Rule_15.

10 Rule refinement with a positive example
Positive example that satisfies the upper bound:
I need to Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943
Therefore I need to Identify and test a strategic COG candidate for Germany_1943
explanation: European_Axis_1943 has_as_member Germany_1943

Rule being refined:
IF: Identify and test a strategic COG candidate corresponding to a member of a force
    The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition:
    ?O1 is multi_member_force, has_as_member ?O2
    ?O2 is force
Plausible Lower Bound Condition (less general than the upper bound):
    ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
    ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force
    The force is ?O2

Condition satisfied by the positive example:
    ?O1 is European_Axis_1943, has_as_member ?O2
    ?O2 is Germany_1943

11 The upper right side of this slide shows an example generated by the agent. This example is generated because it satisfies the plausible upper bound condition of the rule (as shown by the red arrows). This example is accepted as correct by the expert. Therefore the plausible lower bound condition is generalized to cover it as shown in the following.

12 Minimal generalization of the plausible lower bound
Plausible Lower Bound Condition (from rule):
    ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
    ?O2 is single_state_force
Condition satisfied by the positive example:
    ?O1 is European_Axis_1943, has_as_member ?O2
    ?O2 is Germany_1943
Their minimal generalization is the New Plausible Lower Bound Condition:
    ?O1 is multi_state_alliance, has_as_member ?O2
    ?O2 is single_state_force
which remains less general than (or at most as general as) the Plausible Upper Bound Condition:
    ?O1 is multi_member_force, has_as_member ?O2
    ?O2 is force

13 The lower left side of this slide shows the plausible lower bound condition of the rule.
The lower right side of this slide shows the condition corresponding to the generated positive example. These two conditions are generalized as shown in the middle of this slide, by using the climbing generalization hierarchy rule. Notice, for instance, that equal_partners_multi_state_alliance and European_Axis_1943 are generalized to multi_state_alliance. This generalization is based on the object ontology, as illustrated on the following slide. Indeed, multi_state_alliance is the minimal generalization of equal_partners_multi_state_alliance that covers European_Axis_1943.

14 Fragment of the generalization hierarchy of forces
[Diagram: a fragment of the object ontology under force (composition_of_forces), including single_member_force, multi_member_force, single_state_force, single_group_force, multi_state_force, multi_group_force, multi_state_alliance, multi_state_coalition, dominant_partner_multi_state_alliance, dominant_partner_multi_state_coalition, equal_partners_multi_state_alliance, equal_partners_multi_state_coalition, and the instances US_1943, Germany_1943, European_Axis_1943, and Allied_Forces_1943.]
multi_state_alliance is the minimal generalization of equal_partners_multi_state_alliance that covers European_Axis_1943.
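The sketch below illustrates the climbing-generalization-hierarchy step on a hypothetical child-to-parent map that approximates the ontology fragment above; the exact links (in particular the placement of European_Axis_1943) are assumptions for illustration only.

```python
# Minimal sketch of the climbing-generalization-hierarchy rule over a
# hypothetical child -> parent map approximating the ontology fragment above.
# The exact links (especially for European_Axis_1943) are assumptions.
PARENT = {
    "single_member_force": "force",
    "multi_member_force": "force",
    "single_state_force": "single_member_force",
    "multi_state_force": "multi_member_force",
    "multi_state_alliance": "multi_state_force",
    "multi_state_coalition": "multi_state_force",
    "equal_partners_multi_state_alliance": "multi_state_alliance",
    "dominant_partner_multi_state_alliance": "multi_state_alliance",
    "Allied_Forces_1943": "equal_partners_multi_state_alliance",
    "European_Axis_1943": "multi_state_alliance",   # assumed placement
    "US_1943": "single_state_force",
    "Germany_1943": "single_state_force",
}


def generalizations(concept):
    """Return concept and all of its generalizations, from specific to general."""
    chain = [concept]
    while concept in PARENT:
        concept = PARENT[concept]
        chain.append(concept)
    return chain


def minimal_generalization(concept, example):
    """Climb from `concept` until the example (instance or concept) is covered."""
    covering = set(generalizations(example))
    for candidate in generalizations(concept):
        if candidate in covering:
            return candidate
    return None


# multi_state_alliance is the minimal generalization of
# equal_partners_multi_state_alliance that covers European_Axis_1943:
print(minimal_generalization("equal_partners_multi_state_alliance",
                             "European_Axis_1943"))
```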

15 Refined rule
The refined rule differs from the original only in its plausible lower bound condition, which has been generalized.
IF: Identify and test a strategic COG candidate corresponding to a member of a force
    The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition (unchanged):
    ?O1 is multi_member_force, has_as_member ?O2
    ?O2 is force
Plausible Lower Bound Condition:
    before: ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force
    after:  ?O1 is multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force
    The force is ?O2

16 Overview
The Rule Refinement Problem and Method: Illustration
General Presentation of the Rule Refinement Method
Another Illustration of the Rule Refinement Method
Integrated Modeling, Learning, and Problem Solving
Characterization of the Disciple Rule Learning Method
Demo: Problem Solving and Rule Refinement
Recommended Reading

17 The rule refinement method: general presentation
Let R be a plausible version space rule, U its plausible upper bound condition, L its plausible lower bound condition, and E a new example of the rule.
1. If E is covered by U but not covered by L then
• If E is a positive example then L needs to be generalized as little as possible to cover it, while remaining less general than or at most as general as U.
• If E is a negative example then U needs to be specialized as little as possible to no longer cover it, while remaining more general than or at least as general as L. Alternatively, both bounds need to be specialized.
2. If E is covered by L then
• If E is a positive example then R need not be refined.
• If E is a negative example then both U and L need to be specialized as little as possible to no longer cover this example while still covering the known positive examples of the rule. If this is not possible, then E represents a negative exception to the rule.
3. If E is not covered by U then
• If E is a positive example then it represents a positive exception to the rule.
• If E is a negative example then no refinement is necessary.
These are the main situations that the agent may encounter with respect to a new example; we discuss each of them in turn. Notice that U and L are each a single expression, not a set of expressions as in the classical version space method.
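As a toy illustration of this decision logic (not Disciple's implementation), the sketch below views a condition extensionally, as the set of instances it covers; in Disciple the bounds are concept expressions and the minimal generalizations and specializations are computed over the ontology instead.

```python
# Toy sketch of the refinement decision logic above. Conditions are viewed
# extensionally as sets of covered instances; this is an illustration only,
# not Disciple's intensional, ontology-based implementation.
def refine(upper, lower, example, positive, known_positives=frozenset()):
    """Return (new_upper, new_lower, note) after seeing one example."""
    in_upper, in_lower = example in upper, example in lower

    if in_upper and not in_lower:                      # case 1
        if positive:
            return upper, lower | {example}, "lower bound minimally generalized"
        return upper - {example}, lower, "upper bound minimally specialized"

    if in_lower:                                       # case 2
        if positive:
            return upper, lower, "no refinement needed"
        if example in known_positives:
            return upper, lower, "negative exception"
        return upper - {example}, lower - {example}, "both bounds specialized"

    if positive:                                       # case 3: outside U
        return upper, lower, "positive exception"
    return upper, lower, "no refinement necessary"


upper = {"US_1943", "Germany_1943", "Italy_1943"}      # instances covered by U
lower = {"US_1943"}                                    # instances covered by L
print(refine(upper, lower, "Germany_1943", positive=True))
# -> the lower bound is generalized to also cover Germany_1943
```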

18 The rule refinement method: general presentation
If E is covered by U but it is not covered by L then
• If E is a positive example then L needs to be generalized as little as possible to cover it while remaining less general than or at most as general as U.
Because both LB and E (the positive example) are less general than UB, there will always be a minimal generalization of them that is less general than, or at most as general as, UB.

19 The rule refinement method: general presentation
If E is covered by U but it is not covered by L then
• If E is a negative example then U needs to be specialized as little as possible to no longer cover it while remaining more general than or at least as general as L. Alternatively, both bounds need to be specialized.
Strategy 1: Specialize UB by using a specialization rule (e.g. the descending-the-generalization-hierarchy rule, or the specializing-a-numeric-interval rule).
Similarly, there will always be a minimal specialization of UB that does not cover E (the negative example) and is more general than, or at least as general as, LB.

20 The rule refinement method: general presentation
Strategy 2: Find a failure explanation EXw of why E is a wrong problem solving episode. EXw identifies the features that make E a wrong problem solving episode. The inductive hypothesis is that the correct problem solving episodes should not have these features. EXw is therefore taken as an example of a condition that the correct problem solving episodes should not satisfy, an Except-When condition. The Except-When condition also needs to be learned, based on additional examples. Based on EXw, an initial Except-When plausible version space condition is generated.

21 The rule refinement method: general presentation
Strategy 3: Find an additional explanation EXw of the correct problem solving episodes which is not satisfied by the current wrong problem solving episode. Specialize both bounds of the plausible version space condition by:
- adding the most general generalization of EXw, corresponding to the examples encountered so far, to the upper bound;
- adding the least general generalization of EXw, corresponding to the examples encountered so far, to the lower bound.

22 The rule refinement method: general presentation
2. If E is covered by L then
• If E is a positive example then R need not be refined.

23 The rule refinement method: general presentation
2. If E is covered by L then
• If E is a negative example then both U and L need to be specialized as little as possible to no longer cover this example while still covering the known positive examples of the rule. If this is not possible, then E represents a negative exception to the rule.
Strategy 1: Find a failure explanation EXw of why E is a wrong problem solving episode and create an Except-When plausible version space condition, as indicated before.

24 The rule refinement method: general presentation
3. If E is not covered by U then
• If E is a positive example then it represents a positive exception to the rule.
• If E is a negative example then no refinement is necessary.
Because UB is the most general generalization of the positive examples that does not cover the negative examples, it can no longer be generalized to cover the new positive example. This may change, however, if the knowledge representation is extended (e.g. by adding new concepts to the ontology), as part of the exception handling process (which is not addressed here).

25 Overview
The Rule Refinement Problem and Method: Illustration
General Presentation of the Rule Refinement Method
Another Illustration of the Rule Refinement Method
Integrated Modeling, Learning, and Problem Solving
Characterization of the Disciple Rule Learning Method
Demo: Problem Solving and Rule Refinement
Recommended Reading

26 Initial example from which a rule was learned
IF the task to accomplish is
Identify the strategic COG candidates with respect to the industrial civilization of US_1943
Who or what is a strategically critical industrial civilization element in US_1943? Industrial_capacity_of_US_1943
THEN
Industrial_capacity_of_US_1943 is a strategic COG candidate for US_1943
This slide shows an example of a problem solving step from which the agent will learn a rule. The domain used for illustration is Center of Gravity (COG) analysis.

27 Learned PVS rule to be refined
INFORMAL STRUCTURE OF THE RULE
IF: Identify the strategic COG candidates with respect to the industrial civilization of ?O1
Question: Who or what is a strategically critical industrial civilization element in ?O1?
Answer: ?O2
THEN: ?O2 is a strategic COG candidate for ?O1

FORMAL STRUCTURE OF THE RULE
IF: Identify the strategic COG candidates with respect to the industrial civilization of a force
    The force is ?O1
explanation: ?O1 has_as_industrial_factor ?O2, ?O2 is_a_major_generator_of ?O3
Plausible Upper Bound Condition:
    ?O1 IS Force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
    ?O3 IS Product
Plausible Lower Bound Condition:
    ?O1 IS US_1943, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3
    ?O3 IS War_materiel_and_transports_of_US_1943
THEN: A strategic COG relevant factor is strategic COG candidate for a force
    The force is ?O1
    The strategic COG relevant factor is ?O2

This is the rule learned from the previous example. It has both a formal structure (used for formal reasoning) and an informal structure (used to communicate more naturally with the user). The formal structure is an IF-THEN structure that specifies the condition under which the task from the IF part can be reduced to the task from the THEN part. This rule, however, is only partially learned. Instead of a single applicability condition, it has two conditions: 1) a plausible upper bound condition, which is more general than the exact (but not yet known) condition, and 2) a plausible lower bound condition, which is less general than the exact condition. Completely learning the rule means learning this exact condition. For now, we take this rule as learned from the example shown on the previous slide.

28 Positive example that satisfies the upper bound
Positive example covered by the upper bound:
IF the task to accomplish is
Identify the strategic COG candidates with respect to the industrial civilization of a force
    The force is Germany_1943
THEN accomplish the task
A strategic COG relevant factor is strategic COG candidate for a force
    The force is Germany_1943
    The strategic COG relevant factor is Industrial_capacity_of_Germany_1943
explanation: Germany_1943 has_as_industrial_factor Industrial_capacity_of_Germany_1943, Industrial_capacity_of_Germany_1943 is_a_major_generator_of War_materiel_and_fuel_of_Germany_1943

Condition satisfied by the positive example:
    ?O1 IS Germany_1943, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity_of_Germany_1943, is_a_major_generator_of ?O3
    ?O3 IS War_materiel_and_fuel_of_Germany_1943
This condition satisfies the rule's Plausible Upper Bound Condition (?O1 IS Force, ?O2 IS Industrial_factor, ?O3 IS Product), but is not covered by its Plausible Lower Bound Condition (?O1 IS US_1943, ?O2 IS Industrial_capacity_of_US_1943, ?O3 IS War_materiel_and_transports_of_US_1943).

This example may be generated by the agent during the mixed-initiative problem solving process because it satisfies the plausible upper bound condition of the rule. The example is accepted as correct by the expert. Therefore the plausible lower bound condition is generalized to cover it, as shown in the following.

29 Minimal generalization of the plausible lower bound
Plausible Lower Bound Condition (from rule):
    ?O1 IS US_1943, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3
    ?O3 IS War_materiel_and_transports_of_US_1943
Condition satisfied by the positive example:
    ?O1 IS Germany_1943, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity_of_Germany_1943, is_a_major_generator_of ?O3
    ?O3 IS War_materiel_and_fuel_of_Germany_1943
Their minimal generalization is the New Plausible Lower Bound Condition:
    ?O1 IS Single_state_force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity, is_a_major_generator_of ?O3
    ?O3 IS Strategically_essential_goods_or_materials
which remains less general than (or at most as general as) the Plausible Upper Bound Condition:
    ?O1 IS Force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
    ?O3 IS Product

The lower left side of the slide shows the plausible lower bound condition of the rule, and the lower right side shows the condition corresponding to the generated positive example. These two conditions are generalized by using the climbing generalization hierarchy rule. Notice, for instance, that US_1943 and Germany_1943 are generalized to Single_state_force. These generalizations are based on the object ontology, as illustrated in the following.

30 Generalization hierarchy of forces
[Diagram: a fragment of the object ontology under <object> and Force, including Group, Opposing_force, Multi_state_force, Single_state_force, Multi_group_force, and Single_group_force, with the instances US_1943, Britain_1943, Germany_1943, Italy_1943, Anglo_allies_1943, and European_axis_1943 connected by component_state relations.]
This is a fragment of the object ontology. Suppose that I want to learn a concept and I am told that US_1943 and Germany_1943 are positive examples of this concept. Based on this concept hierarchy I may infer that the concept might be Single_state_force. Now let us assume that I want to learn another concept and I am told that European_axis_1943 and Anglo_allies_1943 are positive examples of this concept. Again, based on this object hierarchy, I may infer that the concept might be either Opposing_force or Multi_state_force. This is how the agent performed the generalizations shown on the previous slide.

31 Generalized rule
The rule before and after the generalization differs only in its plausible lower bound condition (the instances in the old lower bound were generalized).
IF: Identify the strategic COG candidates with respect to the industrial civilization of a force
    The force is ?O1
explanation: ?O1 has_as_industrial_factor ?O2, ?O2 is_a_major_generator_of ?O3
Plausible Upper Bound Condition (unchanged):
    ?O1 IS Force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
    ?O3 IS Product
Plausible Lower Bound Condition:
    before: ?O1 IS US_1943; ?O2 IS Industrial_capacity_of_US_1943; ?O3 IS War_materiel_and_transports_of_US_1943
    after:  ?O1 IS Single_state_force; ?O2 IS Industrial_capacity; ?O3 IS Strategically_essential_goods_or_materials
THEN: A strategic COG relevant factor is strategic COG candidate for a force
    The force is ?O1
    The strategic COG relevant factor is ?O2

32 Negative example that satisfies the upper bound
Negative example covered by the upper bound:
IF the task to accomplish is
Identify the strategic COG candidates with respect to the industrial civilization of a force
    The force is Italy_1943
THEN accomplish the task
A strategic COG relevant factor is strategic COG candidate for a force
    The force is Italy_1943
    The strategic COG relevant factor is Farm_implement_industry_of_Italy_1943
explanation: Italy_1943 has_as_industrial_factor Farm_implement_industry_of_Italy_1943, Farm_implement_industry_of_Italy_1943 is_a_major_generator_of Farm_implements_of_Italy_1943

Condition satisfied by the negative example:
    ?O1 IS Italy_1943, has_as_industrial_factor ?O2
    ?O2 IS Farm_implement_industry_of_Italy_1943, is_a_major_generator_of ?O3
    ?O3 IS Farm_implements_of_Italy_1943
This condition satisfies the rule's Plausible Upper Bound Condition (?O1 IS Force, ?O2 IS Industrial_factor, ?O3 IS Product), although it is not covered by the generalized Plausible Lower Bound Condition (?O1 IS Single_state_force, ?O2 IS Industrial_capacity, ?O3 IS Strategically_essential_goods_or_materials).

Let us now consider the case of a wrong task reduction generated by the agent. This example is generated because it satisfies the plausible upper bound condition of the rule.

33 Automatic generation of plausible explanations
The rejected reasoning step (informal view):
IF: Identify the strategic COG candidates with respect to the industrial civilization of Italy_1943
Who or what is a strategically critical industrial civilization element in Italy_1943?
THEN: Industrial_capacity_of_Italy_1943 is a strategic COG candidate for Italy_1943
No!
explanation: Italy_1943 has_as_industrial_factor Farm_implement_industry_of_Italy_1943, Farm_implement_industry_of_Italy_1943 is_a_major_generator_of Farm_implements_of_Italy_1943

The agent tries to understand why the example is not correct, despite the fact that it satisfies the plausible upper bound condition of the rule. It generates a list of plausible explanations from which the expert has to select the correct one:
Farm_implement_industry_of_Italy_1943 IS_NOT Industrial_capacity
Farm_implements_of_Italy_1943 IS_NOT Strategically_essential_goods_or_materiel
In this case, the explanation generated by the agent and selected by the expert is:
Farm_implements_of_Italy_1943 IS_NOT Strategically_essential_goods_or_materiel
Therefore it would not be correct to conclude that Industrial_capacity_of_Italy_1943 is a strategic COG candidate for Italy_1943, because it does not produce a strategically essential good or material.

34 Minimal specialization of the plausible upper bound
Plausible Upper Bound Condition (from rule):
    ?O1 IS Force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
    ?O3 IS Product
Condition satisfied by the negative example:
    ?O1 IS Italy_1943, has_as_industrial_factor ?O2
    ?O2 IS Farm_implement_industry_of_Italy_1943, is_a_major_generator_of ?O3
    ?O3 IS Farm_implements_of_Italy_1943
New Plausible Upper Bound Condition (minimal specialization that excludes the negative example):
    ?O1 IS Force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
    ?O3 IS Strategically_essential_goods_or_materiel
It remains more general than (or at least as general as) the New Plausible Lower Bound Condition:
    ?O1 IS Single_state_force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity, is_a_major_generator_of ?O3
    ?O3 IS Strategically_essential_goods_or_materiel

Guided by this explanation, the agent specializes the plausible upper bound condition of the rule so that it no longer covers the negative example. In particular, Product is specialized to Strategically_essential_goods_or_materiel. Therefore ?O3 can no longer be any product; it must be a strategically essential good or material.

35 Fragment of the generalization hierarchy
[Diagram: a fragment of the object ontology under <object> and Product, including Resource_or_infrastructure_element, Strategically_essential_goods_or_materiel (with subconcepts such as War_materiel_and_transports and War_materiel_and_fuel), Non-strategically_essential_goods_or_services, Raw_material, Strategic_raw_material, Strategically_essential_resource_or_infrastructure_element, Strategically_essential_infrastructure_element, Main_airport, Main_seaport, Sole_airport, Sole_seaport, and the instance Farm_implements_of_Italy_1943.]
This fragment of the object ontology explains how the agent proposed the explanation of the negative example and why the previous specialization was performed. Suppose that we want to learn a concept and we know the following:
- its lower bound is Strategically_essential_goods_or_materiel (i.e. the concept is at least as general as this one);
- its upper bound is Product (i.e. the concept is at most as general as this one).
If we are now told that Farm_implements_of_Italy_1943 is not a positive example of this concept, then we have to specialize Product so that it no longer covers it. The only specialization possible is Strategically_essential_goods_or_materiel, which becomes the new upper bound and is the same as the lower bound. These specializations are done by the agent; the expert does not need to deal with them.
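The sketch below illustrates this specialization step over a hypothetical child-to-parent map approximating the fragment above; the placement of Farm_implements_of_Italy_1943 under Non_strategically_essential_goods_or_services is an assumption for illustration.

```python
# Minimal sketch of specializing an upper bound by descending the
# generalization hierarchy. The child -> parent links are assumptions that
# approximate the ontology fragment above, not the real Disciple ontology.
PARENT = {
    "Strategically_essential_goods_or_materiel": "Product",
    "Non_strategically_essential_goods_or_services": "Product",
    "War_materiel_and_transports": "Strategically_essential_goods_or_materiel",
    "War_materiel_and_fuel": "Strategically_essential_goods_or_materiel",
    "Farm_implements_of_Italy_1943": "Non_strategically_essential_goods_or_services",  # assumed
}
CHILDREN = {}
for child, parent in PARENT.items():
    CHILDREN.setdefault(parent, []).append(child)


def covers(concept, item):
    """True if `item` is `concept` or one of its descendants."""
    while item is not None:
        if item == concept:
            return True
        item = PARENT.get(item)
    return False


def minimal_specialization(upper, lower, negative_example):
    """Descend from `upper`, keeping a child that still covers the lower bound,
    until the negative example is no longer covered."""
    while covers(upper, negative_example):
        upper = next((c for c in CHILDREN.get(upper, []) if covers(c, lower)), None)
        if upper is None:
            return None   # cannot exclude the negative without dropping below L
    return upper


# Product is specialized to Strategically_essential_goods_or_materiel so that
# Farm_implements_of_Italy_1943 is no longer covered:
print(minimal_specialization("Product",
                             "Strategically_essential_goods_or_materiel",
                             "Farm_implements_of_Italy_1943"))
```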

36 Specialized rule
The rule before and after the specialization differs only in its plausible upper bound condition (Product was specialized to Strategically_essential_goods_or_materials). This process continues until the two bounds become identical.
IF: Identify the strategic COG candidates with respect to the industrial civilization of a force
    The force is ?O1
explanation: ?O1 has_as_industrial_factor ?O2, ?O2 is_a_major_generator_of ?O3
Plausible Upper Bound Condition:
    before: ?O1 IS Force; ?O2 IS Industrial_factor; ?O3 IS Product
    after:  ?O1 IS Force; ?O2 IS Industrial_factor; ?O3 IS Strategically_essential_goods_or_materials
Plausible Lower Bound Condition (unchanged):
    ?O1 IS Single_state_force, has_as_industrial_factor ?O2
    ?O2 IS Industrial_capacity, is_a_major_generator_of ?O3
    ?O3 IS Strategically_essential_goods_or_materials
THEN: A strategic COG relevant factor is strategic COG candidate for a force
    The force is ?O1
    The strategic COG relevant factor is ?O2

37 Overview
The Rule Refinement Problem and Method: Illustration
General Presentation of the Rule Refinement Method
Another Illustration of the Rule Refinement Method
Integrated Modeling, Learning, and Problem Solving
Characterization of the Disciple Rule Learning Method
Demo: Problem Solving and Rule Refinement
Recommended Reading

38 Mixed-Initiative Problem Solving
[Diagram: control of modeling, learning and problem solving. The expert's input task is processed by the mixed-initiative problem solver using the ontology and rules. Each generated reduction is either accepted by the expert (triggering rule refinement) or rejected (triggering rule refinement and, through modeling, task formalization and learning, the definition of a new reduction), until a solution is obtained.]

39 This slide shows the interaction between the expert and the agent when the agent has already learned some rules. This interaction is governed by the mixed-initiative problem solver. The expert formulates the initial task. Then the agent attempts to reduce this task by using the previously learned rules. Let us assume that the agent succeeds in proposing a reduction of the current task. The expert has to accept it if it is correct, or reject it if it is incorrect. If the reduction proposed by the agent is accepted by the expert, the rule that generated it and its component tasks are generalized. Then the process resumes, with the agent attempting to reduce the new task. If the reduction proposed by the agent is rejected, then the agent will have to specialize the rule, and possibly its component tasks. In this case the expert will have to indicate the correct reduction, going through the normal steps of modeling, formalization, and learning. Similarly, when the agent cannot propose a reduction of the current task, the expert will have to indicate it, again going through the steps of modeling, formalization, and learning. The control of this interaction is done by the mixed-initiative problem solver tool.

40 A systematic approach to agent teaching
[Diagram: the task "Identify and test a strategic COG candidate for the Sicily_1943 scenario" is reduced along the two opposing forces, Allied_Forces_1943 and European_Axis_1943, each considered both as individual states and as an alliance. US_1943 and Britain_1943 are each analyzed with respect to their government, people, military, economy and other factors (branches #1-#5 and #6-#10), Germany_1943 and Italy_1943 form branches #11 and #12, and the two alliances form branches #13 and #14.]

41 This slide shows a recommended order of operations for teaching the agent:
Modeling for branches #1 through #5.
Rule Learning for branches #1 through #5.
Problem solving, Rule refinement, Modeling, and Rule Learning for branches #6 through #10. You will notice that several of the rules learned from branch #1 will apply to generate branch #6. One only needs to model and teach Disciple for those steps where the previously learned rules do not apply (i.e. for the aspects where there are significant differences between US_1943 and Britain_1943 with respect to their governments). Similarly, several of the rules learned from branch #2 will apply to generate branch #7, and so on.
Problem solving, Rule refinement, Modeling, and Rule Learning for branches #11 and #12. Again, many of the rules learned from branches #1 through #10 will apply to branches #11 and #12.
Modeling for branch #13.
Rule Learning for branch #13.
Problem solving, Rule refinement, Modeling, and Rule Learning for branch #14.

42 Overview
The Rule Refinement Problem and Method: Illustration
General Presentation of the Rule Refinement Method
Another Illustration of the Rule Refinement Method
Integrated Modeling, Learning, and Problem Solving
Characterization of the Disciple Rule Learning Method
Demo: Problem Solving and Rule Refinement
Recommended Reading

43 Characterization of the PVS rule

44 This slide shows the relationship between the plausible lower bound condition, the plausible upper bound condition, and the exact (hypothetical) condition that the agent is attempting to learn. During rule learning, both the upper bound and the lower bound are generalized and specialized to converge toward one another and toward the hypothetical exact condition. This is different from the classical version space method, where the upper bound is only specialized and the lower bound is only generalized.

Notice also that, as opposed to the classical version space method (where the exact condition is always between the upper and the lower bound conditions), in Disciple the exact condition may not include part of the plausible lower bound condition, and may include a part that is outside the plausible upper bound condition. We say that the plausible lower bound is, AS AN APPROXIMATION, less general than the hypothetical exact condition. Similarly, the plausible upper bound is, AS AN APPROXIMATION, more general than the hypothetical exact condition.

These characteristics are a consequence of the incompleteness of the representation language (i.e. the incompleteness of the object ontology), of the heuristic strategies used to learn the rule, and of the fact that the object ontology may evolve during learning.

45 Characterization of the rule learning method
Uses the explanation of the first positive example to generate a much smaller version space than the classical version space method.
Conducts an efficient heuristic search of the version space, guided by explanations and by the maintenance of a single upper bound condition and a single lower bound condition.
Will always learn a rule, even in the presence of exceptions.
Learns from a few examples and an incomplete knowledge base.
Uses a form of multistrategy learning that synergistically integrates learning from examples, learning from explanations, and learning by analogy, to compensate for the incomplete knowledge.
Uses mixed-initiative reasoning to involve the expert in the learning process.
Is applicable in complex real-world domains, being able to learn within a complex representation language.

46 Problem solving with PVS rules
[Diagram: a decision table over the main PVS condition and the Except-When PVS condition, with the possible outcomes: the rule's conclusion is (most likely) correct, plausible, not plausible, or (most likely) incorrect.]
This slide shows how the agent treats an instance, based on which of the condition bounds are satisfied. The rule is assumed to have a main PVS condition and an Except-When PVS condition.
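A hedged sketch of one possible reading of this decision logic is given below; the slide's exact table is not recoverable from the transcript, so the mapping is an assumption, not Disciple's definitive behavior.

```python
# One plausible reading of the decision table above (an assumption, since the
# slide's exact layout is not recoverable from the transcript).
def assess_conclusion(main_ub, main_lb, except_ub, except_lb):
    """Each flag is True if the instance satisfies that bound of the rule."""
    if except_lb or except_ub:
        return "rule's conclusion is (most likely) incorrect"
    if not main_ub:
        return "rule's conclusion is not plausible"
    if main_lb:
        return "rule's conclusion is (most likely) correct"
    return "rule's conclusion is plausible"


# Instance satisfies only the main plausible upper bound condition:
print(assess_conclusion(main_ub=True, main_lb=False,
                        except_ub=False, except_lb=False))
```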

47 Overview
The Rule Refinement Problem and Method: Illustration
General Presentation of the Rule Refinement Method
Another Illustration of the Rule Refinement Method
Integrated Modeling, Learning, and Problem Solving
Characterization of the Disciple Rule Learning Method
Demo: Problem Solving and Rule Refinement
Recommended Reading

48 Disciple-RKF/COG: Integrated Modeling, Learning and Problem Solving

49 This is done in the Refining mode.
Disciple uses the partially learned rules in problem solving and refines them based on the expert's feedback. This is done in the Refining mode.

50 Disciple applies previously learned rules to other similar cases
The expert can expand the “More…” node to view the solution generated by the rule

51 Disciple applies the rule learned from the Republican Guard Protection Unit to the System of Saddam doubles
The “?” indicates that Disciple is uncertain whether the reasoning step is correct.

52 The expert has to examine this step and has to indicate whether it is:
correct but incompletely explained, by selecting "Explain Example";
correct and completely explained, by selecting "Correct Example";
incorrect, by selecting "Incorrect Example".

53 The expert has indicated that the reasoning step is correct and Disciple has generalized the plausible lower bound condition of the corresponding rule, to cover this example.

54 Following the same procedure, Disciple generalized the plausible lower bound condition of the rule used to generate this elementary solution.

55 This is done with the Modeling tool.
Another protection means of Saddam Hussein is the Complex of Bunkers of Iraq 2003. Since this means of protection is different from the previously identified ones, the learned rules do not apply. The expert has to provide the modeling that identifies the Complex of Bunkers of Iraq 2003 as a means of protection for Saddam Hussein and to test it for any significant vulnerabilities. This is done with the Modeling tool.

56 Disciple starts the modeling tool with the appropriate task and suggests the question to ask

57 The expert develops a complete modeling for the Complex of Bunkers of Iraq 2003
When the modeling is completed, the expert returns to the teaching tool

58 Disciple can now learn new rules from the expert's modeling of the Complex of Bunkers of Iraq 2003 as a means of protection for Saddam Hussein

59 Recommended reading
Tecuci G., Boicu M., Boicu C., Marcu D., Stanescu B., Barbulescu M., "The Disciple-RKF Learning and Reasoning Agent," Research Report submitted for publication, Learning Agents Center, George Mason University, September 2004.
Tecuci G., Building Intelligent Agents, Academic Press, 1998.
Tecuci G., Boicu M., Bowman M., and Marcu D., with a commentary by Murray Burke, "An Innovative Application from the DARPA Knowledge Bases Programs: Rapid Development of a High Performance Knowledge Base for Course of Action Critiquing," invited paper for the special IAAI issue of AI Magazine, Volume 22, No. 2, Summer 2001.
Boicu M., Tecuci G., Stanescu B., Marcu D., and Cascaval C., "Automatic Knowledge Acquisition from Subject Matter Experts," in Proceedings of the IEEE International Conference on Tools with Artificial Intelligence, Dallas, Texas, November 2001.

