Analogical Learning and Problem Solving
1 Aug 2011 Ramakrishna, Abhijit, Anup,
Duncker's (1945) "Radiation Problem"
Attack-Dispersion story (Cont.)
Known Solution
Analogy: The analogous solution to the radiation problem is to simultaneously direct multiple low-intensity rays toward the tumor from different directions. In this way the healthy tissue is left unharmed, while the effects of the multiple low-intensity rays summate and destroy the tumor.
What is it?
Machine Learning
- Supervised Learning
  - Inductive Learning (Decision Trees, Version Space, etc.)
  - Analogical Learning
- Unsupervised Learning
- Reinforcement Learning
Intuition: If two entities are similar in some respects, then they could be similar in other respects as well.
Steps in Analogical Problem Solving
- Identifying analogy: Find a known/experienced problem that is analogous to the input problem.
- Mapping: Relevant parts of the experienced problem are selected and mapped onto the input problem to transform the new problem and derive a hypothesis for the solution.
- Validation: The correctness of the newly constructed solution is checked, e.g. by theorem proving or simulation.
- Learning: If the validated solution works well, the new knowledge is encoded and saved for future use.
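A minimal Python sketch of this identify-map-validate-learn loop. The case memory and the similarity, mapping, and validation callables are hypothetical placeholders, not part of any system described here:

```python
# Minimal sketch of the identify-map-validate-learn loop.
# 'memory' is a list of (problem, solution) pairs already solved;
# similarity, map_solution and validate are assumed domain callables.

def solve_by_analogy(new_problem, memory, similarity, map_solution, validate):
    # 1. Identify: find the most similar previously solved problem.
    if not memory:
        return None
    source_problem, source_solution = max(
        memory, key=lambda case: similarity(new_problem, case[0]))

    # 2. Map: transfer the relevant parts of the old solution.
    candidate = map_solution(source_problem, source_solution, new_problem)

    # 3. Validate: check the candidate (e.g. by simulation or proof).
    if not validate(new_problem, candidate):
        return None

    # 4. Learn: store the new (problem, solution) pair for future use.
    memory.append((new_problem, candidate))
    return candidate
```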
Analogy Representation
An appropriate level of "analogical structure" needs to be represented: a representation that is too detailed or too abstract will make it hard to draw the analogy. Example:
Analogical Structures
Attack-Dispersion story: A fortress was located in the center of the country. Many roads radiated out from the fortress. A general wanted to capture the fortress with his army. The general wanted to prevent mines on the roads from destroying his army and neighboring villages. As a result the entire army could not attack the fortress along one road. However, the entire army was needed to capture the fortress, so an attack by one small group would not succeed. The general therefore divided his army into several small groups. He positioned the small groups at the heads of different roads. The small groups simultaneously converged on the fortress. In this way the army captured the fortress.

Radiation problem: A tumor was located in the interior of a patient's body. A doctor wanted to destroy the tumor with rays. The doctor wanted to prevent the rays from destroying healthy tissue. As a result the high-intensity rays could not be applied to the tumor along one path. However, high-intensity rays were needed to destroy the tumor, so applying one low-intensity ray would not succeed. The doctor therefore divided the rays into several low-intensity rays. He positioned the low-intensity rays at multiple locations around the patient's body. The low-intensity rays simultaneously converged on the tumor. In this way the rays destroyed the tumor.
Predicate Analogy
Sample predicates:
locate(fortress, centre(country)) ↔ locate(tumour, interior(body))
desire(general, capture(army, fortress)) ↔ desire(doctor, destroy(rays, tumour))
Mapping:
fortress → tumour
general → doctor
army → rays
Inference:
divide(general, small_group(army)) → divide(doctor, small_group(rays))
position(small_group(army), different_direction) → position(small_group(rays), different_direction)
simultaneous(converge(small_group(army), fortress)) → simultaneous(converge(small_group(rays), tumour))
destroy(army, fortress) → destroy(rays, tumour)
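As a rough illustration, the inference step can be treated as applying the concept mapping as a substitution over the source-domain predicates. The predicate strings below simply mirror the slide; the substitution code is an illustrative assumption, not a real analogy engine:

```python
# Toy illustration: apply the concept mapping to source-domain predicates.
mapping = {"fortress": "tumour", "general": "doctor", "army": "rays"}

source_inferences = [
    "divide(general, small_group(army))",
    "position(small_group(army), different_direction)",
    "simultaneous(converge(small_group(army), fortress))",
    "destroy(army, fortress)",
]

def translate(predicate, mapping):
    # Replace every source concept with its mapped target concept.
    for src, tgt in mapping.items():
        predicate = predicate.replace(src, tgt)
    return predicate

for p in source_inferences:
    print(translate(p, mapping))
# e.g. destroy(army, fortress) becomes destroy(rays, tumour)
```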
Analogical Problem Solving
Needs sophisticated techniques for:
- Representation of the problem and solution
- Matching the input problem with existing problems
- Drawing inferences
This is a very hard problem. Let us look at one system: EUREKA.
EUREKA
An analogical problem-solving architecture presented by Randolph M. Jones and Pat Langley.
Quick look at EUREKA: a new problem is solved by analogy to a previously solved problem.
Consists of:
- Memory module
- Problem-solving engine
(Figure: an existing block problem with blocks A, B, C and a new problem with blocks E, F, G, H, solved by analogy to the previous one.)
Memory Representation
Memory is represented as a semantic network.
(Figure: example network for the block-movement problem, in which Trans1 transforms State100 into a new state that satisfies Goal17.)
Ref: Randolph Jones, Problem Solving via Analogical Retrieval and Analogical Search Control.
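One possible way to encode such a network, assuming a simple triple-based representation; the node names come from the slide, but the relation labels and data structure are illustrative assumptions:

```python
# Hypothetical encoding of the block-movement fragment as labelled triples.
semantic_net = [
    ("Trans1", "initial-state", "State100"),   # Trans1 starts from State100
    ("Trans1", "satisfies", "Goal17"),         # ...and reaches a state satisfying Goal17
]

def neighbours(net, node):
    """Return all (relation, other-node) pairs touching the given node."""
    out = [(rel, tgt) for src, rel, tgt in net if src == node]
    out += [(rel, src) for src, rel, tgt in net if tgt == node]
    return out

print(neighbours(semantic_net, "Trans1"))
```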
Problem Solving Engine
Uses a flexible means-ends analysis algorithm. Two mutually recursive procedures are used: TRANSFORM and APPLY.
(Figure: states State P, State C, State X, State Z linked by operator Op1.)
Transform and Apply Procedures
TRANSFORM(StateX, Conditions): Returns StateZ
  If StateX satisfies Conditions
  Then Return StateZ as StateX
  Else Let Operator be SELECT_OPERATOR(StateX, Conditions);
       If Operator is empty
       Then Return StateZ as "Failed State"
       Else Let StateY be APPLY(StateX, Operator);
            If StateY is "Failed State"
            Then Return StateZ as "Failed State"
            Else Return StateZ as TRANSFORM(StateY, Conditions)

APPLY(StateX, Operator): Returns StateY
  Let P be PRECONDITIONS(Operator);
  If StateX satisfies P
  Then Return StateY as EXECUTE(StateX, Operator)
  Else Let StateW be TRANSFORM(StateX, P);
       If StateW is "Failed State"
       Then Return StateY as "Failed State"
       Else Return StateY as APPLY(StateW, Operator)

Ref: Randolph Jones, Problem Solving via Analogical Retrieval and Analogical Search Control.
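For readability, here is a Python rendering of the same two mutually recursive procedures. The domain-specific callables (satisfies, select_operator, preconditions, execute) are assumed to be supplied by the caller; they are not part of the original pseudocode:

```python
class MeansEnds:
    """Python rendering of the TRANSFORM/APPLY pseudocode above.
    satisfies, select_operator, preconditions and execute are assumed
    domain-specific callables supplied by the caller."""

    FAILED = object()  # sentinel standing in for "Failed State"

    def __init__(self, satisfies, select_operator, preconditions, execute):
        self.satisfies = satisfies
        self.select_operator = select_operator
        self.preconditions = preconditions
        self.execute = execute

    def transform(self, state, conditions):
        # TRANSFORM: reach a state that satisfies the given conditions.
        if self.satisfies(state, conditions):
            return state
        operator = self.select_operator(state, conditions)
        if operator is None:
            return self.FAILED
        next_state = self.apply(state, operator)
        if next_state is self.FAILED:
            return self.FAILED
        return self.transform(next_state, conditions)

    def apply(self, state, operator):
        # APPLY: make the operator's preconditions true, then execute it.
        pre = self.preconditions(operator)
        if self.satisfies(state, pre):
            return self.execute(state, operator)
        intermediate = self.transform(state, pre)
        if intermediate is self.FAILED:
            return self.FAILED
        return self.apply(intermediate, operator)
```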
Example: transforming from the initial state (State100) to a final state (State200) that satisfies the given goal (Goal17). Ref: Randolph Jones, Problem Solving via Analogical Retrieval and Analogical Search Control.
Identifying Analogy and Problem Solving
- Collect candidate analogical TRANSFORM goals from existing solutions, using spreading activation from the current state and goal nodes (e.g. Trans1, Trans10).
- Select the best-matching goal, based on degree of match and history data (success/failure).
- Make the concept analogy, e.g. E → A, F → B.
- Proceed as per the analogical TRANSFORM goal.
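A schematic sketch of this selection step. The activation values stand in for whatever scores spreading activation would produce, and the way they are combined with success/failure history is an illustrative assumption, not EUREKA's actual scoring rule:

```python
# Illustrative selection of an analogous TRANSFORM goal.
# 'activation' is a stand-in for the spreading-activation score;
# combining it with success history is an assumed scoring rule.
stored_goals = [
    {"name": "Trans1",  "activation": 0.8, "successes": 3, "failures": 1},
    {"name": "Trans10", "activation": 0.6, "successes": 5, "failures": 0},
]

def score(goal):
    # Degree of match weighted by smoothed past success rate.
    history = (goal["successes"] + 1) / (goal["successes"] + goal["failures"] + 2)
    return goal["activation"] * history

best = max(stored_goals, key=score)
print("Analogical TRANSFORM goal selected:", best["name"])

# Concept mapping derived from the retrieved goal (e.g. E -> A, F -> B).
concept_map = {"E": "A", "F": "B"}
```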
Identifying Analogy and Problem Solving Contd…
(Figure: closest-matching TRANSFORM goal — a network fragment with State500, Goal99, relations On5, On6, and blocks E, F.)
EUREKA – Initial Stage
If EUREKA started without any knowledge in its long-term memory, it would never be able to solve any problems, because there would be no previously solved problems on which to base its decisions. Therefore the operators are initially stored in memory in the form of simple problems.
Case Based Reasoning (CBR)
A reasoning method that uses specific past experiences rather than a corpus of general knowledge. It solves a problem by analogy: recognizing its similarity to a specific known problem and transferring the solution of the known problem to the new one. It is a form of intra-domain analogy.
CBR terminologies
- Case: denotes a problem situation.
- Past case: a previously experienced situation which has been captured and learned for reuse.
- Case-based reasoning: a cyclic and integrated process of solving a new problem and learning from that experience.
Learning in CBR is a natural by-product of problem solving.
- The experience of a successfully solved problem is retained in order to solve similar problems in the future.
- If an attempt to solve a problem fails, the reason for the failure is remembered in order to avoid the same mistake in the future.
CBR Cycle
- RETRIEVE: retrieve one or more previously experienced cases.
- REUSE: reuse the retrieved case(s) in one way or another.
- REVISE: revise the solution obtained by reusing a previous case.
- RETAIN: retain the new experience by incorporating it into the existing knowledge base.
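The four-stage cycle as a minimal Python sketch; the case base and the similarity, adapt, and repair helpers are hypothetical placeholders for the domain-specific parts of a real CBR system:

```python
# Minimal sketch of the RETRIEVE / REUSE / REVISE / RETAIN cycle.
# similarity, adapt and repair are assumed domain-specific callables;
# case_base is a list of {"problem": ..., "solution": ...} dicts.

def cbr_cycle(new_problem, case_base, similarity, adapt, repair):
    # RETRIEVE: the most similar previously experienced case.
    retrieved = max(case_base, key=lambda c: similarity(new_problem, c["problem"]))

    # REUSE: adapt the retrieved solution to the new problem.
    proposed = adapt(retrieved["solution"], new_problem)

    # REVISE: evaluate the proposed solution and repair it if necessary.
    confirmed = repair(proposed, new_problem)

    # RETAIN: store the confirmed experience for future reuse.
    case_base.append({"problem": new_problem, "solution": confirmed})
    return confirmed
```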
CBR Cycle (figure taken from: Agnar Aamodt and Enric Plaza, "Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches", AI Communications, 1994).
How are "cases" represented?
(Figure taken from: Agnar Aamodt and Enric Plaza, "Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches", AI Communications, Vol. 7, No. 1, March 1994, pp. 39-59.)
How are "cases" represented? (contd.)
In the dynamic memory model, cases that share similar properties are organized under a more general structure known as a generalized episode (GE). A generalized episode contains:
- Norms: features common to all cases indexed under the GE.
- Indices: features that discriminate between the GE's cases; an index may point to a more specific generalized episode, or directly to a case. An index is composed of an index name and an index value.
- Cases.
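One possible Python encoding of this structure, shown only to make the three components concrete; the layout is an assumption, not taken from the paper:

```python
# Hypothetical encoding of a generalized episode (GE).
from dataclasses import dataclass, field
from typing import Dict, List, Union

@dataclass
class GeneralizedEpisode:
    # Features common to all cases indexed under this GE.
    norms: List[str]
    # index name -> index value -> either a case name or a more specific GE.
    indices: Dict[str, Dict[str, Union[str, "GeneralizedEpisode"]]] = field(default_factory=dict)
    # Cases stored directly under this GE.
    cases: List[str] = field(default_factory=list)
```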
How to find a similar problem?
When a new case description is given and the best match is searched for, the input case structure is 'pushed down' the network, starting at the root node. When one or more features of the case match one or more features of a GE, the case is further discriminated based on its remaining features. Eventually, the stored case with the most features in common with the input case is found.
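A sketch of that discrimination walk, using a simplified dict layout (a GE is {"norms": [...], "indices": {feature: child}} and a case is {"name": ..., "features": [...]}); both the layout and the functions are illustrative assumptions:

```python
# Sketch of retrieval by discrimination over a GE network.
# Each index maps a feature either to a nested GE dict or to a stored case dict.

def collect_candidates(ge, case_features):
    """Walk down indices whose feature matches the input case."""
    candidates = []
    for feature, child in ge.get("indices", {}).items():
        if feature in case_features:
            if "indices" in child:                  # a more specific GE: descend
                candidates += collect_candidates(child, case_features)
            else:                                   # a concrete stored case
                candidates.append(child)
    return candidates

def best_match(ge, case_features):
    """Return the stored case sharing the most features with the input case."""
    candidates = collect_candidates(ge, case_features)
    if not candidates:
        return None
    return max(candidates,
               key=lambda c: len(set(c["features"]) & set(case_features)))
```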
How is a new case stored in the system?
When a new case is stored, it is discriminated by indexing it under different indices below its most specific generalized episode. If, during the storage of a case, two cases (or two GEs) end up under the same index, a new generalized episode is automatically created.
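A rough sketch of that storage rule, using the same illustrative dict layout as above; in particular, building the new GE's norms from the two cases' shared features is an assumption about the details:

```python
# Rough sketch of case storage (same illustrative dict layout as above).
# A GE is {"norms": [...], "indices": {...}}; a case is {"name": ..., "features": [...]}.

def store(ge, case):
    for feature in case["features"]:
        if feature in ge.get("norms", []):
            continue                               # norms do not discriminate
        child = ge["indices"].get(feature)
        if child is None:
            ge["indices"][feature] = case          # new index points to the case
        elif "indices" in child:
            store(child, case)                     # descend into the more specific GE
        else:
            # Two cases collide under one index: create a new generalized
            # episode whose norms are their shared features.
            shared = sorted(set(child["features"]) & set(case["features"]))
            new_ge = {"norms": shared, "indices": {}}
            for old in (child, case):
                for f in old["features"]:
                    if f not in shared:
                        new_ge["indices"][f] = old
            ge["indices"][feature] = new_ge
```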
An example...
(Figure: a GE network of disease cases.)
- Generalized Episode 1 — Norms: disease symptoms. Indices: fever symptoms → Generalized Episode 2; dysentery symptoms → Generalized Episode 3.
- Generalized Episode 2 — Norms: fever. Indices: influenza symptoms, swine flu symptoms, malaria symptoms. Cases: Influenza, Swine Flu, Malaria.
- Generalized Episode 3 — Norms: dysentery. Indices: food poisoning symptoms, cholera symptoms, amoebic symptoms. Cases: food poisoning, Cholera, Amoebic dysentery.
An example... Consider a doctor-patient scenario.
The disease cases already learned are stored in this tree fashion, with generalized episodes (GEs) as the entities. Suppose a new patient comes in with malaria symptoms. The current case's symptoms are compared with the already existing ones, so it is directed to the left subtree (fever). There the norms give the features common to that GE, and the cases are distinguished according to the symptoms specific to each. Ultimately the current case is directed to the index for malaria symptoms, where it is matched, and the solution of the already stored case is applied to the current case.
Protos
A case-based problem solving and learning system for heuristic classification tasks. In Protos, a concept c_i is represented extensionally as a collection of examples (called exemplars or cases): c_i = {e_i1, e_i2, ...}.
The classification and learning algorithm
Input: a set of exemplar-based categories C = {c_1, c_2, ..., c_n} and a case (NewCase) to classify.
REPEAT
  Classify: find an exemplar of c_i ∈ C that strongly matches NewCase and classify NewCase as c_i. Explain the classification.
  Learn: if the expert disagrees with the classification or explanation, then acquire classification and explanation knowledge and adjust it so that NewCase is correctly classified and explained.
UNTIL the expert approves the classification and explanation.
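A skeletal Python rendering of this loop. The matching, explanation, approval, and knowledge-acquisition steps are left as assumed callables, since Protos' actual mechanisms are far richer than shown here:

```python
# Skeletal rendering of the Protos classify-and-learn loop.
# find_strong_match, explain, expert_approves and acquire_knowledge are
# assumed stand-ins for Protos' real matching and knowledge-acquisition machinery.

def protos_loop(categories, new_case, find_strong_match, explain,
                expert_approves, acquire_knowledge):
    while True:
        # Classify: find an exemplar that strongly matches the new case.
        category, exemplar = find_strong_match(categories, new_case)
        explanation = explain(new_case, exemplar, category)

        # Learn: if the expert disagrees, adjust the category knowledge and retry.
        if expert_approves(category, explanation):
            return category
        acquire_knowledge(categories, new_case, category, explanation)
```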
Conclusion
Analogical problem solving:
- A supervised machine learning technique.
- Helps reduce the search space for the kinds of problems where an analogy can be drawn.
Case-based reasoning:
- CBR gives a new way to solve problems, emphasizing problem solving as well as learning.
- It has the potential to lead to significant breakthroughs in AI.
References
- Randolph M. Jones and Pat Langley, "Retrieval and Learning in Analogical Problem Solving", Proceedings of the Seventeenth Conference of the Cognitive Science Society, 1995.
- Randolph M. Jones and Pat Langley, "A Constrained Architecture for Learning and Problem Solving", Computational Intelligence, Vol. 21, No. 4, 2005.
- Randolph M. Jones, "Problem Solving via Analogical Retrieval and Analogical Search Control", in Foundations of Knowledge Acquisition: Machine Learning, 1993.
- Mary L. Gick and Keith J. Holyoak, "Analogical Problem Solving", Cognitive Psychology, 1983.
- Agnar Aamodt and Enric Plaza, "Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches", AI Communications, Vol. 7, No. 1, March 1994, pp. 39-59.
- R. Bareiss, "PROTOS: A Unified Approach to Concept Representation, Classification and Learning", Ph.D. Dissertation, University of Texas at Austin, Department of Computer Sciences, Technical Report AI88-83, 1988.