CS 682, AI: Case-Based Reasoning, Prof. Cindy Marling

Chapter 8: Organizational Structures and Retrieval Algorithms

This chapter deals with how to find and retrieve cases from the case base for use in later problem solving or situation assessment. You want to find the cases that are most useful for your present purposes; typically, but not always, these are the most similar cases. We use matching and ranking procedures to compare cases and determine which will be most useful. Matching and ranking are covered in the next chapter.
High Level Overview

At a high level, we want to:
1) Assess the new situation. Find its important features, that is, its indexes.
2) Search the case base for partially matching cases.
3) Retrieve the cases found.
4) Choose the best case(s).

The last two steps may be sequential or interleaved. Inserting new cases works similarly: the first two steps are the same, but instead of retrieving cases, you insert the new case near the closest matching case. How you search a case base depends on its organization.
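The four steps above can be sketched in code. This is a hypothetical illustration only: the function names, the feature dictionaries, and the MEDIATOR-style dispute data are all invented assumptions, not anything from the chapter itself.

```python
# Illustrative sketch of the four retrieval steps; all names and data
# here are assumptions made for the example, not the book's code.

def assess_situation(problem):
    """Step 1: pick out the important features (indexes) of the new situation."""
    return {k: v for k, v in problem.items() if v is not None}

def search(case_base, indexes):
    """Step 2: find cases that partially match on at least one index."""
    return [case for case in case_base
            if any(case.get(k) == v for k, v in indexes.items())]

def retrieve_and_choose(candidates, indexes, n_best=1):
    """Steps 3-4: retrieve the candidates and choose the best by match count."""
    scored = sorted(candidates,
                    key=lambda c: sum(c.get(k) == v for k, v in indexes.items()),
                    reverse=True)
    return scored[:n_best]

case_base = [
    {"dispute": "physical", "parties": "children", "object": "orange"},
    {"dispute": "physical", "parties": "countries", "object": "territory"},
]
problem = {"dispute": "physical", "parties": "children", "object": None}
indexes = assess_situation(problem)
best = retrieve_and_choose(search(case_base, indexes), indexes)
```

Here steps 3 and 4 are folded into one function, i.e., the sequential variant; an interleaved version would prune candidates while still searching.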
Example from MEDIATOR
Example from MEDIATOR (continued)
Flat Memory, Serial Search

The organization shown at the bottom of the last slide is called flat memory, serial search. All cases are stored in an array-like structure. A new case is matched against each case in the case base, and the best match or matches are returned.

Details come later, but the goodness of a match depends on:
1) How closely the cases match along each dimension
2) How important each dimension is

This is the simplest organization and retrieval method. Inserting new cases is also simple: they can be put anywhere at all. Note that case base organization is independent of case structure; the cases themselves don't have to be simple to use flat memory, serial search.
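A minimal sketch of serial search with weighted matching follows. The feature names, weights, and cases are illustrative assumptions; real matching functions (next chapter) score closeness on a dimension, not just equality.

```python
# Flat memory, serial search: score every case, keep the best.
# Weights and cases are invented for illustration.

def match_score(probe, case, weights):
    """Goodness of match: agreement on each dimension times its importance."""
    return sum(w * (probe.get(f) == case.get(f)) for f, w in weights.items())

def serial_search(case_base, probe, weights):
    """Compare the probe against every case in the flat case base."""
    return max(case_base, key=lambda case: match_score(probe, case, weights))

weights = {"dispute_type": 3.0, "parties": 2.0, "disputed_object": 1.0}
case_base = [
    {"dispute_type": "physical", "parties": "children", "disputed_object": "orange"},
    {"dispute_type": "economic", "parties": "companies", "disputed_object": "market"},
]
probe = {"dispute_type": "physical", "parties": "siblings", "disputed_object": "orange"}
best = serial_search(case_base, probe, weights)
```

Because the store is flat, inserting a new case is just appending it to the list.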
Speeding Up Flat Memory, Serial Search

A drawback of this approach has been speed. There are three ways it can be made faster, short of choosing a different approach:
1) Shallow indexing: Create a separate small file containing only indexes. The indexes point to the cases that include them. Search the small index file and fully consider only the cases pointed to by the indexes. This achieves speed at the expense of accuracy.
2) Partitioning the case base: Divide the case base into smaller case bases along some important dimension. CHEF can do this because stir-fry, noodle, and souffle dishes are independent. This works if the partitions are truly disjoint and if the partitions don't grow too large themselves.
3) Parallel processing: Search in parallel using multiple processors. The only downside is the expense of the multi-processor system.
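The first speedup, shallow indexing, can be sketched as a small inverted index from index values to case positions. The CHEF-flavored dish data is an invented assumption for illustration.

```python
# Shallow indexing sketch: a small index maps each (feature, value) pair
# to the positions of cases containing it; only pointed-to cases are
# fully matched. Data and feature names are illustrative assumptions.
from collections import defaultdict

def build_index(case_base, index_features):
    """Build the small index file over only the chosen index features."""
    index = defaultdict(set)
    for i, case in enumerate(case_base):
        for f in index_features:
            if f in case:
                index[(f, case[f])].add(i)
    return index

def indexed_search(case_base, index, probe):
    """Fully consider only the cases the index points to (may miss some)."""
    candidates = set()
    for key_val in probe.items():
        candidates |= index.get(key_val, set())
    return [case_base[i] for i in sorted(candidates)]

case_base = [
    {"dish": "stir-fry", "main": "beef"},
    {"dish": "souffle", "main": "chocolate"},
    {"dish": "stir-fry", "main": "chicken"},
]
index = build_index(case_base, ["dish"])
hits = indexed_search(case_base, index, {"dish": "stir-fry"})
```

The accuracy loss the slide mentions shows up here directly: a case whose only similarities lie outside the indexed features is never considered.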
Hierarchical Memory Organizations

In a shared feature network, you cluster cases together, so that cases that share many features are near each other in memory. You build a tree with features on internal nodes and cases as leaves. It's quicker to search a tree than a list, but it's harder to build the tree than the list, because new cases must be inserted in the right places and the tree needs to stay balanced as it grows. If you don't check every case, you can't guarantee that you won't miss a good one.
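One way such a tree might be represented and searched is sketched below; the node layout and the dispute examples are assumptions made for illustration, not the chapter's own data structures.

```python
# Shared feature network sketch: internal nodes carry a shared feature,
# leaves carry cases. Search descends into subtrees whose shared feature
# the probe also has. Tree contents are invented for illustration.

class Node:
    def __init__(self, feature=None, children=None, cases=None):
        self.feature = feature          # (name, value) shared by this subtree
        self.children = children or []
        self.cases = cases or []        # non-empty only at leaves

def search_tree(node, probe):
    """Follow children whose shared feature the probe has; collect leaf cases."""
    if node.cases:
        return node.cases
    matching = [c for c in node.children
                if c.feature and probe.get(c.feature[0]) == c.feature[1]]
    results = []
    for child in (matching or node.children):  # no match: fall back to all
        results.extend(search_tree(child, probe))
    return results

tree = Node(children=[
    Node(feature=("dispute", "physical"),
         children=[Node(cases=[{"name": "orange-dispute"}])]),
    Node(feature=("dispute", "economic"),
         children=[Node(cases=[{"name": "market-dispute"}])]),
])
found = search_tree(tree, {"dispute": "physical"})
```

Pruning the economic subtree is exactly where the speed comes from, and also where a good case could silently be missed.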
A Shared Feature Network for MEDIATOR
Prioritized Shared Feature Network

A twist on the shared feature network is the prioritized shared feature network. Here, you organize the tree with the most important feature at the root and more important features higher in the tree. This helps to ensure that you consider all the cases that match on the most important features.
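The prioritization itself amounts to ordering the features by importance before building the tree, so the most important feature becomes the root split. The weights below are illustrative assumptions.

```python
# Prioritized shared feature network sketch: choose split order by
# feature importance, most important first (it becomes the root).
# The weights are invented for illustration.

def split_order(weights):
    """Order features so more important ones sit higher in the tree."""
    return sorted(weights, key=weights.get, reverse=True)

weights = {"disputed_object": 1.0, "dispute_type": 3.0, "parties": 2.0}
order = split_order(weights)
```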
A Prioritized Shared Feature Network for MEDIATOR
Discrimination Networks

The most popular type of hierarchical memory organization is the discrimination network. Cases are organized by features that help to tell cases apart. Internal nodes ask questions, and cases are filed under the nodes according to their answers to those questions.
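Traversal of such a network can be sketched as answering each node's question and following the matching branch until a case is reached. The questions and MEDIATOR-flavored cases are invented assumptions.

```python
# Discrimination network sketch: internal nodes ask a question and branch
# on the answer; cases sit at the leaves. Questions and case names are
# illustrative assumptions.

def discriminate(node, answers):
    """Descend by answering each node's question until a case is reached."""
    while isinstance(node, dict) and "question" in node:
        answer = answers.get(node["question"])
        node = node["branches"].get(answer)
        if node is None:
            return None  # unknown answer: traversal gets stuck here
    return node

network = {
    "question": "What type of dispute is it?",
    "branches": {
        "physical": {
            "question": "Who are the disputants?",
            "branches": {"children": "orange-dispute-case",
                         "countries": "sinai-dispute-case"},
        },
        "economic": "market-dispute-case",
    },
}
case = discriminate(network, {"What type of dispute is it?": "physical",
                              "Who are the disputants?": "children"})
```

The `return None` branch foreshadows the main weakness discussed on the next slide: an unanswerable question stops retrieval cold.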
Discrimination Network for MEDIATOR
More on Discrimination Networks

One advantage of this approach is that machine learning algorithms can build discrimination trees automatically. Another advantage is that it is especially natural for troubleshooting. The main disadvantage is that when you don't know the answer to a question, you get stuck. In medical domains, for example, you seldom have answers to all of the questions, but you still want to do partial matching based on incomplete information. Multiple discrimination networks help to counter this problem: you use multiple trees with the questions arranged in different orders and search all the trees in parallel.
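The multiple-networks remedy can be sketched as filing the same cases under different question orders and pooling whatever each tree can still reach. The medical questions and cases are invented assumptions for illustration.

```python
# Multiple discrimination networks sketch: the same cases filed under
# different question orders, searched together so an unanswerable
# question in one tree need not block retrieval. Data is illustrative.

def discriminate(node, answers):
    """Descend a tree; return None if an answer is missing along the way."""
    while isinstance(node, dict):
        node = node["branches"].get(answers.get(node["question"]))
        if node is None:
            return None
    return node

# Two trees over the same two cases, with the questions in opposite orders.
tree_a = {"question": "symptom?", "branches": {
    "fever": {"question": "age?",
              "branches": {"child": "case-1", "adult": "case-2"}}}}
tree_b = {"question": "age?", "branches": {
    "child": {"question": "symptom?", "branches": {"fever": "case-1"}},
    "adult": "case-2"}}

def multi_search(trees, answers):
    """Search all trees; return every case some tree can still reach."""
    return {c for c in (discriminate(t, answers) for t in trees) if c}

# "symptom?" is unanswered, so tree_a gets stuck, but tree_b still retrieves.
found = multi_search([tree_a, tree_b], {"age?": "adult"})
```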
More on Discrimination Networks

CHEF used multiple discrimination networks. CHEF's indexes were the internal nodes of the trees; they were used both to tell what was important about cases and as physical pointers to cases for reasons of efficiency. Hierarchical memory organizations are not as important for efficiency as they used to be. However, when you use a flat memory organization, it's important that the information that would otherwise be stored in hierarchical indexes gets moved into the cases themselves.