1
Application of the A* Informed Search Heuristic to Finding Information on the World Wide Web
Presenter: Daniel J. Sullivan
Date: April 30, 2003
Location: SL210
For: CS590 Intelligent Systems
Related Subject Areas: Artificial Intelligence, Graphs, Epistemology, Knowledge Management and Information Filtering
2
Problem Domain
This project explores the application of the A* heuristic search function to the problem of document retrieval and classification based upon a relevance criterion. The work includes a modification of A* and proposes a means of determining relevance as a function of independent textual mappings.
3
The Principal Objectives of this Project
1. The problem of retrieving useful information from the WWW.
2. The A* (A-Star) heuristic approach to searching a state space.
3. The development of a simple relevance heuristic which does not require a large sample base.
4. The development and testing of a basic search agent.
4
The A* Heuristic
1. An informed search technique.
2. A function which evaluates the total cost of a path based upon the actual cost G(n) from the start to the current node and the estimated cost H(n) from the current node to the goal node.
3. Requires an effective means of predicting expected path cost, and that means must be admissible: H(n) cannot overestimate the cost of reaching the goal node.
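As a hedged illustration only (not the thesis implementation), the sketch below runs A* over a tiny made-up graph in Perl, the language the agent is written in; the edge costs and the admissible heuristic table are invented for the example.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy graph: %edges holds actual step costs G, %h is an admissible
# estimate of remaining cost to goal G. All values are illustrative.
my %edges = (
    S => { A => 1, B => 4 },
    A => { B => 2, G => 6 },
    B => { G => 3 },
);
my %h = ( S => 5, A => 4, B => 2, G => 0 );   # never overestimates

sub a_star {
    my ($start, $goal) = @_;
    my @open   = ( [ $start, 0, $h{$start} ] );   # [node, G(n), F(n)]
    my %best_g = ( $start => 0 );
    while (@open) {
        @open = sort { $a->[2] <=> $b->[2] } @open;   # lowest F(n) first
        my ($node, $g) = @{ shift @open };
        return $g if $node eq $goal;
        for my $next (keys %{ $edges{$node} || {} }) {
            my $g2 = $g + $edges{$node}{$next};
            next if exists $best_g{$next} && $best_g{$next} <= $g2;
            $best_g{$next} = $g2;
            push @open, [ $next, $g2, $g2 + $h{$next} ];
        }
    }
    return undef;   # goal unreachable
}

print a_star('S', 'G'), "\n";   # prints 6 (S -> A -> B -> G)
```

Because H(n) never overestimates the true remaining cost, the first time the goal is removed from the queue its G(n) value is optimal.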
5
A* Function with User Set Time Limit
1. F(n) = G(n) + H(n), where n = time in seconds.
2. G(n) = total time elapsed.
3. DV (Document Value) = Relevance * Size (in bytes).
4. CRI = Current Bytes of Relevant Information.
5. BP (Best Path) = Max_Bandwidth * Total_Time_Available; this is the perfect case and serves as the admissibility criterion.
6. H(n) = (BP - CRI) / DV, which yields the number of seconds left if this path is followed.
7. Links with the lowest total time left to reach the information goal are inserted into the priority queue and are explored first.
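A minimal sketch of that scoring function, assuming invented values for bandwidth, the time budget, and the candidate document; none of these numbers come from the project.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative values only (assumptions, not figures from the project).
my $max_bandwidth = 50_000;     # bytes per second
my $time_limit    = 300;        # user-set search budget in seconds
my $elapsed       = 45;         # G(n): total time elapsed so far
my $cri           = 1_200_000;  # CRI: relevant bytes gathered so far
my $relevance     = 0.4;        # relevance score of the candidate document
my $size          = 150_000;    # candidate document size in bytes

my $bp = $max_bandwidth * $time_limit;   # BP: perfect-case byte total
my $dv = $relevance * $size;             # DV: document value
my $h  = ($bp - $cri) / $dv;             # H(n) as defined on the slide
my $f  = $elapsed + $h;                  # F(n) = G(n) + H(n)

printf "F(n) = %.1f (lower scores leave the priority queue first)\n", $f;
```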
6
Relevance
1. The technique used in this project is simply a comparison of text sample features.
2. It begins with a single sample, without specifying how large the sample needs to be.
3. It uses more than one functional mapping for comparison and expects that the weights assigned to each mapping accurately reflect their specificity.
7
Text Document Mappings
F1 : S → WL, where S is the sample document and WL is the set of ordered pairs (a, b) such that a is a word in S and b is its relative frequency. This is the most basic lexical comparison between documents.
F2 : S → WC, where S is the sample document and WC is a set of ordered pairs (a, b) such that a is a content-related token from S (a ∈ C) and b is the relative frequency of this token.
F3 : S → TC, where S is the same as above and TC is a set of ordered pairs (a, b) such that a is an operator token (a ∈ O) and b is the relative frequency of a.
F4 : S → OP, where S is the same as above and OP is a set of 3-tuples (a, b, c) such that a is in S, b is in S, b is one place ahead of a in the ordering (b = a + 1), and c is the relative frequency of this pair of words.
F5 : S → MXST, where S is the same as above and MXST is a set of 3-tuples, a subset of OP, which represents the maximum spanning tree connecting all words in the document based upon their ordering.
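A minimal Perl sketch of an F1-style mapping and a simple way to compare two of its outputs; the helper names are hypothetical, and the overlap measure (shared frequency mass) is one plausible comparison, not necessarily the one used in the thesis.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# F1-style mapping: document text -> { word => relative frequency }.
sub word_frequencies {
    my ($text) = @_;
    my (%count, $total);
    for my $word (grep { length } split /\W+/, lc $text) {
        $count{$word}++;
        $total++;
    }
    return { map { $_ => $count{$_} / $total } keys %count };
}

# Shared frequency mass of two maps: 1.0 means identical distributions.
sub overlap {
    my ($fa, $fb) = @_;
    my $score = 0;
    for my $word (keys %$fa) {
        next unless exists $fb->{$word};
        $score += $fa->{$word} < $fb->{$word} ? $fa->{$word} : $fb->{$word};
    }
    return $score;
}

my $sample = word_frequencies("the quick brown fox jumps over the lazy dog");
my $page   = word_frequencies("the slow brown dog naps under the old tree");
printf "F1 overlap: %.2f\n", overlap($sample, $page);   # about 0.44
```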
8
Value of Different Mappings?
Document 1: the original sample document. The user wants to find a maximum of related materials, related to this sample.
Document 2: a document downloaded from the WWW, which may contain related information.
The set produced by F3 above should show similarity for most documents, even those which are not really relevant. But are there small distinctions which can be used to judge similarity? The diagonal region indicates the intersection between the sets. For this case, let's assume this is a comparison using the F2 mapping: the intersection is small, but clearly not of the same magnitude as testing whether these documents use a similar frequency of operators. In this case, it is obvious we would not want to weight these mappings the same.
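One way the weighting could look in code, as a sketch only: both the weights and the per-mapping scores below are invented, and the slide does not say how the thesis assigns its actual weights.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Assumed weights: higher for specific mappings (F2), lower for mappings
# that match almost any document (F3). Invented for illustration.
my %weight = ( F1 => 0.25, F2 => 0.40, F3 => 0.05, F4 => 0.20, F5 => 0.10 );

# Example per-mapping overlap scores for one downloaded page (invented).
my %sim = ( F1 => 0.31, F2 => 0.12, F3 => 0.88, F4 => 0.09, F5 => 0.15 );

my $relevance = 0;
$relevance += $weight{$_} * $sim{$_} for keys %weight;
printf "Weighted relevance: %.3f\n", $relevance;   # 0.203 for these values
```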
9
Reasons to work with Web Search Agents…
1. To investigate general and common problems for all forms of intelligence.
2. To experiment in a domain where machines are on a more 'equal' footing in terms of perception.
3. To confront the common and real problem of information overload.
10
What my program does…
This program takes a sample of text (possibly a very small sample) and conducts a search for similar text documents (HTML format) on the World Wide Web.
12
Principal Objects in Design
HEURISTICS: Contains all of the code related to the main investigation of this thesis, including A* as implemented.
PRIORITY QUEUE: Ensures that the links with the lowest A* value are visited first.
CONNECTION OBJECT: Opens a connection with a web site, downloads the information, and returns it.
TEXT PROCESSOR: Prepares information for processing, removes links, and initializes key data points.
DATABASE: Manages important data which needs to be persistent.
LINK OBJECT: The actual data type managed by the Priority Queue; it contains two values, a hyperlink and an A* score.
VISIT LIST: A hash table (simply a hash as implemented by Perl) which ensures that there are no duplicate visits.
MAIN: All of the functionality, including the execution of A*, is coordinated in the Main module.
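A compact Perl reconstruction of the Link Object, Priority Queue, and Visit List as the slide describes them; this is a sketch from those descriptions, not the original source code.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Link Object: a hyperlink paired with its A* score (seconds, lower is better).
# Priority Queue: kept sorted so the lowest-score link is always at the front.
my @queue;

sub pq_insert {
    my ($url, $score) = @_;
    push @queue, { url => $url, score => $score };
    @queue = sort { $a->{score} <=> $b->{score} } @queue;
}

sub pq_remove_min { return shift @queue; }

# Visit List: a plain Perl hash guarantees each URL is visited only once.
my %visited;

pq_insert('http://example.com/a.html', 240);
pq_insert('http://example.com/b.html', 180);

while (my $link = pq_remove_min()) {
    next if $visited{ $link->{url} }++;
    printf "visiting %s (score %d)\n", $link->{url}, $link->{score};
}
```

Re-sorting the array on every insert is the simplest behavior that keeps the lowest score at the front; a heap would be the usual optimization.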
13
Simplified High-Level Process Flow
1. Process the text sample and create comparison tables.
2. Submit an initialization query to an Internet search engine.
3. Place the returned links in the priority queue with an initially low seconds score.
4. Remove the lowest-score link from the priority queue and download the information.
5. Apply the A* function to the retrieved data and insert all links into the priority queue with their scores.
6. Has the time limit been reached? If no, return to step 4. If yes, halt and process the retrieved data as directed by the user and the purpose of the search.
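The loop itself compresses to a few lines of Perl; fetch_page and a_star_seconds below are hypothetical stubs standing in for the Connection Object and the Heuristics module so the skeleton runs on its own.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @queue;      # priority queue of { url, score }
my %visited;    # visit list
my $time_limit = 300;    # user-set budget in seconds (assumed)
my $start      = time;

# Stubs standing in for the Connection Object and Heuristics module.
sub fetch_page     { return ('page text', 'http://example.com/next.html') }
sub a_star_seconds { return 200 + int rand 100 }

sub pq_insert {
    push @queue, { url => $_[0], score => $_[1] };
    @queue = sort { $a->{score} <=> $b->{score} } @queue;
}

# Seed the queue as the initialization query to a search engine would.
pq_insert('http://example.com/seed.html', 1);

while (@queue && time - $start < $time_limit) {
    my $link = shift @queue;                     # lowest A* score first
    next if $visited{ $link->{url} }++;
    my ($text, @links) = fetch_page($link->{url});
    pq_insert($_, a_star_seconds()) for @links;  # rescore and enqueue
}
# Halt: retrieved data would now be processed as the user directed.
```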
14
Lessons Learned
Use an ANN for the relevance function.
Investigate whether this problem is better solved using hill-climbing.
Use Java and distributed objects to break the tasks down further and to enable simultaneous processing; many tasks can be performed at the same time.