
1 Evaluation of Agent Building Tools and Implementation of a Prototype for Information Gathering
Leif M. Koch, University of Waterloo, August 2001

2 Content
- Advantages of Agent Building Toolkits
- Benchmark for ABTs
- Information Gathering System
- Conclusions and Future Work

3 Multi-Agent Systems
- Dynamic environments
  – # of agents can change
  – agents can be specialized
  – resource type and location can change
- Scalability
- Modularity

4 MAS Problem Domains
- Application Domain – example: retrieve information, compute relevance
- Agent Domain – example: find other agents, compose messages, process messages

5 Agent Building Toolkits
- Libraries for the agent-specific domain
- GUI for rapid MAS creation
- Runtime analysis for agencies
- Standardized communication
  – KQML (Knowledge Query and Manipulation Language)
  – FIPA ACL (Agent Communication Language)

6 Selection of an ABT
- Large # of ABTs (http://www.agentbuilder.com/AgentTools)
- Different concepts
- Different standards
- Different performance
=> Benchmark for ABTs

7 Benchmark for ABTs
Feature Quality of the ABT + Performance of the constructed MAS => Benchmark Result

8 Feature Quality
- Comprises several categories
  – Coordination (assignment, negotiation)
  – Communication (standards)
  – Connectivity (name service, yellow pages)
  – Scalability (# of agents per JVM)
  – Usability (documentation, examples)

9 Feature Quality: Categories
- Each category comprises several parameters (e.g. assignment, negotiation)
- Each parameter p_k is assigned a value 0..4
- Category value c_i = Σ_k p_k
- Weighted category sum f_s = Σ_i w_i c_i
- Feature Quality Q = f_s / f_max
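A minimal sketch of this scoring scheme in Python; the category weights and parameter ratings below are made-up placeholders, since the actual values are not shown in the transcript:

    # Minimal sketch (not from the thesis): computing the feature-quality score Q.
    # Category weights and parameter ratings (0..4) are placeholder values.
    categories = {
        "coordination":  {"weight": 3, "scores": [4, 2]},   # e.g. assignment, negotiation
        "communication": {"weight": 2, "scores": [3]},
        "connectivity":  {"weight": 2, "scores": [4, 1]},
        "scalability":   {"weight": 3, "scores": [2]},
        "usability":     {"weight": 1, "scores": [3, 4]},
    }

    f_s   = sum(c["weight"] * sum(c["scores"]) for c in categories.values())      # Σ w_i * c_i
    f_max = sum(c["weight"] * 4 * len(c["scores"]) for c in categories.values())  # every parameter at 4
    Q     = f_s / f_max
    print(f"Q = {Q:.2f}")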

10 MAS Performance
- User requests information
- Execution time of the MAS is measured
- # of agents and resources in the MAS is varied
(Diagram: trigger, user agent, resource agent, information retrieval)
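A minimal timing sketch, not taken from the thesis; run_one_request() is a hypothetical placeholder for triggering the MAS and waiting for the retrieved information:

    # Minimal sketch (not from the thesis): timing one benchmark request.
    # run_one_request() is a hypothetical placeholder, not a real toolkit call.
    import time

    def run_one_request():
        pass   # placeholder: send the trigger to the user agent, wait for the result

    start = time.perf_counter()
    run_one_request()
    elapsed = time.perf_counter() - start
    print(f"execution time: {elapsed:.3f} s")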

11 Benchmark System Architecture
(Diagram: two JVMs; benchmark application, agent starter, and facilitator on one, interface agents 1..n and resource agents 1..m on the other)

12 Benchmark Computation
B = (w * P) / Q
(Feature Quality Q + Weighted Performance P => Benchmark Result B)
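A minimal sketch of the combination step, with placeholder values for w, P, and Q:

    # Minimal sketch (not from the thesis): combining weighted performance P and
    # feature quality Q into the benchmark result B = (w * P) / Q.
    # The values of w, P, and Q below are placeholders.
    w, P, Q = 1.0, 12.4, 0.68
    B = (w * P) / Q
    print(f"B = {B:.2f}")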

13 Tested Toolkits (1): Zeus
– GUI for agent development
– rulebase and actions integrated
– good support
– good documentation
– difficulties with large # of agents or resources
– different timing concept

14 Tested Toolkits (2): FIPA-OS
– runtime analysis tools
– implements FIPA standards
– rulebase optional
– good documentation
– concept of actions easy to learn
– poor scalability

15 Tested Toolkits (3): JADE
– runtime analysis tools
– good documentation
– difficulties with facilitator requests
– apparently very good performance

16 Benchmark Results

17 Information Gathering System
Goal: more relevant information
Idea:
– agents are connected to search engines
– relevance of results is computed
– user provides feedback on relevance
(Diagram: web browser, interface agent, resource agents, and the search engines AltaVista, Excite, Google)

18 Relevance Computation: Vector Space Representation
– stop words (on, and, etc.) removed
– words reduced to stems
– frequency of stems in the document set computed
– weights for stems computed using TF-IDF (term frequency – inverse document frequency)
– the weights represent the document

19 Relevance: Example (1)
Document 1: A simple example!
Document 2: Another example!
- Step 1: Remove stop words
  – doc 1: simple example
  – doc 2: another example

20 Relevance: Example (2)
- Step 2: Create stems
  – doc 1: simpl exampl
  – doc 2: another exampl
  – list of stems: simpl, exampl, another
- Step 3: Frequency f_ik of stem k in doc i
  – f_ik = 1 (each stem occurs at most once per document here)

21 Relevance: Example (3)
- Step 4: Compute weights
  – inverse document frequency IDF_i = log(N / d_i)
  – N = # of docs, d_i = # of docs containing stem i
  – w_ik = f_ik * IDF_k
  – IDF_simpl = log(2/1) = log 2 = IDF_another
  – IDF_exampl = log(2/2) = 0

22 Relevance: Example (4)
- Step 5: Create vectors
  – list of stems: [simpl, exampl, another]
  – doc 1: [log 2, 0, 0]
  – doc 2: [0, 0, log 2]
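A minimal sketch (not the prototype's implementation) that reproduces steps 1-5 for the two example documents of slides 19-22; the stop-word list and the one-line stemming rule are simplifying assumptions, not the stemmer actually used:

    # Minimal sketch (not the prototype's implementation): TF-IDF vectors for the
    # two example documents. Stop-word list and stemming rule are simplified stand-ins.
    import math
    import re

    STOP_WORDS = {"a", "on", "and", "the"}

    def stem(word):
        # toy rule: drop a trailing 'e' (simple -> simpl, example -> exampl)
        return word[:-1] if word.endswith("e") else word

    def tokenize(text):
        words = re.findall(r"[a-z]+", text.lower())
        return [stem(w) for w in words if w not in STOP_WORDS]

    docs = ["A simple example!", "Another example!"]
    doc_stems = [tokenize(d) for d in docs]
    stems = sorted(set(s for ds in doc_stems for s in ds))

    N = len(docs)
    idf = {s: math.log(N / sum(s in ds for ds in doc_stems)) for s in stems}
    vectors = [[ds.count(s) * idf[s] for s in stems] for ds in doc_stems]

    print(stems)     # ['another', 'exampl', 'simpl']
    print(vectors)   # 'exampl' occurs in both docs, so its weight is 0 in each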

23 Relevance: Proximity
- Distance between vectors indicates relevance
- The prototype computes the cosine between vectors
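A minimal cosine-similarity sketch (not the prototype's code); the two example vectors are placeholders:

    # Minimal sketch (not the prototype's code): cosine between two weight vectors.
    import math

    def cosine(u, v):
        dot  = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    # placeholder vectors; in the prototype these would be TF-IDF weight vectors
    u = [0.3, 0.0, 0.7]
    v = [0.5, 0.5, 0.0]
    print(round(cosine(u, v), 3))   # 1.0 = same direction, 0.0 = no shared stems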

24 Relevance: Feedback
- Query is a vector itself
- Feedback positive: weights of the doc are added
- Feedback negative: weights of the doc are subtracted
- The IGS saves the weights and compares queries each time

25 Feedback: Example (1)
- Stems: [weather, waterloo, station, ontario]
- Query: [weather, waterloo]
- Weights:
  – query: [0.5, 0.5, 0, 0]
  – doc 1: [0.3, 0, 0.7, 0]
  – doc 2: [0, 0.6, 0, 0.4]
- The IGS presents document 1

26 Feedback: Example (2)
- User states the result is relevant:
  – updated query: [0.8, 0.5, 0.7, 0]
  – normalized: [0.4, 0.25, 0.35, 0]
- The next time [weather, waterloo] is queried, the updated query weights are used
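A minimal sketch of the positive-feedback update (not the prototype's code), reproducing the numbers from slides 25 and 26; the normalization here divides by the sum of the weights, which matches the normalized values shown:

    # Minimal sketch (not the prototype's code): positive-feedback query update.
    # The presented document's weights are added to the query, then re-normalized.
    query = [0.5, 0.5, 0.0, 0.0]   # stems: [weather, waterloo, station, ontario]
    doc1  = [0.3, 0.0, 0.7, 0.0]

    updated    = [q + d for q, d in zip(query, doc1)]
    total      = sum(updated)
    normalized = [w / total for w in updated]

    print([round(w, 2) for w in updated])      # [0.8, 0.5, 0.7, 0.0]
    print([round(w, 2) for w in normalized])   # [0.4, 0.25, 0.35, 0.0]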

27 Future Work
- Benchmark gathers information on toolkits automatically, using agents
- Resource agents connect to other information resources (databases)
- Additional layer to process meta-information

28 Conclusions (1)
- Agent Building Tools support developers significantly
- Zeus is easier to start with, but some changes are difficult (protocols)
- The benchmark can reduce evaluation time
- Some problems with a toolkit might not be revealed during the benchmark process

29 Conclusions (2)
- The information gathering system successfully deals with a changing environment
- Feedback on a single document can result in a tedious learning phase

