Learning to Share Meaning in a Multi-Agent System (Part I)
Ganesh Padmanabhan
Article
Williams, A. B., "Learning to Share Meaning in a Multi-Agent System," Journal of Autonomous Agents and Multi-Agent Systems, Vol. 8, No. 2, pp. 165-193, March 2004. (Most downloaded article in the journal.)
Overview
Introduction (Part I)
Approach (Part I)
Evaluation (Part II)
Related Work (Part II)
Conclusions and Future Work (Part II)
Discussion
Introduction
One common ontology: does that work?
If not, what issues do we face when agents have similar views of the world but different vocabularies?
Goal: reconciling diverse ontologies so that agents can communicate effectively when appropriate.
Diverse Ontology Paradigm: Questions Addressed
"How do agents determine if they know the same semantic concepts?"
"How do agents determine if their different semantic concepts actually have the same meaning?"
"How can agents improve their interpretation of semantic concepts by recursively learning missing discriminating attributes?"
"How do these methods affect the group performance at a given collective task?"
Ontologies and Meaning
Operational definitions needed:
Conceptualization, ontology, universe of discourse, functional basis set, relational basis set, object, class, concept description, meaning, object constant, semantic concept, semantic object, semantic concept set, distributed collective memory
Conceptualization
All objects that an agent presumes to exist and their interrelationships with one another.
A tuple: universe of discourse, functional basis set, relational basis set.
Ontology
A specification of a conceptualization.
A mapping of language symbols to an agent's conceptualization:
Terms used to name objects
Functions to interpret objects
Relations in the agent's world
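A minimal sketch of these two definitions (the conceptualization as a tuple, the ontology as a symbol mapping over it). The class and method names here (Conceptualization, Ontology, denotes) are illustrative, not the paper's formalism:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Conceptualization:
    """The tuple defined above: objects the agent presumes to exist,
    plus the functions and relations over them."""
    universe_of_discourse: frozenset
    functional_basis_set: frozenset
    relational_basis_set: frozenset

@dataclass
class Ontology:
    """A specification of a conceptualization: language symbols mapped onto it."""
    conceptualization: Conceptualization
    vocabulary: dict = field(default_factory=dict)  # symbol -> object/function/relation

    def denotes(self, symbol: str):
        """Look up what a language symbol names, if anything."""
        return self.vocabulary.get(symbol)

# An object in the UOD exists whether or not any symbol names it; only once a
# symbol enters the vocabulary can the agent talk about it (cf. the UOD slide).
uod = frozenset({"obj1", "obj2"})
c = Conceptualization(uod, frozenset(), frozenset())
o = Ontology(c, {"widget": "obj1"})
assert o.denotes("widget") == "obj1" and o.denotes("gadget") is None
```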
Object
Anything we can say something about.
Concrete or abstract classes
Primitive or composite
Fictional or non-fictional
UOD and Ontology
"The difference between the UOD and the ontology is that the UOD are objects that exist but until they are placed in an agent's ontology, the agent does not have a vocabulary to specify objects in the UOD."
Forming a Conceptualization
The agent's first step in looking at the world.
Declarative knowledge
Declarative semantics
An interpretation function maps an object in a conceptualization to language elements.
Distributed Collective Memory
Approach Overview
Assumptions
Agents' use of supervised inductive learning to learn representations for their ontologies
Mechanics of discovering similar semantic concepts, translation, and interpretation
Recursive Semantic Context Rule Learning for improved performance
Key Assumptions
"Agents live in a closed world represented by distributed collective memory."
"The identity of the objects in this world are accessible to all agents and can be known by the agents."
"Agents use a knowledge structure that can be learned using objects in the distributed collective memory."
"The agents do not have any errors in their perception of the world even though their perceptions may differ."
Semantic Concept Learning
Individual learning, i.e., an agent learning its own ontology
Group learning, i.e., one agent learning that another agent knows a particular concept
WWW Example Domain
Web page = specific semantic object
Groupings of web pages = semantic concept or class (analogous to bookmark organization)
Words and HTML tags are taken to be boolean features, so each web page is represented by a boolean vector (see the sketch below).
Pipeline: concepts → concept vectors → learner → semantic concept descriptions (rules)
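A sketch of that page representation, assuming a toy feature vocabulary. FEATURES and the sample page are made up for illustration; the paper's actual feature set is learned from the page collection:

```python
# Fixed feature set: words and HTML tags observed across the page collection.
FEATURES = ["<h1>", "<table>", "agent", "ontology", "soccer"]

def to_boolean_vector(page_text: str) -> list[bool]:
    """Represent a web page as a boolean vector: True iff the feature occurs."""
    text = page_text.lower()
    return [feature in text for feature in FEATURES]

page = "<h1>Multi-agent systems</h1> Agents share an ontology."
print(to_boolean_vector(page))  # [True, False, True, True, False]
```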
Ontology Learning
Supervised inductive learning
Output = semantic concept descriptions (SCDs)
SCDs are rules with a left-hand side (LHS) and a right-hand side (RHS) (see the sketch below).
Object instances are discriminated based on the tokens they contain, sometimes resulting in "...a peculiar learned descriptor vocabulary."
Each rule carries a certainty value.
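A hedged sketch of one such rule as a data structure. The paper's actual rule syntax and certainty semantics may differ; the class name, fields, and example values here are illustrative:

```python
from dataclasses import dataclass

@dataclass
class SemanticConceptDescription:
    """One learned rule: IF the LHS features hold THEN the instance is `rhs`."""
    lhs: dict         # feature name -> required truth value
    rhs: str          # concept label the rule concludes
    certainty: float  # confidence the learner attached to this rule

    def matches(self, page_features: dict) -> bool:
        """True when every LHS condition holds for this page's feature values."""
        return all(page_features.get(f) == v for f, v in self.lhs.items())

rule = SemanticConceptDescription(
    lhs={"ontology": True, "soccer": False}, rhs="research", certainty=0.9)
print(rule.matches({"ontology": True, "soccer": False, "agent": True}))  # True
```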
Locating Similar Semantic Concepts
1) An agent queries another agent for a concept by showing it examples.
2) The second agent receives the examples and uses its own conceptualization to determine whether it knows the concept (K), might know it (M), or does not know it (D).
3) In cases K and M, the second agent sends back examples of what it thinks the queried concept is.
4) The first agent receives the examples and interprets them using its own conceptualization to verify that the two agents are talking about the same concept.
5) If verified, the querying agent records in its own knowledge base that the other agent knows the concept.
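The exchange can be read as one round trip. In this structural sketch the agent objects and their methods (examples_of, similarity_estimate, verifies, knowledge_base) are placeholders, not the paper's API:

```python
def locate_similar_concept(querier, responder, concept_label):
    """Steps 1-5 from the slide as a single round trip (names are illustrative)."""
    # 1) Query by example: send instances of the concept, not its label.
    examples = querier.examples_of(concept_label)

    # 2) The responder classifies the examples against its own concepts.
    verdict, its_label = responder.similarity_estimate(examples)  # 'K', 'M', or 'D'
    if verdict == "D":
        return None

    # 3) For K or M, the responder sends back examples of its candidate concept.
    counter_examples = responder.examples_of(its_label)

    # 4) The querier interprets those examples with its own conceptualization.
    if not querier.verifies(concept_label, counter_examples):
        return None

    # 5) Record the group knowledge: the other agent knows this concept.
    querier.knowledge_base.add((responder.name, "knows", concept_label))
    return its_label
```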
Concept Similarity Estimation
Even when two agents both know a particular concept, it is feasible, and with a large DCM probable, that the sets of objects defining the concept differ completely.
We cannot simply assume that the target functions each agent generates by supervised inductive learning from examples will be the same.
We therefore need to define other ways to estimate similarity.
Concept Similarity Estimation Function
Input: a sample set of objects representing a concept held by another agent.
Output: Knows Concept (K), Might Know Concept (M), or Doesn't Know Concept (D).
The agent tries to map the set of objects to each of its own concepts using its description rules; each concept receives an interpretation value, which is compared with thresholds to make the K, M, or D determination.
The interpretation value for one concept is the proportion of objects in the CBQ that were inferred to be that concept (see the sketch below).
Positive interpretation threshold = how often this concept description correctly determined an object in the training set to belong to this concept.
Negative interpretation threshold.
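A sketch of the decision for one concept. The rule objects are assumed to look like the SCD sketch above, and the exact threshold comparisons (at or above the positive threshold means K, strictly above the negative one means M, otherwise D) are assumptions, not taken from the paper:

```python
def interpretation_value(cbq_objects, concept_rules) -> float:
    """Proportion of the queried objects that this concept's rules classify as it."""
    hits = sum(1 for obj in cbq_objects
               if any(rule.matches(obj) for rule in concept_rules))
    return hits / len(cbq_objects)

def similarity_verdict(iv: float, pos_threshold: float, neg_threshold: float) -> str:
    """Map an interpretation value to K / M / D (threshold semantics assumed)."""
    if iv >= pos_threshold:
        return "K"  # knows the concept
    if iv > neg_threshold:
        return "M"  # might know the concept
    return "D"      # doesn't know the concept
```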
Group Knowledge
Individual knowledge
Verification
Translating Semantic Concepts
Uses the same algorithm as locating similar concepts in other agents.
Two concepts determined to be the same can be translated regardless of their labels in the respective ontologies.
Difference: after verification, the knowledge is stored as "Agent B knows my semantic concept X as Y."
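A small sketch of storing and using such an entry. The table layout and names (record_translation, translate_for, the example labels) are illustrative, not the paper's representation:

```python
# Translation table: (other agent, my concept label) -> that agent's label.
translations: dict[tuple[str, str], str] = {}

def record_translation(agent: str, my_label: str, their_label: str) -> None:
    """After verification: "Agent B knows my semantic concept X as Y"."""
    translations[(agent, my_label)] = their_label

def translate_for(agent: str, my_label: str) -> str | None:
    """Return the other agent's label for my concept, if one was verified."""
    return translations.get((agent, my_label))

record_translation("AgentB", "research-papers", "publications")
print(translate_for("AgentB", "research-papers"))  # publications
```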