Learning to Share Meaning in a Multi-Agent System (Part I) Ganesh Padmanabhan.

Article Williams, A.B., "Learning to Share Meaning in a Multi-Agent System", Journal of Autonomous Agents and Multi-Agent Systems, Vol. 8, No. 2, March. (Most downloaded article in the journal.)

Overview Introduction (Part I) Approach (Part I) Evaluation (Part II) Related Work (Part II) Conclusions and Future Work (Part II) Discussion

Introduction One common ontology: does that work? If not, what issues do we face when agents have similar views of the world but different vocabularies? Goal: reconcile diverse ontologies so that agents can communicate effectively when appropriate.

Diverse Ontology Paradigm: Questions Addressed "How do agents determine if they know the same semantic concepts?" "How do agents determine if their different semantic concepts actually have the same meaning?" "How can agents improve their interpretation of semantic concepts by recursively learning missing discriminating attributes?" "How do these methods affect the group performance at a given collective task?"

Ontologies and Meaning Operational definitions are needed for: conceptualization, ontology, universe of discourse, functional basis set, relational basis set, object, class, concept description, meaning, object constant, semantic concept, semantic object, semantic concept set, distributed collective memory.

Conceptualization All objects that an agent presumes to exist and their interrelationships with one another. Represented as a tuple: (universe of discourse, functional basis set, relational basis set).
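As a minimal sketch of this tuple structure (the field values below are invented for illustration, not taken from the paper):

```python
from collections import namedtuple

# A conceptualization as a 3-tuple: the objects an agent presumes to
# exist, plus the functions and relations defined over them.
Conceptualization = namedtuple(
    "Conceptualization",
    ["universe_of_discourse", "functional_basis_set", "relational_basis_set"],
)

c = Conceptualization(
    universe_of_discourse={"page1", "page2"},   # objects presumed to exist
    functional_basis_set=set(),                 # functions over those objects
    relational_basis_set={"links_to"},          # relations between objects
)
```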

Ontology Specification of a conceptualization: a mapping of language symbols to an agent's conceptualization. Terms used to name objects Functions to interpret objects Relations in the agent's world

Object Anything we can say something about Concrete or abstract → classes Primitive or composite Fictional or non-fictional

UOD and Ontology "The difference between the UOD and the ontology is that the UOD are objects that exist but until they are placed in an agent's ontology, the agent does not have a vocabulary to specify objects in the UOD."

Forming a Conceptualization The agent's first step in looking at the world. Declarative knowledge Declarative semantics An interpretation function maps an object in a conceptualization to language elements.

Distributed Collective Memory

Approach Overview Assumptions Agents' use of supervised inductive learning to learn representations for their ontologies. Mechanics of discovering similar semantic concepts, translation, and interpretation. Recursive semantic context rule learning for improved performance.

Key Assumptions "Agents live in a closed world represented by distributed collective memory." "The identity of the objects in this world are accessible to all agents and can be known by the agents." "Agents use a knowledge structure that can be learned using objects in the distributed collective memory." "The agents do not have any errors in their perception of the world even though their perceptions may differ."

Semantic Concept Learning Individual learning, i.e. learning one's own ontology Group learning, i.e. one agent learning that another agent knows a particular concept

WWW Example Domain Web page = specific semantic object Groupings of web pages = semantic concept or class (analogous to bookmark organization) Words and HTML tags are taken to be boolean features; each web page is represented by a boolean vector. Concepts → concept vectors → learner → semantic concept descriptions (rules)
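A minimal sketch of this boolean encoding (the vocabulary and page tokens below are made up for illustration; the paper's actual feature set is not reproduced here):

```python
def to_boolean_vector(page_tokens, vocabulary):
    """Encode a web page as a boolean feature vector: one entry per
    vocabulary token (word or HTML tag), True if it occurs on the page."""
    tokens = set(page_tokens)
    return [t in tokens for t in vocabulary]

# Illustrative vocabulary and page; repeated tokens collapse to one feature.
vocabulary = ["<title>", "agent", "ontology", "recipe"]
page = ["<title>", "learning", "agent", "agent"]
print(to_boolean_vector(page, vocabulary))  # [True, True, False, False]
```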

Ontology Learning Supervised inductive learning Output = semantic concept descriptions (SCDs) SCDs are rules with a left-hand side (LHS) and a right-hand side (RHS) Object instances are discriminated based on the tokens they contain, sometimes resulting in "…a peculiar learned descriptor vocabulary." Certainty value
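One hedged sketch of how such rule-based SCDs might look and be applied; the rule set, feature names, and certainty values below are invented for illustration, not taken from the paper:

```python
# Hypothetical semantic concept descriptions (SCDs): each rule has an
# LHS (required boolean features), an RHS (the concept label), and a
# certainty value. Feature names and certainties are illustrative only.
scd = [
    {"lhs": {"agent", "ontology"}, "rhs": "AI", "certainty": 0.9},
    {"lhs": {"recipe"}, "rhs": "Cooking", "certainty": 0.8},
]

def classify(page_tokens, rules):
    """Fire every rule whose LHS features all occur on the page;
    return the (concept, certainty) of the most certain match."""
    tokens = set(page_tokens)
    matches = [r for r in rules if r["lhs"] <= tokens]
    if not matches:
        return None
    best = max(matches, key=lambda r: r["certainty"])
    return best["rhs"], best["certainty"]

print(classify(["agent", "ontology", "web"], scd))  # ('AI', 0.9)
```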

Locating Similar Semantic Concepts 1) The querying agent asks another agent about a concept by showing it examples. 2) The second agent receives the examples and uses its own conceptualization to determine whether it knows the concept (K), maybe knows it (M), or doesn't know it (D). 3) In cases K and M, the second agent sends back examples of what it thinks the queried concept is. 4) The first agent receives these examples and interprets them using its own conceptualization to "verify" that the two agents are talking about the same concept. 5) If verified, the querying agent records in its own knowledge base that the other agent knows its concept.
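The five steps might be sketched roughly as follows. Everything here is an assumption for illustration: the `Agent` class, the 0.75/0.25 thresholds, and the use of raw example-set overlap in place of the paper's learned rule descriptions.

```python
# Hypothetical sketch of the five-step concept-location protocol.
# Each agent maps concept labels to sets of object ids drawn from the
# distributed collective memory; overlap fractions stand in for the
# paper's rule-based interpretation.

class Agent:
    def __init__(self, name, concepts):
        self.name = name
        self.concepts = concepts          # label -> set of object ids
        self.group_knowledge = {}         # (agent, my_label) -> their_label

    def interpret(self, examples):
        """Step 2: match queried examples against own concepts."""
        for label, objs in self.concepts.items():
            overlap = len(examples & objs) / len(examples)
            if overlap >= 0.75:
                return "K", label, objs   # knows the concept
            if overlap >= 0.25:
                return "M", label, objs   # maybe knows it
        return "D", None, set()           # doesn't know it

    def locate(self, other, my_label):
        """Steps 1, 3-5: query, verify returned examples, record result."""
        examples = self.concepts[my_label]
        verdict, their_label, their_objs = other.interpret(examples)
        if verdict in ("K", "M"):
            # Step 4: verify with our own conceptualization.
            if len(their_objs & examples) / len(their_objs) >= 0.75:
                # Step 5: record that `other` knows this concept.
                self.group_knowledge[(other.name, my_label)] = their_label
                return True
        return False

a = Agent("A", {"AI": {1, 2, 3, 4}})
b = Agent("B", {"MachineIntelligence": {1, 2, 3, 5}})
a.locate(b, "AI")
# a.group_knowledge == {("B", "AI"): "MachineIntelligence"}
```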

Concept Similarity Estimation Even when two agents know a particular concept, it is feasible, and probable given a large DCM, that their sets of concept-defining objects differ completely. We cannot simply assume that the target functions each agent generates by supervised inductive learning from examples will be the same. We therefore need other ways to estimate similarity.

Concept Similarity Estimation Function Input: a sample set of objects representing a concept in another agent Output: Knows Concept (K), Might Know Concept (M), or Doesn't Know Concept (D) Set of objects → the agent tries mapping the set to each of its concepts using its description rules → each concept receives an interpretation value → the interpretation value is compared with thresholds to make the K, M, or D determination. The interpretation value for one concept is the proportion of objects in the CBQ that were inferred to be this concept. Positive interpretation threshold = how often this concept description correctly determined an object in the training set to belong to this concept Negative interpretation threshold
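As a sketch of the thresholding step (the threshold values, object ids, and labels below are illustrative; in the paper the thresholds come from training-set performance):

```python
def interpretation_value(query_objects, inferred_labels, concept):
    """Proportion of queried objects inferred to belong to `concept`."""
    hits = sum(1 for o in query_objects if inferred_labels.get(o) == concept)
    return hits / len(query_objects)

def similarity_verdict(value, positive_threshold, negative_threshold):
    """Compare an interpretation value against the two thresholds
    to reach the K / M / D determination."""
    if value >= positive_threshold:
        return "K"   # knows the concept
    if value > negative_threshold:
        return "M"   # might know the concept
    return "D"       # doesn't know the concept

# Illustrative query: 3 of 4 objects were inferred to be "AI".
labels = {1: "AI", 2: "AI", 3: "Sports", 4: "AI"}
v = interpretation_value([1, 2, 3, 4], labels, "AI")   # 0.75
print(similarity_verdict(v, 0.7, 0.3))                 # K
```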

Group Knowledge Individual Knowledge Verification

Translating Semantic Concepts Uses the same algorithm as locating similar concepts in other agents. Two concepts determined to be the same can be translated regardless of their labels in the ontologies. Difference: after verification, the knowledge is stored as "Agent B knows my semantic concept X as Y."