International Workshop on Multi-Agent Systems, 1997

Multiagent Distributed Interpretation with Multiply Sectioned Bayesian Networks

Y. Xiang
Department of Computer Science, University of Regina
Regina, Saskatchewan, Canada

Recent Advances

Some slides containing formal results are marked with * in the title. These slides may be skipped if only an intuitive tutorial is desired.

Common approaches in MAS

- Logic-based: Wooldridge 94; Rao and Georgeff 95; Halpern et al. 96.
- Game-theoretic: Rosenschein and Zlotkin 94.
- Economic: Wellman 92.
- Decision-theoretic: Gmytrasiewicz and Durfee 95.
- Truth maintenance (default-reasoning)-based: Mason and Johnson 89; Huhns and Bridgeland 91.
- Distributed search-based: Yokoo et al. 92.

MAS Area of Focus

- Task: distributed interpretation
  - Producing higher-level descriptions of the environment from distributed sensing, without centralizing the evidence.
- Examples of distributed interpretation systems:
  - Sensor networks.
  - Medical diagnosis by multiple specialists.
  - Troubleshooting complex artifacts.
  - Distributed image interpretation.

Recent Advance

- Foundation: a probabilistic framework for multiagent distributed interpretation based on multiply sectioned Bayesian networks (MSBNs).
- Advances:
  - Distributed representation of uncertain knowledge that is consistent with probability theory.
  - Distributed inference that ensures global consistency.
  - Agents can be developed by independent designers.
  - Internal structure/knowledge of agents remains private.
- Controversial aspects:
  - Are cooperative MASs important?
  - Is homogeneous knowledge representation necessary?

Foundations

Background

- Common approaches to distributed interpretation are based on logic (e.g., blackboard) or default reasoning (e.g., DATMS and DTMS).
  - Logic-based approaches have no coherent mechanism for dealing with uncertain knowledge.
  - Default reasoning treats uncertain knowledge as "believed until there is reason to believe otherwise", not as "believed to a certain degree".
  - However, decisions often involve tradeoffs, so comparing strengths of belief about states of the world or outcomes of actions is necessary.

Background

- Substantial progress has been made in uncertain inference using Bayesian networks (BNs) [Pearl 88].
  - Dependencies among domain variables are represented by a DAG.
  - The strength of dependencies is quantified by an associated joint probability distribution (jpd).
  - The jpd is interpreted as the degree of belief of an agent.
  - Many effective inference algorithms have been developed.
- A single-agent paradigm is commonly assumed: a single processor accesses a single global BN, updates the jpd as evidence becomes available, and answers queries.
- This research advances the DTMS approach with a representation of an agent's degree of belief consistent with probability theory, and advances the single-agent BN approach with a multiagent paradigm and a distributed inference algorithm.

*Bayesian Networks (BNs)

- Def: A BN is a triplet (N, D, P) where
  - N is a set of random variables,
  - D is a directed acyclic graph (DAG) whose nodes are labeled by the elements of N,
  - P is a jpd over N, specified by the probability distribution of each node x in D conditioned on its parents parn(x) in D.
- D expresses dependence relations among the elements of N.
  - A variable is independent of its non-descendants given its parents.
  - Hence P can be expressed as P(N) = ∏_{x ∈ N} P(x | parn(x)), as illustrated in the sketch below.
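As a concrete illustration of the factorization above, here is a minimal Python sketch that evaluates the joint probability of one full assignment by the chain rule. The network, variable names, and numbers are hypothetical, chosen only for illustration.

```python
# Minimal sketch: joint probability of a BN assignment via the chain rule.
# The three-node network and all numbers below are illustrative only.

# DAG as parent lists: parn(x)
parents = {"a": [], "b": ["a"], "c": ["a", "b"]}

# CPTs: P(x | parn(x)), keyed by (value_of_x, tuple_of_parent_values)
cpt = {
    "a": {(1, ()): 0.3, (0, ()): 0.7},
    "b": {(1, (1,)): 0.9, (0, (1,)): 0.1, (1, (0,)): 0.2, (0, (0,)): 0.8},
    "c": {(1, (1, 1)): 0.5, (0, (1, 1)): 0.5,
          (1, (1, 0)): 0.4, (0, (1, 0)): 0.6,
          (1, (0, 1)): 0.7, (0, (0, 1)): 0.3,
          (1, (0, 0)): 0.1, (0, (0, 0)): 0.9},
}

def joint(assignment):
    """P(N) = product over x of P(x | parn(x)), at one full assignment."""
    p = 1.0
    for x, pa in parents.items():
        pa_vals = tuple(assignment[u] for u in pa)
        p *= cpt[x][(assignment[x], pa_vals)]
    return p

print(joint({"a": 1, "b": 1, "c": 0}))  # 0.3 * 0.9 * 0.5 = 0.135
```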

A Trivial Example BN [L & S 88]

Multiply Sectioned Bayesian Networks (MSBNs)

- Single-agent oriented MSBN [Xiang et al., CI93, AIM93]:
  - A set of Bayesian subnets that collectively define a BN.
  - The interface between subnets renders them conditionally independent.
  - The top-level structure is a hypertree.
  - Compiled into a linked junction forest (LJF) for inference.
  - Coherent inference operations are defined on an LJF.

An MSBN (left) and its LJF (right)

*The d-sepset: Interface between Subnets in an MSBN

- Def: Let D_i = (N_i, E_i) (i = 1, 2) be two DAGs such that their union D is a DAG. I = N_1 ∩ N_2 is a d-sepset between D_1 and D_2 if, for every x ∈ I with parents parn(x) in D, either parn(x) ⊆ N_1 or parn(x) ⊆ N_2. D is said to be sectioned into {D_1, D_2}.
- Theorem: Let a DAG D = (N, E) be sectioned into {D_1, ..., D_k} and let I_ij = N_i ∩ N_j be the d-sepset between D_i and D_j. Then, for each i, ∪_j I_ij d-separates [Pearl 88] N_i \ ∪_j I_ij from N \ N_i.
- Semantics: If D represents the dependence relations among the elements of N, then the d-sepset ensures that the variables in a subnet are independent of all other variables given the d-sepsets of the subnet.
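The definition suggests a direct membership test: every shared node must have its full parent set inside one of the two subnets. A minimal Python sketch under that reading (the function and example are hypothetical, not from the paper):

```python
# Minimal sketch of the d-sepset test from the definition above.
# Each subnet is given by its node set; `parents` maps each node to its
# parent set in the union DAG.

def is_d_sepset(nodes1, nodes2, parents):
    """Check that I = N1 ∩ N2 is a d-sepset: for every shared node x,
    parn(x) in the union DAG lies entirely in N1 or entirely in N2."""
    interface = nodes1 & nodes2
    for x in interface:
        pa = parents.get(x, set())
        if not (pa <= nodes1 or pa <= nodes2):
            return False
    return True

# Example: c is shared; its parents {a, b} lie wholly in N1, so I = {c}
# is a d-sepset between the two subnets.
N1, N2 = {"a", "b", "c"}, {"c", "d"}
parents = {"b": {"a"}, "c": {"a", "b"}, "d": {"c"}}
print(is_d_sepset(N1, N2, parents))  # True
```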

*Hypertree MSDAG: The Top-Level Structure of an MSBN

- Def: Let D be the union of D_i (i = 1, ..., n), where each D_i is a connected DAG. D is a hypertree MSDAG if it is a DAG built by the following procedure:
  - Start with an empty graph. Recursively add a DAG D_k, called a hypernode, to the existing MSDAG, subject to the constraints:
  - [d-sepset] For each D_j (j < k), I_jk = N_j ∩ N_k is a d-sepset when the two DAGs are isolated.
  - [Local covering] There exists a D_i (i < k) such that, for each D_j (j < k, j ≠ i), I_jk ⊆ N_i. For such a D_i, I_ik is called the hyperlink between hypernodes D_i and D_k. (A sketch of this check follows below.)
- Semantics: Each hyperlink renders the two parts of the MSBN that it connects conditionally independent.
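The [local covering] constraint can be checked mechanically when a hypernode is attached. Below is a minimal Python sketch of that check, assuming the d-sepset condition is verified separately; the function name and example are illustrative.

```python
# Minimal sketch: grow a hypertree MSDAG incrementally, checking the
# [local covering] constraint of the definition above. Only node sets
# are used; the d-sepset condition is assumed checked elsewhere.

def attach_hypernode(existing, new_nodes):
    """existing: list of node sets N_1..N_{k-1} already in the MSDAG.
    Returns the index i of a hypernode that covers every nonempty
    interface I_jk = N_j ∩ new_nodes, or None if local covering fails."""
    interfaces = [(j, Nj & new_nodes) for j, Nj in enumerate(existing)]
    for i, Ni in enumerate(existing):
        if all(I <= Ni for j, I in interfaces if j != i and I):
            return i  # I_ik = Ni & new_nodes becomes the hyperlink
    return None

# Example: the new subnet shares {c} with N_1 and {c, e} with N_2;
# N_2 covers both interfaces, so the new hypernode attaches to it.
existing = [{"a", "b", "c"}, {"c", "d", "e"}]
print(attach_hypernode(existing, {"c", "e", "f"}))  # 1 (attach to N_2)
```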

*Compilation of an MSBN into a Linked Junction Forest

- Major steps in compiling an LJF:
  - Convert each DAG (hypernode) into a chordal graph such that all dependence relations are preserved.
  - Express the chordal graph as a junction tree (JT) of cliques.
  - Convert each hyperlink (d-sepset) into linkages, each of which is a subset of the d-sepset.
  - Convert the conditional probability distributions in each subnet into belief tables on the corresponding JT and d-sepset.
  - Let B(N_i) be the belief table on a hypernode and B(I_j) be the belief table on a hyperlink. The joint system belief (JSB) of an LJF is ∏_i B(N_i) / ∏_j B(I_j).
- Theorem: The JSB of an LJF is equivalent to the jpd of its MSBN.
- Since each subnet is organized as a tree, an LJF is an equivalent but more efficient data structure for inference computation.
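To make the JSB formula concrete, here is a minimal Python sketch evaluating ∏_i B(N_i) / ∏_j B(I_j) at one full assignment, for a two-hypernode LJF. The belief tables and numbers are illustrative (they are mutually consistent: both hypernode tables marginalize to the same B(b) on the hyperlink).

```python
# Minimal sketch: evaluate the joint system belief (JSB) of a tiny LJF at a
# full assignment, as the ratio prod_i B(N_i) / prod_j B(I_j).
# Tables are illustrative only.

# Belief tables as dicts: tuple of values -> belief, with the scope listed first.
hypernodes = [
    (("a", "b"), {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}),
    (("b", "c"), {(0, 0): 0.3, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.3}),
]
hyperlinks = [
    (("b",), {(0,): 0.6, (1,): 0.4}),  # B(b): shared marginal on the hyperlink
]

def jsb(assignment):
    num = 1.0
    for scope, table in hypernodes:
        num *= table[tuple(assignment[v] for v in scope)]
    den = 1.0
    for scope, table in hyperlinks:
        den *= table[tuple(assignment[v] for v in scope)]
    return num / den

# P(a=0, b=1, c=1) = B(a=0,b=1) * B(b=1,c=1) / B(b=1) = 0.1 * 0.3 / 0.4
print(jsb({"a": 0, "b": 1, "c": 1}))  # 0.075 (up to float rounding)
```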

*Inference in a Single-agent Oriented MSBN

- Inference using the LJF of an MSBN:
  - Queries of a single user are focused on a single JT at a time, where a query has the form "what is the probability of event A given that B has occurred?"
  - Evidence can be entered incrementally into the JT, and queries are answered by local computation only.
  - As the user shifts attention to another JT, belief propagation is performed only along the hyperpath to the target JT in the hypertree.
  - Queries can then be entered at the target JT as above.
- Theorem: After finitely many attention shifts, the answers to queries computed locally are identical to what would be obtained from an equivalent homogeneous BN.

What Can Be Gained by Using an MSBN?

If a domain consists of loosely coupled subdomains, then:
- Knowledge acquisition is natural and modular: subnets can be built one at a time.
- Inference requires only local computation, and attention shifts use only the hyperpath. Hence computation is more efficient.
- Answers to queries are the same as from a homogeneous BN.

Structure of a general MSBN (left) and the corresponding hypertree (right)

Why Use MSBNs for Distributed Interpretation?

- The representation of MSBNs is modular.
- Inference in MSBNs is coherent.
- The framework of MSBNs is general.

Major Issues in Extending MSBNs to MASs

- What is the semantics of a subnet?
- What is the semantics of the jpd of the MSBN?
- How can the system be built by multiple agent developers?
- How do we ensure a correct overall structure while protecting the know-how of each developer?
- How do we ensure coherent inference?
- What difference does a multiagent MSBN make relative to a MAS not organized as an MSBN?

What is the semantics of a subnet?

- In a single-agent MSBN:
  - The MSBN represents multiple perspectives on a domain held by a single agent.
  - Each subnet represents one such perspective.
- In a multiagent MSBN [Xiang, AIJ96]:
  - The MSBN represents multiple agents in a domain, each of which holds one perspective.
  - Each subnet represents one agent's perspective on the domain.

An agent model

What is the semantics of the jpd in an MSBN?

- If the distribution of each subnet represents one agent's belief, whose belief does the jpd of the MSBN represent?
- Example: a computer system.
  - It processes information coherently as a whole.
  - Its components are supplied by different vendors.
- Observation: as long as the vendors follow a protocol in designing component interfaces, the system functions as if it follows a single will.

What is the semantics of the jpd in an MSBN?

- Another example: a patient sees a doctor.
  - The patient tells the doctor what the doctor needs to know for a diagnosis.
  - After the doctor reaches a diagnosis, he prescribes a therapy, which the patient follows.
- Observation:
  - The doctor does not experience the symptoms.
  - The patient does not understand how the diagnosis is reached.
  - Yet a coherent belief is demonstrated on the symptoms (used by the doctor to reach the diagnosis) and the diagnosis (whose therapy the patient follows).

What is the semantics of the jpd in an MSBN?

- Given the way the jpd of an MSBN is defined, there exists a unique jpd [Xiang, AIJ96] such that
  - its marginalization to each subnet is identical to the distribution of that subnet;
  - adjacent subnets are conditionally independent given their interface.
- Implication: If agents are (1) cooperative, (2) conditionally independent given their interfaces, and (3) initially consistent, then the jpd of the MSBN represents a unique collective belief
  - identical to each agent's belief within its subdomain,
  - and supplemental to its belief outside its subdomain.
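In junction-tree form this collective belief can be written explicitly. A sketch of the standard hypertree factorization, stated in the notation of the JSB slide above (this is my paraphrase, not a verbatim formula from the talk):

```latex
% Collective jpd of an MSBN over a hypertree of subnets D_1, ..., D_n,
% where I_k is the hyperlink joining D_k to its covering hypernode;
% each factor is the marginal of the unique joint P over the indicated set.
P(N) \;=\; \frac{\prod_{i=1}^{n} P(N_i)}{\prod_{k=2}^{n} P(I_k)}
```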

How do we ensure coherent inference?

- The issue:
  - In a single-agent MSBN, evidence is entered one subnet at a time.
  - In a multiagent MSBN, evidence is entered asynchronously at multiple subnets in parallel.
- Solution: extended inference operations [Xiang, AIJ96]: CommunicateBelief, CollectNewBelief, DistributeBelief.

How do we ensure coherent inference?

- CollectNewBelief: initiated at an agent to activate an inward propagation towards that agent.

How do we ensure coherent inference?

- DistributeBelief: initiated at an agent to activate an outward propagation.
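The scheduling of these two passes over the hypertree can be sketched structurally. The following minimal Python sketch shows only the propagation order; `absorb()` is a placeholder for the actual belief-table update over a hyperlink, which the sketch does not implement, and the four-agent hypertree is hypothetical.

```python
# Structural sketch of CollectNewBelief / DistributeBelief scheduling over a
# hypertree of agents. absorb() stands in for the real belief-table update.

hypertree = {"W": ["X"], "X": ["W", "Y"], "Y": ["X", "Z"], "Z": ["Y"]}

def absorb(receiver, sender):
    print(f"{receiver} absorbs belief over its hyperlink with {sender}")

def collect_new_belief(agent, caller=None):
    """Inward pass: each neighbor first collects from its own subtree,
    then this agent absorbs the neighbor's new belief."""
    for nb in hypertree[agent]:
        if nb != caller:
            collect_new_belief(nb, caller=agent)
            absorb(agent, nb)

def distribute_belief(agent, caller=None):
    """Outward pass: each neighbor absorbs from this agent, then
    recursively distributes into its own subtree."""
    for nb in hypertree[agent]:
        if nb != caller:
            absorb(nb, agent)
            distribute_belief(nb, caller=agent)

def communicate_belief(agent):
    collect_new_belief(agent)   # inward propagation toward the initiator
    distribute_belief(agent)    # outward propagation from the initiator

communicate_belief("X")
```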

How do we ensure coherent inference?

- Theorem: After CommunicateBelief, the answer to a query from any agent is identical to what would be obtained from an equivalent homogeneous BN.
- Implication: Distribution causes no loss of coherence.
- Complexity of inference computation:
  - Inference at one agent: O(k 2^m), where m is the maximal size of a clique and k is the number of cliques in the JT.
  - CommunicateBelief: O(t g k 2^m), where t is the number of agents and g is the maximal number of linkages in a hyperlink.

How do we ensure coherent inference?

- Theorem: Between successive CommunicateBeliefs, the answer to a query from any agent X is identical to what would be obtained in an equivalent homogeneous BN where only the evidence in the bottom timeline is entered.

(Figure: two timelines of asynchronous evidence arrivals at agents A, W, X, Y, Z, separated by a CommunicateBelief.)

How to build an MSBN with multiple developers?

- How do we ensure system coherence without disclosing the structure and distribution of individual subnets?
- It is possible if
  - the interface of each subnet renders it conditionally independent of the others; and
  - adjacent agents agree on an initial belief over their interface.
- Solution [Xiang, AIJ96]:
  - A single integrator, with knowledge of the agents' interfaces, arranges the agents into a hypertree.
  - Agents negotiate to achieve an initial belief on each interface.

Global structure vs. each agent's know-how

- The structures of the subnets in an MSBN collectively define a directed acyclic graph (DAG).
- Local acyclicity does not guarantee global acyclicity.
- Algorithms to test acyclicity based on topological sorting are well known; however, they assume a central representation of the graph.

Global structure vs. each agent's know-how

- If each subnet's structure is unknown to the others, how can we ensure the acyclicity of the MSBN?
- A distributed algorithm has been developed with the following features [Xiang, FLAIRS96]:
  - Each agent provides only information on whether a shared node has a parent or a child in its DAG, plus some flag information.
  - The acyclicity of the MSBN can be correctly determined.
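To illustrate the flavor of such a test, here is a minimal Python sketch of distributed cycle detection in which agents reveal only a per-node flag ("does x still have a parent in my subnet?") and never their edges. This is an illustrative variant based on distributed topological sorting, not the actual [Xiang, FLAIRS96] algorithm.

```python
# Minimal sketch: distributed acyclicity testing by iterative elimination.
# Agents expose only "has-a-parent" flags for their nodes, never their edges.

def has_cycle(agent_dags):
    """agent_dags: list of private DAGs, each a dict node -> set of parents.
    Subnets may share nodes. Returns True iff the union graph has a cycle."""
    dags = [{x: set(pa) for x, pa in d.items()} for d in agent_dags]
    remaining = set().union(*(d.keys() for d in dags))
    while remaining:
        # Each agent flags its nodes that still have a parent locally.
        flagged = {x for d in dags for x, pa in d.items() if pa}
        sources = remaining - flagged  # no parent in any subnet
        if not sources:
            return True  # no eliminable node left => the union has a cycle
        remaining -= sources
        for d in dags:  # agents privately drop eliminated nodes and edges
            for x in list(d):
                if x in sources:
                    del d[x]
                else:
                    d[x] -= sources
    return False

# Two locally acyclic subnets whose union contains the cycle a -> b -> c -> a:
print(has_cycle([{"a": set(), "b": {"a"}},
                 {"b": set(), "c": {"b"}, "a": {"c"}}]))  # True
```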

What if agents are not organized into an MSBN?

- Belief propagation in an MSBN proceeds along the hypertree in a regulated fashion. What happens otherwise?
- Circular propagation of evidence causes no problem if agents are logical, but it causes false beliefs if the agents' knowledge is uncertain.
- Example: not knowing that the message from Y is based on evidence that originated from itself, Z counts the same information twice.

(Figure: agents W, X, Y, Z connected in a cycle.)

Prospects

Prospects for distributed interpretation

- The framework supports tasks that rely on uncertain knowledge and distributed inference, without sacrificing coherence of the interpretation.
- The framework protects each agent developer's know-how and hence encourages the cooperation of many developers in building MASs for large and complex domains.
  - Example: systems for troubleshooting complex artifacts.

Prospects for distributed interpretation

- The framework suggests standardizing agent interfaces in large and complex domains where knowledge sources are naturally distributed and separately owned.
- The framework suggests future research directions:
  - Dynamic formulation of multiagent MSBNs.
  - Incorporation of decision making.
  - Incorporation of temporal inference.