
On the Role of MSBN to Cooperative Multiagent Systems. By Y. Xiang and V. Lesser. Presented by Jingshan Huang and Sharon Xi.

Motivation
- A common task in multiagent systems: agents need to estimate the state of an uncertain domain so that they can act accordingly.
- Constraints:
  - Each agent only has partial knowledge about the domain.
  - Only local observations are available.
  - Limited amount of communication.
- Solution?

Motivation (cont.)
- MSBNs (Multiply Sectioned Bayesian Networks) provide a solution.
- They are an effective and exact framework.
- But a set of constraints exists.

Structure of the presentation
- Introduction of the background knowledge
- Detailed information about the constraints
- A small set of high-level choices
- How those choices logically imply all the constraints

Background knowledge: Definition of MSBN
An MSBN M is a triplet (V, G, P):
- V is the union of the subdomains of all agents.
- G is the structure, i.e., a hypertree MSDAG (hypertree structure plus the d-sepset concept, defined next).
- P is the JPD (Joint Probability Distribution) over G: P(x | π(x)) is assigned to exactly one occurrence of x, and a uniform potential to all other occurrences (see the factorization sketched below).
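As a sketch of what this assignment rule implies (assuming every variable's conditional distribution appears in exactly one subnet and all other occurrences of it carry uniform potentials), the product of all potentials over the union domain reduces to the usual Bayesian-network factorization:

P(V) = ∏_{x ∈ V} P(x | π(x)),

so the MSBN defines a single joint distribution over V even though its factors are distributed across the agents.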

Background knowledge (cont.)
- Definition of hypertree structure:
  - Each node (hypernode) is a DAG.
  - Each link (hyperlink) between two nodes is a non-empty interface.
  - The RIP (Running Intersection Property) must hold.
- D-sepset concept:
  - An interface I is a d-sepset if every x ∈ I is a d-sepnode.
  - A node x contained in more than one subgraph is a d-sepnode if there exists at least one such subgraph that contains x together with its parents π(x) (a check is sketched below).
[Figure: an original graph over the nodes a, b, c, d, e is sectioned into a hypertree of two hypernodes (DAGs) joined by a hyperlink labeled by the interface {a, b}.]
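A minimal sketch (not from the paper; names and the example are illustrative) of how the d-sepnode and d-sepset conditions above can be checked, given the agents' subgraphs as node sets and the parent sets of the union DAG:

```python
def is_d_sepnode(x, subgraphs, parents):
    """x: a node; subgraphs: list of node sets, one per agent;
    parents: dict mapping each node to its parent set pi(x) in the union DAG.
    x is a d-sepnode if it is shared by more than one subgraph and at least
    one of those subgraphs also contains all of pi(x)."""
    containing = [g for g in subgraphs if x in g]
    if len(containing) < 2:
        return False
    return any(parents[x] <= g for g in containing)

def is_d_sepset(interface, subgraphs, parents):
    """An interface is a d-sepset if every node in it is a d-sepnode."""
    return all(is_d_sepnode(x, subgraphs, parents) for x in interface)

# Hypothetical example: subdomains {a, b, e} and {a, b, c, d} sharing the interface {a, b}.
subgraphs = [{"a", "b", "e"}, {"a", "b", "c", "d"}]
parents = {"a": set(), "b": {"a"}, "c": {"b"}, "d": {"b"}, "e": {"a"}}
print(is_d_sepset({"a", "b"}, subgraphs, parents))  # True
```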

Background knowledge (cont.): some useful definitions
- Communication Graph: given n hypernodes, associate each node with an agent A_i and label it by its subdomain V_i; connect each pair of nodes V_i and V_j by a link, labeled by V_i ∩ V_j, if V_i ∩ V_j ≠ ∅ (see the sketch below).
- Junction Graph: a triplet (V, Ω, E), where V is a non-empty set (the generating set); Ω is a subset of 2^V whose elements, called clusters, cover V; and E is the set of links connecting every pair of clusters with a non-empty intersection, each link labeled by that intersection.
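A small illustrative sketch (the agent names and subdomains below are just the Mas3 example used later in the slides) of building the communication graph from the agents' subdomains, following the definition above:

```python
from itertools import combinations

def communication_graph(subdomains):
    """subdomains: dict mapping agent name -> set of variables V_i.
    Returns the links of the communication graph: each pair of agents whose
    subdomains intersect, labeled by the intersection."""
    links = {}
    for (ai, vi), (aj, vj) in combinations(subdomains.items(), 2):
        shared = vi & vj
        if shared:                    # link only if the interface is non-empty
            links[(ai, aj)] = shared  # label = intersection of V_i and V_j
    return links

# Mas3: three agents whose subdomains pairwise intersect, giving a loop.
agents = {"A0": {"a", "b"}, "A1": {"a", "c"}, "A2": {"b", "c", "d"}}
print(communication_graph(agents))
# {('A0', 'A1'): {'a'}, ('A0', 'A2'): {'b'}, ('A1', 'A2'): {'c'}}
```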

Background knowledge (cont.)
- Cluster Graph: let (V, Ω, E) be a junction graph and E′ ⊆ E; then (V, Ω, E′) is a cluster graph over V.
- Degenerate Loop and Nondegenerate Loop: let ρ be a loop in a cluster graph H. If there exists a separator S on ρ that is contained in every other separator on ρ, then ρ is a degenerate loop; otherwise, ρ is a nondegenerate loop (a test is sketched below).
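A minimal sketch (illustrative only) of the degeneracy test for a loop, given the separators encountered along it; the two examples correspond to loops (c) and (b) of the figure on the next slide, as far as they can be read from the transcript:

```python
def is_degenerate_loop(separators):
    """separators: list of sets, the separators along a loop rho in a cluster graph.
    The loop is degenerate if some separator is contained in every other
    separator on the loop; otherwise it is nondegenerate."""
    return any(
        all(s <= other for j, other in enumerate(separators) if j != i)
        for i, s in enumerate(separators)
    )

# Loop (c): separators {b}, {c}, {e}, {a}; no separator is contained in all the others.
print(is_degenerate_loop([{"b"}, {"c"}, {"e"}, {"a"}]))            # False: nondegenerate
# Loop (b): separators {d,i}, {d}, {d,h}, {d}; the separator {d} lies inside every other.
print(is_degenerate_loop([{"d", "i"}, {"d"}, {"d", "h"}, {"d"}]))  # True: degenerate
```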

Background knowledge --- cont. d,e b,c,d d,f d,g d d dd d d (a) Strong Degenerate Loop d,e,i b,c,d,i d,f,h d,g,h d,i d d,h d (b) Weak Degenerate Loop a,b b,c,d a,e c,e b a e c (c) Strong Nondegenerate Loop a,b,f b,c,d,f a,e,f c,e,f b,f a,f e,f c,f (d) Week Nondegenerate Loop

Structure of the presentation
- Introduction of the background knowledge
- Detailed information about the constraints
- A small set of high-level choices
- How those choices logically imply all the constraints

Seven Constraints
1. Each agent's belief is represented by Bayesian probability.
2. The domain is decomposed into subdomains satisfying the RIP (sketched below).
3. Subdomains are organized into a hypertree structure.
4. The dependency structure of each subdomain is represented by a DAG.
5. The union of the DAGs for all subdomains is a connected DAG.
6. Each hyperlink is a d-sepset.
7. The JPD can be expressed as in the definition of an MSBN.
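A hedged sketch (illustrative, not the paper's algorithm) of what constraint 2's running intersection property requires of an ordering V_0, ..., V_{n-1} of the subdomains:

```python
def has_rip(subdomains):
    """subdomains: list of sets V_0..V_{n-1} in some construction order.
    RIP: for each i > 0, the intersection of V_i with the union of all earlier
    subdomains must be contained in a single earlier subdomain."""
    seen = set()
    for i, vi in enumerate(subdomains):
        if i > 0:
            interface = vi & seen
            if not any(interface <= vj for vj in subdomains[:i]):
                return False
        seen |= vi
    return True

print(has_rip([{"a", "b"}, {"b", "c", "d"}, {"d", "e"}]))   # True
print(has_rip([{"a", "b"}, {"b", "c"}, {"a", "c", "d"}]))   # False: {a, c} spans two earlier subdomains
```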

Structure of the presentation
- Introduction of the background knowledge
- Detailed information about the constraints
- A small set of high-level choices
- How those choices logically imply all the constraints

High-Level Choices (Basic Commitments)
- BC1: Each agent's belief is represented by Bayesian probability.
- BC2: A_i and A_j can communicate directly only through their intersecting variables.
- BC3: A simpler agent organization, i.e., a tree, is preferred when degenerate loops exist in the CG.
- BC4: A DAG is used to structure each individual agent's knowledge.
- BC5: Within each agent's subdomain, the JPD is consistent with the agent's belief; for shared nodes, the JPD supplements each agent's knowledge with the others'.

Structure of the presentation
- Introduction of the background knowledge
- Detailed information about the constraints
- A small set of high-level choices
- How those choices logically imply all the constraints

Seven Constraints
1. Each agent's belief is represented by Bayesian probability.
2. The domain is decomposed into subdomains satisfying the RIP.
3. Subdomains are organized into a hypertree structure.
4. The dependency structure of each subdomain is represented by a DAG.
5. The union of the DAGs for all subdomains is a connected DAG.
6. Each hyperlink is a d-sepset.
7. The JPD can be expressed as in the definition of an MSBN.

Five Basic Commitments
- BC1: Each agent's belief is represented by Bayesian probability.
- BC2: A_i and A_j can communicate directly only through their intersecting variables.
- BC3: A simpler agent organization, i.e., a tree, is preferred when degenerate loops exist in the CG.
- BC4: A DAG is used to structure each individual agent's knowledge.
- BC5: Within each agent's subdomain, the JPD is consistent with the agent's belief; for shared nodes, the JPD supplements each agent's knowledge with the others'.

Proof of the logical implication

Lemma 9: Let s be a strictly positive initial state of Mas3. There exists an infinite set S such that each element s' ∈ S is an initial state of Mas3 identical to s in P(a), P(b|a), P(c|a) but distinct in P(d|b,c), yet the message P_2(b | d=d_0) produced from s' is identical to that produced from s, and so is the message P_2(c | d=d_0).
[Figure 1: Mas3, a multiagent system of 3 agents. The domain DAG is a→b, a→c, b→d, c→d, sectioned into the subdomains {a, b} (agent A_0), {a, c} (agent A_1), and {b, c, d} (agent A_2).]

Proof: Denote P_2(b=b_0 | d=d_0) from state s by P_2(b_0 | d_0), and P_2'(b=b_0 | d=d_0) from state s' by P_2'(b_0 | d_0). P_2(b_0 | d_0) is expanded by marginalizing over c; requiring P_2(b | d_0) = P_2'(b | d_0) then yields one equation, and requiring P_2(c | d_0) = P_2'(c | d_0) similarly yields another. Because P_2'(d | b, c) has 4 independent parameters but is constrained by only these two equations, it has infinitely many solutions.
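The equation images on this slide were not captured in the transcript; a plausible reconstruction, assuming A_2 forms its posterior from its local potentials over {b, c, d} (the CPDs for b and c reside in A_0 and A_1, so A_2 holds uniform potentials for them), is

P_2(b_0 | d_0) = Σ_c P_2(d_0 | b_0, c) / Σ_{b,c} P_2(d_0 | b, c),

so that the condition P_2(b | d_0) = P_2'(b | d_0) becomes

Σ_c P_2(d_0 | b_0, c) / Σ_{b,c} P_2(d_0 | b, c) = Σ_c P_2'(d_0 | b_0, c) / Σ_{b,c} P_2'(d_0 | b, c),

with a symmetric condition obtained from P_2(c | d_0) = P_2'(c | d_0).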

Lemma 10: Let P and P' be strictly positive probability distributions over the DAG of Figure 1 such that they are identical in P(a), P(b|a) and P(c|a) but distinct in P(d|b,c). Then P(a | d=d_0) is distinct from P'(a | d=d_0) in general.
Proof: From P and P', the posterior on a can be written in terms of the joint posterior on (b, c) (see the reconstruction below). If P(b, c | d_0) ≠ P'(b, c | d_0), then in general P(a | d_0) ≠ P'(a | d_0). Because P(d|b,c) ≠ P'(d|b,c), in general it is the case that P(b, c | d_0) ≠ P'(b, c | d_0). Do you agree?
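The omitted expression is presumably the decomposition of the posterior on a; since a is d-separated from d by {b, c} in the DAG of Figure 1, one would expect

P(a | d_0) = Σ_{b,c} P(a | b, c) P(b, c | d_0),

and likewise for P'. The factor P(a | b, c) is the same under P and P' (they agree on P(a), P(b|a), P(c|a)), so the two posteriors on a differ, in general, exactly when the joint posteriors P(b, c | d_0) and P'(b, c | d_0) differ.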

Theorem 11: Message passing in Mas3 cannot be coherent in general, no matter how it is performed.
Proof:
1. By Lemma 9, P_2(b | d=d_0) and P_2(c | d=d_0) are insensitive to the initial state, and hence the posterior P_0(a | d=d_0) computed from these messages cannot be sensitive to the initial state either.
2. However, by Lemma 10, the posterior should in general differ for different initial states.
Hence, correct belief updating cannot be achieved in Mas3.
Insight:
- Correct inference requires P(b, c | d_0).
- However, the nondegenerate loop results in passing only the marginals of P(b, c | d_0), i.e., P(b | d=d_0) and P(c | d=d_0). A numerical illustration follows.
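A small numerical sketch of this insight (the CPT values below are made up, not from the paper): with the Mas3 DAG a→b, a→c, b→d, c→d, computing P(a | d_0) from the product of the marginals of (b, c), which is all the loop can convey, generally differs from computing it from the true joint posterior P(b, c | d_0):

```python
from itertools import product

# Hypothetical binary CPTs for the Mas3 DAG a -> b, a -> c, b -> d, c -> d.
P_a = {0: 0.6, 1: 0.4}
P_b_a = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}   # P(b | a), keyed (b, a)
P_c_a = {(0, 0): 0.5, (1, 0): 0.5, (0, 1): 0.9, (1, 1): 0.1}   # P(c | a), keyed (c, a)
P_d_bc = {(0, 0, 0): 0.9, (1, 0, 0): 0.1, (0, 0, 1): 0.4, (1, 0, 1): 0.6,
          (0, 1, 0): 0.3, (1, 1, 0): 0.7, (0, 1, 1): 0.2, (1, 1, 1): 0.8}  # P(d | b, c), keyed (d, b, c)

def joint(a, b, c, d):
    return P_a[a] * P_b_a[(b, a)] * P_c_a[(c, a)] * P_d_bc[(d, b, c)]

d0 = 1  # observed evidence d = d0

# Exact posterior P(a | d0), which implicitly uses the joint posterior on (b, c).
pa_exact = {a: sum(joint(a, b, c, d0) for b, c in product((0, 1), repeat=2)) for a in (0, 1)}
z = sum(pa_exact.values())
pa_exact = {a: v / z for a, v in pa_exact.items()}

# Joint posterior P(b, c | d0) and its two marginals (what the loop passes around).
pbc = {(b, c): sum(joint(a, b, c, d0) for a in (0, 1)) for b, c in product((0, 1), repeat=2)}
zbc = sum(pbc.values())
pbc = {bc: v / zbc for bc, v in pbc.items()}
pb = {b: pbc[(b, 0)] + pbc[(b, 1)] for b in (0, 1)}
pc = {c: pbc[(0, c)] + pbc[(1, c)] for c in (0, 1)}

def p_a_given_bc(a, b, c):
    # P(a | b, c) from the prior part of the model; identical for both computations.
    num = P_a[a] * P_b_a[(b, a)] * P_c_a[(c, a)]
    den = sum(P_a[a2] * P_b_a[(b, a2)] * P_c_a[(c, a2)] for a2 in (0, 1))
    return num / den

# Posterior on a reconstructed from the *product of marginals* of (b, c).
pa_from_marginals = {a: sum(p_a_given_bc(a, b, c) * pb[b] * pc[c]
                            for b, c in product((0, 1), repeat=2)) for a in (0, 1)}

print(pa_exact)           # uses P(b, c | d0)
print(pa_from_marginals)  # uses P(b | d0) * P(c | d0); differs in general
```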

We can generalize this analysis to an arbitrary strong nondegenerate loop of length 3, and further to an arbitrary strong nondegenerate loop of length K ≥ 3.
Conclusion
Corollary 12: Message passing in a cluster graph with nondegenerate loops cannot be coherent in general, no matter how it is performed.

Another conclusion, without proof: a cluster graph with only degenerate loops can always be treated by first breaking the loops at appropriate separators; the result is a cluster tree. Therefore, we have:
Proposition 13: Let a multiagent system be one that observes BC1 through BC3. Then a tree organization of agents should be used.

Proposition 17: Let a multiagent system over V be constructed following BC1 through BC4. Then each subdomain V_i is structured as a DAG over V_i, and the union of these DAGs is a connected DAG over V.
Proof:
1. The connectedness is implied by Proposition 6.
2. If the union of the subdomain DAGs is not a DAG, then it has a directed loop, which contradicts the acyclic interpretation of dependence in individual DAG models.
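A hedged sketch (illustrative, not the paper's construction) of checking the two properties that Proposition 17 asserts, with each agent's local DAG given as a set of (parent, child) edges:

```python
from collections import defaultdict, deque

def union_is_connected_dag(local_dags):
    """local_dags: list of edge sets, each edge a (parent, child) pair.
    Returns True iff the union graph is connected (ignoring direction)
    and contains no directed cycle."""
    edges = set().union(*local_dags)
    nodes = {v for e in edges for v in e}

    # (i) Connectedness of the underlying undirected graph, via BFS.
    undirected = defaultdict(set)
    for u, v in edges:
        undirected[u].add(v)
        undirected[v].add(u)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        for nb in undirected[queue.popleft()]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    if seen != nodes:
        return False

    # (ii) Acyclicity of the directed union, via Kahn's topological sort.
    children = defaultdict(set)
    indeg = {v: 0 for v in nodes}
    for u, v in edges:
        children[u].add(v)
        indeg[v] += 1
    queue = deque(v for v in nodes if indeg[v] == 0)
    visited = 0
    while queue:
        u = queue.popleft()
        visited += 1
        for v in children[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return visited == len(nodes)

# Hypothetical example: the Mas3 subdomain DAGs from Figure 1.
print(union_is_connected_dag([{("a", "b")}, {("a", "c")}, {("b", "d"), ("c", "d")}]))  # True
```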

Theorem 18: Let Ψ be a hypertree over a directed graph G = (V, E). For each hyperlink I, which splits Ψ into two subtrees over U ⊂ V and W ⊂ V respectively, U \ I and W \ I are d-separated by I iff each hyperlink in Ψ is a d-sepset.
Proposition 14: Let a multiagent system be one that observes BC1 through BC3. Then a junction tree organization of agents must be used.
Proposition 19: Let a multiagent system be constructed following BC1 through BC4. Then it must be structured as a hypertree MSDAG.

Proof of Proposition 19: From BC1 through BC4, it follows that each subdomain should be structured as a DAG and the entire domain should be structured as a connected DAG (Proposition 17). The DAGs should be organized into a hypertree (Proposition 14). The interface between adjacent DAGs on the hypertree should be a d-sepset (Theorem 18). Hence, the multiagent system should be structured as a hypertree MSDAG (Definition 3).

Conclusion
Theorem 22: Let a multiagent system be constructed following BC1 through BC5. Then it must be represented as an MSBN or some equivalent.