Exact Inference Continued

A Graph-Theoretic View
Eliminating a vertex v from an undirected graph G is the process of making N_G(v) a clique and then removing v and its incident edges from G, where N_G(v) is the set of vertices adjacent to v in G. An elimination sequence of G is an ordering of all of its vertices.
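
A minimal sketch of this elimination operation, assuming the undirected graph is stored as a Python dict mapping each vertex to the set of its neighbors (the representation and function name are illustrative choices, not part of the slides):

    def eliminate_vertex(graph, v):
        """Make N_G(v) a clique, then remove v and its incident edges."""
        neighbors = set(graph[v])
        for u in neighbors:
            # Fill-in: connect every pair of neighbors of v.
            graph[u] |= neighbors - {u}
            graph[u].discard(v)
        del graph[v]
        return neighbors  # the set N_G(v) that was turned into a clique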

Treewidth
The width w_s of an elimination sequence s is the size of the largest clique formed during the elimination process, minus 1; equivalently, w_s = max_v |N_G(v)|, where the maximum is taken over the vertices as they are eliminated from the successive graphs. The treewidth tw of a graph G is the minimum width over all elimination sequences: tw = min_s w_s. Examples: all trees have tw = 1, graphs whose cycles are isolated have tw = 2, and a clique of size n has tw = n-1.
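
Continuing the sketch above, the width of a given elimination sequence can be computed directly from the definition (a hedged illustration that reuses eliminate_vertex from the previous sketch):

    def elimination_width(graph, order):
        """Width of an elimination sequence: max |N(v)| over the process."""
        g = {v: set(nbrs) for v, nbrs in graph.items()}  # work on a copy
        width = 0
        for v in order:
            width = max(width, len(g[v]))
            eliminate_vertex(g, v)
        return width

    # A clique on 3 vertices (also a 3-cycle) has treewidth 2:
    triangle = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}
    assert elimination_width(triangle, ['a', 'b', 'c']) == 2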

Another Example
Consider the 3x3 grid on vertices x1, ..., x9 (figure omitted). Order 1, "corners first": the largest clique formed has size 4, so the width is 3. Order 2, starting x2, x5, ...: the largest clique formed has size 5, so the width is 4. Exercise: build the clique trees.

Observations
Theorem. Computing a posteriori probabilities in a Markov graph G (or in a Bayesian network whose moral graph is G) has complexity |V| * k^w, where w is the width of the elimination sequence used and k is the largest domain size.
Theorem. Computing a posteriori probabilities in chordal graphs is polynomial in the size of the input (namely, in the size of the largest clique). Justification: chordal graphs have treewidth equal to the size of their largest clique, minus 1.

Observations
Theorem. Finding an elimination sequence that attains the treewidth, or even just deciding whether tw = c, is NP-hard.
Simple heuristic. At each step, eliminate a vertex v that produces the smallest clique, namely one that minimizes |N_G(v)|.
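
A sketch of this greedy heuristic, again building on the eliminate_vertex helper above (tie-breaking and the function name are my own illustrative choices):

    def min_clique_order(graph):
        """Greedy heuristic: repeatedly eliminate a vertex whose elimination
        forms the smallest clique, i.e. one minimizing |N_G(v)|."""
        g = {v: set(nbrs) for v, nbrs in graph.items()}
        order = []
        while g:
            v = min(g, key=lambda u: len(g[u]))  # smallest current neighborhood
            order.append(v)
            eliminate_vertex(g, v)
        return order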

Results about treewidth
Theorem(s). There are several algorithms that produce an elimination order whose width is within a small constant factor of the treewidth tw, with time complexity Poly(n) * c^tw, where c is a constant and n is the number of vertices.
Main idea. Find a minimum vertex (A,B)-cutset S (approximately, up to some factor). Make S a clique. Solve G[A,S] and G[B,S] recursively, namely, make them chordal. The union graph is chordal.
Observation. The theorem above is "practical" only if the approximation factor and the constant c are small enough, because computing posterior beliefs also requires complexity of at most Poly(n) * k^tw, where k is the size of the largest variable domain.

Elimination Sequence with Weights
Cost functions used to optimize time complexity should take the number of states of each variable into account. An elimination sequence of a weighted graph G is an ordering of the vertices of G, written X_alpha = (X_alpha(1), ..., X_alpha(n)), where alpha is a permutation on {1, ..., n}. The cost of eliminating a vertex v from a graph G_i is the product of the weights of the neighbors of v in G_i.

Elimination Sequence with Weights
The residual graph G_i is the graph obtained from G_{i-1} by eliminating vertex X_alpha(i-1), with G_1 = G. The cost of an elimination sequence X_alpha is the sum of the costs of eliminating X_alpha(i) from G_i, over all i.
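
A short sketch of this cost function under the same dict-of-sets representation; weights is an assumed dict from vertex to its number of states, and eliminate_vertex is reused from the earlier sketch:

    from math import prod

    def elimination_cost(graph, weights, order):
        """Cost of a weighted elimination sequence: for each eliminated vertex,
        the product of the weights of its current neighbors, summed over the
        whole sequence."""
        g = {v: set(nbrs) for v, nbrs in graph.items()}
        total = 0
        for v in order:
            total += prod(weights[u] for u in g[v])
            eliminate_vertex(g, v)
        return total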

Example
Consider a Bayesian network over the variables V, S, T, L, A, B, D, X (figure omitted). The weights of the vertices are their numbers of states: yellow nodes have w = 2 and blue nodes have w = 4. The undirected representation of the network, called the moral graph, is an I-map of the original network.

Example
Suppose the elimination sequence is X_alpha = (V, B, S, ...). The figure (omitted) shows the residual graphs: G_1 is the moral graph itself, G_2 contains S, T, L, A, B, D, X after eliminating V, and G_3 contains S, T, L, A, D, X after also eliminating B.

Finding a Good Elimination Order
An optimal elimination sequence is one with minimal cost; finding it is NP-complete. A simple randomized greedy procedure (sketched in code below):
Repeat until the graph becomes empty:
1. Compute the elimination cost of each variable in the current graph.
2. Choose a vertex v among the k lowest-cost vertices "at random" (flip a coin biased by their current elimination costs).
3. Eliminate vertex v from the graph (make its neighbors a clique).
Repeat this whole procedure until it has used about 5% of the estimated time needed to solve the inference problem with the best elimination order found so far (estimated by the sum of the state-space sizes of all cliques).
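
A hedged sketch of this randomized greedy procedure. The fixed restart budget below stands in for the 5%-of-runtime stopping rule, and the "flip a coin according to the costs" step is interpreted here as sampling the k cheapest candidates with probability inversely related to their cost; both are my own simplifications.

    import random
    from math import prod

    def random_greedy_order(graph, weights, k=3, restarts=20):
        """Repeatedly build an elimination order by greedy randomized choices
        and keep the cheapest order found."""
        best_order, best_cost = None, float('inf')
        for _ in range(restarts):
            g = {v: set(nbrs) for v, nbrs in graph.items()}
            order, cost = [], 0
            while g:
                # 1. Elimination cost of every variable in the current graph.
                costs = {v: prod(weights[u] for u in g[v]) for v in g}
                # 2. Pick one of the k cheapest vertices at random,
                #    biased toward lower cost.
                candidates = sorted(costs, key=costs.get)[:k]
                bias = [1.0 / (1 + costs[v]) for v in candidates]
                v = random.choices(candidates, weights=bias)[0]
                # 3. Eliminate it (its neighbors become a clique).
                cost += costs[v]
                order.append(v)
                eliminate_vertex(g, v)
            if cost < best_cost:
                best_order, best_cost = order, cost
        return best_order, best_cost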

Global Conditioning
Fixing the values of A and B (figure omitted: the network before and after conditioning) yields an I-map of P(a, b, C, D, ...) for the fixed values of A and B. Fixing values at the beginning of the summation can shrink the tables formed by variable elimination, so space is traded for time. Special case: fix a set of nodes that breaks all loops; this method is called cutset conditioning.

Cutset Conditioning
Fixing the values of A, B, and L breaks all loops, and we are left with solving a tree (figure omitted). But can we choose fewer variables that break all loops? Are some variables better choices than others? This optimization question translates to the well-known weighted vertex feedback set (WVFS) problem: choose a set of vertices of least total weight that intersects every cycle of a given weighted undirected graph G.

Optimization
The weight of a node v is defined by w(v) = log(|Dom(v)|). The problem is to minimize the sum of w(v) over all vertices v in the selected cutset. Solution idea (factor-2 approximation): remove a vertex with minimum w(v)/d(v), update each neighboring weight to w(u) - w(u)/d(u), repeat until all cycles are gone, and finally make the set minimal.
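
For illustration, here is one standard local-ratio formulation of a factor-2 approximation for WVFS, in the spirit of the Becker-Geiger and Bafna-Berman-Fujito algorithms. It is a hedged sketch of the same idea, not necessarily the exact update rule abbreviated on the slide: it repeatedly "pays" the minimum ratio w(v)/d(v) across all remaining vertices, collects vertices whose residual weight reaches zero, and finishes with a reverse-delete pass to make the set minimal.

    def wvfs_2approx(graph, weights):
        """Local-ratio style 2-approximation for weighted vertex feedback set."""
        g = {v: set(nbrs) for v, nbrs in graph.items()}
        w = dict(weights)
        chosen = []

        def drop(v):
            for u in g[v]:
                g[u].discard(v)
            del g[v]

        def clean():
            # Vertices of degree <= 1 lie on no cycle; remove them repeatedly.
            stack = [v for v in g if len(g[v]) <= 1]
            while stack:
                v = stack.pop()
                if v not in g:
                    continue
                nbrs = list(g[v])
                drop(v)
                stack.extend(u for u in nbrs if len(g[u]) <= 1)

        clean()
        while g:
            # Local-ratio step: charge gamma per unit of degree everywhere.
            gamma = min(w[v] / len(g[v]) for v in g)
            for v in g:
                w[v] -= gamma * len(g[v])
            # Vertices whose residual weight reaches zero join the cutset.
            for v in [v for v in g if w[v] <= 1e-12]:
                chosen.append(v)
                drop(v)
            clean()
        # Reverse-delete pass: make the chosen set minimal.
        for v in reversed(list(chosen)):
            if _acyclic(graph, set(chosen) - {v}):
                chosen.remove(v)
        return chosen

    def _acyclic(graph, removed):
        """True iff graph minus `removed` contains no cycle (union-find check)."""
        parent = {v: v for v in graph if v not in removed}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        seen = set()
        for v in parent:
            for u in graph[v]:
                if u in removed:
                    continue
                edge = frozenset((u, v))
                if edge in seen:
                    continue
                seen.add(edge)
                ru, rv = find(u), find(v)
                if ru == rv:
                    return False
                parent[ru] = rv
        return True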

Summary
Variable elimination: find an elimination order that minimizes the width; the optimum over all orders is the treewidth. The complexity of inference grows exponentially in tw. Treewidth is smallest in trees and largest in cliques.
Cutset conditioning: find a cutset of minimum size/weight. The complexity of inference grows exponentially in the cutset size. The cutset is smallest in trees and largest in cliques.
Example: small loops connected in a chain. Inference is exponential using the second method but polynomial using the first.

The Loop Cutset Problem
A vertex that is not a sink with respect to a loop Γ is called an allowed vertex with respect to Γ. A loop cutset of a directed graph D is a set of vertices that contains at least one allowed vertex with respect to each loop in D. A minimum loop cutset of a weighted directed graph D is one whose weight is minimum. Example (figure omitted): L is a sink with respect to the loop through I and J, and L is an allowed vertex with respect to the loop J-L-M-J.

Reduction from LC to WVFS
Given a weighted directed graph (D, w), produce the weighted undirected graph D_s as follows (figure omitted): split each vertex v of D into two vertices v_in and v_out in D_s and connect v_in to v_out; all edges entering v become undirected edges incident to v_in; all edges leaving v become undirected edges incident to v_out; set w_s(v_in) = ∞ and w_s(v_out) = w(v).
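
A minimal sketch of this construction, assuming the directed graph is a dict mapping each vertex to the set of its children and w maps each vertex to its weight (the (v, 'in') / (v, 'out') labels are an illustrative encoding):

    def split_graph(digraph, w):
        """Build the undirected graph D_s and weights w_s from (D, w)."""
        g, ws = {}, {}
        for v in digraph:
            vin, vout = (v, 'in'), (v, 'out')
            g[vin], g[vout] = {vout}, {vin}          # connect v_in -- v_out
            ws[vin], ws[vout] = float('inf'), w[v]   # w_s(v_in)=inf, w_s(v_out)=w(v)
        for v, children in digraph.items():
            for c in children:                        # edge v -> c in D becomes
                g[(v, 'out')].add((c, 'in'))          # undirected v_out -- c_in
                g[(c, 'in')].add((v, 'out'))
        return g, ws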

Algorithm LoopCutset
Algorithm LC
Input: a Bayesian network D.
Output: a loop cutset of D.
1. Construct the graph D_s with weight function w_s.
2. Find a vertex feedback set F for (D_s, w_s).
3. Output Ψ(F).
Here Ψ(X) is the set obtained by replacing each vertex v_in or v_out in X by its source vertex v in D. There is a one-to-one and onto correspondence between loops in D and cycles in D_s, so a 2-approximation for WVFS yields a 2-approximation for LC.
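
Putting the pieces together, a hedged sketch of Algorithm LC that reuses the split_graph and wvfs_2approx sketches above:

    def loop_cutset(digraph, w):
        """Algorithm LC: split vertices, approximate WVFS on D_s, map back."""
        gs, ws = split_graph(digraph, w)   # step 1: construct (D_s, w_s)
        F = wvfs_2approx(gs, ws)           # step 2: vertex feedback set for (D_s, w_s)
        return {v for (v, _half) in F}     # step 3: Psi(F), back to vertices of D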

Approximate Inference
Loopy belief propagation
Gibbs sampling
Bounded conditioning
Likelihood weighting
Variational methods

Extra Slides (if time allows)

The Noisy Or-Gate Model

Belief Update in Poly-Trees

Belief Update in Poly-Trees