Exact Inference Continued


1 Exact Inference Continued

2 Variable Elimination in Graphical Models
A Bayesian network over the variables V, S, T, L, A, B, X, D with conditional probability tables p(v), p(s), p(t|v), p(l|s), p(b|s), p(a|t,l), p(x|a), p(d|a,b).

The joint distribution factors as
P(a,…,x) = p(v) p(t|v) p(a|t,l) p(d|a,b) p(x|a) p(b|s) p(l|s) p(s).

Renaming each conditional table as a generic factor gives
P(a,…,x) = K g1(v,t) g2(a,t,l) g3(d,a,b) g4(x,a) g5(b,s) g6(l,s),
where g1(v,t) = p(v) p(t|v), and factors over overlapping variables can be merged, e.g., g(l,b,s) = p(b|s) p(l|s).
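As a concrete illustration of what eliminating a variable from such a product of factors involves, here is a minimal Python sketch. The Factor class, the helper names, and the table representation are assumptions made for the example, not part of the original slides.

```python
from itertools import product

class Factor:
    """A factor over a tuple of named variables, each with a finite domain."""
    def __init__(self, variables, domains, values):
        self.variables = tuple(variables)   # e.g. ('V', 'T')
        self.domains = dict(domains)        # e.g. {'V': [0, 1], 'T': [0, 1]}
        self.values = dict(values)          # assignment tuple -> number

def multiply(f, g):
    """Pointwise product of two factors over the union of their variables."""
    variables = f.variables + tuple(v for v in g.variables if v not in f.variables)
    domains = {**f.domains, **g.domains}
    values = {}
    for assignment in product(*(domains[v] for v in variables)):
        a = dict(zip(variables, assignment))
        values[assignment] = (f.values[tuple(a[v] for v in f.variables)]
                              * g.values[tuple(a[v] for v in g.variables)])
    return Factor(variables, domains, values)

def sum_out(f, var):
    """Eliminate `var` from factor f by summing it out."""
    keep = tuple(v for v in f.variables if v != var)
    values = {}
    for assignment, value in f.values.items():
        key = tuple(x for v, x in zip(f.variables, assignment) if v != var)
        values[key] = values.get(key, 0.0) + value
    return Factor(keep, {v: f.domains[v] for v in keep}, values)

def eliminate(factors, var):
    """One variable-elimination step: combine all factors mentioning `var`, then sum it out."""
    touching = [f for f in factors if var in f.variables]
    rest = [f for f in factors if var not in f.variables]
    if not touching:
        return rest
    combined = touching[0]
    for f in touching[1:]:
        combined = multiply(combined, f)
    return rest + [sum_out(combined, var)]
```

Calling eliminate once per non-query variable is the whole variable-elimination algorithm; the order in which variables are removed determines the size of the intermediate factors, which is the subject of the following slides.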

3 A Graph-Theoretic View
Eliminating vertex v from an undirected graph G – the process of making N_G(v) a clique and then removing v and its incident edges from G, where N_G(v) is the set of vertices adjacent to v in G. An elimination sequence of G is an ordering of all its vertices.
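A small sketch of this vertex-elimination operation on an adjacency-set representation of the graph; the function name and data structure are illustrative choices, not from the slides.

```python
def eliminate_vertex(adj, v):
    """Make N_G(v) a clique, then remove v and its incident edges.

    `adj` maps each vertex to the set of its neighbors and is modified in place.
    Returns the neighbors v had at the moment it was eliminated.
    """
    neighbors = set(adj[v])
    # Connect every pair of neighbors of v, i.e. make N_G(v) a clique.
    for u in neighbors:
        adj[u].update(neighbors - {u})
    # Remove v and its incident edges.
    for u in neighbors:
        adj[u].discard(v)
    del adj[v]
    return neighbors
```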

4 Treewidth
The width w_s of an elimination sequence s is the size of the largest clique formed in the elimination process, minus 1; equivalently, w_s = max_v |N(v)|, where the neighborhood of v is taken in the residual graph at the moment v is eliminated. The treewidth tw of a graph G is the minimum width over all elimination sequences: tw = min_s w_s.
Examples: all trees have tw = 1, all graphs with isolated cycles have tw = 2, and a clique of size n has tw = n-1.
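A sketch, using the eliminate_vertex helper above, of how the width of a given order and (for very small graphs) the exact treewidth could be computed; both function names are illustrative.

```python
import copy
from itertools import permutations

def width_of_order(adj, order):
    """Width of an elimination sequence: the largest |N(v)| seen when each v is eliminated."""
    adj = copy.deepcopy(adj)
    width = 0
    for v in order:
        width = max(width, len(adj[v]))
        eliminate_vertex(adj, v)
    return width

def treewidth_bruteforce(adj):
    """Exact treewidth by trying every elimination order -- feasible only for tiny graphs."""
    return min(width_of_order(adj, list(order)) for order in permutations(adj))
```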

5 Another Example
A 3x3 grid of vertices x1,…,x9.
Order 1 ("corners first"): the largest clique formed has size 4, so the width is 3.
Order 2 (x2, x5, …): the largest clique formed has size 5, so the width is 4.
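A possible reconstruction of this example with the helpers above, assuming the graph is the 3x3 grid with rows x1-x3, x4-x6, x7-x9; the exact layout and the full second order are not recoverable from the slide, so only a "corners first" order is checked here.

```python
# Build the assumed 3x3 grid (vertices 1..9 standing in for x1..x9).
grid = {i: set() for i in range(1, 10)}
for r in range(3):
    for c in range(3):
        v = 3 * r + c + 1
        if c < 2:
            grid[v].add(v + 1); grid[v + 1].add(v)   # horizontal edge
        if r < 2:
            grid[v].add(v + 3); grid[v + 3].add(v)   # vertical edge

corners_first = [1, 3, 7, 9, 2, 4, 6, 8, 5]          # one "corners first" order
print(width_of_order(grid, corners_first))            # prints 3 for this reconstruction
```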

6 Results about treewidth
Theorem. Finding an elimination sequence whose width equals the treewidth, or even just deciding whether the treewidth equals a given constant c, is NP-hard. Simple greedy heuristic: at each step, eliminate a vertex v that produces the smallest clique, namely one that minimizes |N_G(v)|.
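A sketch of this greedy heuristic (choosing, at every step, a vertex with the fewest neighbors in the current graph), reusing eliminate_vertex; the function name is illustrative.

```python
import copy

def greedy_min_neighbors_order(adj):
    """Repeatedly eliminate a vertex that currently forms the smallest clique."""
    adj = copy.deepcopy(adj)
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # vertex minimizing |N_G(v)|
        order.append(v)
        eliminate_vertex(adj, v)
    return order
```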

7 Finding Good Elimination Order
Repeat until the graph becomes empty:
1. Compute |N_G(v)| for each variable v.
2. Choose a vertex v at random among the k vertices with lowest |N_G(v)| (flip a coin weighted according to |N_G(v)|).
3. Eliminate vertex v from the graph (make its neighbors a clique).
Repeat these steps, restarting from the full graph, until about 5% of the estimated time needed to solve the inference problem with the best elimination order found so far has been used (the time is estimated by the sum of the state-space sizes of all cliques).
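One possible reading of this randomized-restart procedure, as a Python sketch; the weighting of the random choice, the conversion from clique state-space size to time, and all function and parameter names are interpretations, not a definitive implementation.

```python
import copy
import random
import time

def random_greedy_order(adj, k=3):
    """Build one elimination order, choosing randomly among the k best vertices at each step."""
    adj = copy.deepcopy(adj)
    order, clique_sizes = [], []
    while adj:
        candidates = sorted(adj, key=lambda u: len(adj[u]))[:k]
        # Bias the choice toward vertices with fewer neighbors.
        weights = [1.0 / (1 + len(adj[u])) for u in candidates]
        v = random.choices(candidates, weights=weights)[0]
        clique_sizes.append(len(adj[v]) + 1)       # clique formed: v together with its neighbors
        order.append(v)
        eliminate_vertex(adj, v)
    return order, clique_sizes

def search_orders(adj, domain_size=2, budget_fraction=0.05, seconds_per_state=1e-7):
    """Restart the randomized construction until roughly `budget_fraction` of the
    estimated inference time for the best order found so far has been spent searching.
    The estimate (sum of clique state-space sizes times a per-state cost) is a rough
    stand-in for the estimate mentioned on the slide."""
    best_order, best_cost = None, float("inf")
    start = time.time()
    while best_order is None or time.time() - start < budget_fraction * best_cost * seconds_per_state:
        order, clique_sizes = random_greedy_order(adj)
        cost = sum(domain_size ** c for c in clique_sizes)   # sum of clique state-space sizes
        if cost < best_cost:
            best_order, best_cost = order, cost
    return best_order
```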

8 Results about treewidth
Sequence of theorems. There are several algorithms that produce an elimination order whose width is within a small constant factor α of the treewidth tw, with time complexity Poly(n)·c^tw, where c is a constant and n is the number of vertices. Main idea: find a minimum vertex (A,B)-cutset S (up to some approximation factor), make S a clique, and solve G[A,S] and G[B,S] recursively. Observation: this is “practical” only if the constants α and c are low enough, because computing posterior beliefs already requires time of at most Poly(n)·k^tw, where k is the size of the largest variable domain.

9 Elimination Sequence with Weights
There is a need for cost functions for optimizing time complexity that take the number of states of each variable into account. An elimination sequence of a weighted graph G is an ordering of its vertices, written X_α = (X_α(1), …, X_α(n)), where α is a permutation on {1,…,n}. The cost of eliminating vertex v from a graph G_i is the product of the weights of the vertices adjacent to v in G_i.

10 Elimination Sequence with Weights
The residual graph G_i is the graph obtained from G_{i-1} by eliminating vertex X_α(i-1) (with G_1 ≡ G). The cost of an elimination sequence X_α is the sum of the costs of eliminating X_α(i) from G_i, over all i.
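A small sketch of this cost on an adjacency-set graph with one weight (number of states) per vertex, reusing eliminate_vertex; each step's cost is the product of the current neighbors' weights, as defined above, and the function name is illustrative.

```python
import copy
from math import prod

def elimination_cost(adj, weights, order):
    """Sum over i of the cost of eliminating order[i] from the residual graph G_i,
    where each step's cost is the product of the weights of that vertex's neighbors."""
    adj = copy.deepcopy(adj)
    total = 0
    for v in order:
        total += prod(weights[u] for u in adj[v])
        eliminate_vertex(adj, v)
    return total
```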

11 Example
Original Bayes network over V, S, T, L, A, B, D, X. Weights of vertices (number of states): yellow nodes w = 2, blue nodes w = 4. The undirected representation, called the moral graph, is an I-map of the original network.

12 Example
Suppose the elimination sequence is X_α = (V, B, S, …). Then G_1 is the moral graph over V, S, T, L, A, B, D, X; eliminating V yields G_2 over S, T, L, A, B, D, X; eliminating B yields G_3 over S, T, L, A, D, X.
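A possible reconstruction of this example with the cost function above. The moral-graph edges below are derived from the factorization on slide 2; the vertex weights are placeholders, since the yellow/blue assignment cannot be recovered from the text.

```python
# Assumed moral graph of the network from slide 2 (parents of A and of D are married).
moral = {v: set() for v in "VSTLABDX"}
for u, w in [("V", "T"), ("S", "L"), ("S", "B"), ("T", "A"), ("L", "A"), ("T", "L"),
             ("A", "X"), ("A", "D"), ("B", "D"), ("A", "B")]:
    moral[u].add(w)
    moral[w].add(u)

weights = {v: 2 for v in moral}   # placeholder weights; the slide uses 2 or 4 per vertex
print(elimination_cost(moral, weights, ["V", "B", "S"]))   # cost of the first three steps
```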

13 Finding Good Elimination Order
Optimal elimination sequence: one with minimal cost. Finding it is NP-complete.
Repeat until the graph becomes empty:
1. Compute the elimination cost of each variable in the current graph.
2. Choose a vertex v at random among the k vertices with lowest cost (flip a coin weighted according to their current elimination costs).
3. Eliminate vertex v from the graph (make its neighbors a clique).
Repeat these steps, restarting from the full graph, until about 5% of the estimated time needed to solve the inference problem with the best elimination order found so far has been used (the time is estimated by the sum of the state-space sizes of all cliques).
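The same randomized loop as in the earlier sketch, with the neighbor count replaced by the weighted elimination cost as the score; again an illustrative sketch.

```python
import copy
import random
from math import prod

def random_greedy_weighted_order(adj, weights, k=3):
    """Like random_greedy_order, but scores each vertex by its current elimination cost."""
    adj = copy.deepcopy(adj)
    order = []
    while adj:
        cost = lambda u: prod(weights[x] for x in adj[u])
        candidates = sorted(adj, key=cost)[:k]
        v = random.choices(candidates, weights=[1.0 / (1 + cost(u)) for u in candidates])[0]
        order.append(v)
        eliminate_vertex(adj, v)
    return order
```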

14 Global Conditioning
Fixing the values of A and B (the slide's figure shows the network over A, B, C, D, E, I, J, K, L, M before and after A and B are fixed). This transformation yields an I-map of Prob(a,b,C,D,…) for fixed values of A and B. Fixing values at the start of the summation can shrink the tables formed by variable elimination; this way space is traded for time. Special case: choose to fix a set of nodes that breaks all loops. This method is called cutset conditioning.
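A minimal sketch of this conditioning idea on top of the Factor helpers from the earlier sketch: enumerate assignments to the conditioning set, restrict every factor accordingly, eliminate the remaining variables, and sum the results. All names are illustrative, and the elimination order is assumed to cover exactly the variables outside the conditioning set.

```python
from itertools import product
from math import prod

def restrict(f, var, value):
    """Fix `var = value` inside factor f, dropping the variable from its scope."""
    if var not in f.variables:
        return f
    keep = tuple(v for v in f.variables if v != var)
    idx = f.variables.index(var)
    values = {tuple(x for j, x in enumerate(a) if j != idx): p
              for a, p in f.values.items() if a[idx] == value}
    return Factor(keep, {v: f.domains[v] for v in keep}, values)

def conditioned_inference(factors, domains, cutset, elim_order):
    """Sum over all assignments to `cutset`; each term is handled by plain variable
    elimination over the remaining variables. Returns the total probability mass
    (evidence can be incorporated by restricting the factors beforehand)."""
    total = 0.0
    for assignment in product(*(domains[v] for v in cutset)):
        fs = list(factors)
        for var, value in zip(cutset, assignment):
            fs = [restrict(f, var, value) for f in fs]
        for var in elim_order:                    # variables not in the cutset
            fs = eliminate(fs, var)
        total += prod(f.values[()] for f in fs)   # every remaining factor is a scalar
    return total
```

Memory use is governed by the largest factor built inside the inner elimination, while the outer loop multiplies running time by the number of cutset assignments; that is the space-for-time trade mentioned above.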

15 Cutset Conditioning
Fixing the values of A, B, and L breaks all loops; we are left with solving a tree. But can we choose fewer variables to break all loops? Are some variables better choices than others? (The figure shows the network over A, B, C, D, E, I, J, K, L, M.) This optimization question translates to the well-known weighted vertex feedback set (WVFS) problem: choose a set of variables of least weight that lies on every cycle of a given weighted undirected graph G.

16 Optimization
The weight w of a node v is defined by w(v) = log(|Dom(v)|). The problem is to minimize the sum of w(v) over all v in the selected cutset.
Solution idea (factor-2 approximation): remove a vertex with minimum w(v)/d(v); update the neighboring weights by w(u) - w(u)/d(u); repeat until all cycles are gone; finally, make the set minimal.
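A sketch of this cycle-breaking loop, reading the slide's description literally; the weight update w(u) ← w(u) - w(u)/d(u), the handling of degree-1 vertices, and the omission of the final "make the set minimal" pass are interpretations and may differ from the intended factor-2 algorithm.

```python
import copy

def has_cycle(adj):
    """True if the undirected graph (vertex -> neighbor set) contains a cycle."""
    seen = set()
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        stack = [(start, None)]
        while stack:
            v, parent = stack.pop()
            for u in adj[v]:
                if u == parent:
                    continue
                if u in seen:
                    return True
                seen.add(u)
                stack.append((u, v))
    return False

def approx_cutset(adj, w):
    """Greedy loop-cutset heuristic following the slide's description; the
    'make the set minimal' post-processing step is omitted."""
    adj, w = copy.deepcopy(adj), dict(w)
    cutset = []
    while has_cycle(adj):
        # Pick a vertex of degree >= 2 minimizing w(v)/d(v).
        v = min((u for u in adj if len(adj[u]) >= 2), key=lambda u: w[u] / len(adj[u]))
        neighbors = adj.pop(v)
        for u in neighbors:
            w[u] -= w[u] / len(adj[u])   # literal reading of the slide's weight update
            adj[u].discard(v)
        cutset.append(v)
    return cutset
```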

17 Summary
Variable elimination: find an order that minimizes the width; the optimal width is called the treewidth. The complexity of inference grows exponentially in tw. Treewidth is smallest in trees and largest in cliques.
Cutset conditioning: find a cutset of minimum size/weight. The complexity of inference grows exponentially in the cutset size. The cutset is smallest in trees and largest in cliques.
Example: small loops connected in a chain. Inference is exponential using the second method but polynomial using the first.

18 Extra Slides (if time allows)

19 Local Conditional Table: Noisy Or-Gate Model

20 Belief Update in Poly-Trees

21 Belief Update in Poly-Trees

22 Approximate Inference
Loopy belief propagation, Gibbs sampling, bounded conditioning, likelihood weighting, variational methods.

23 Loopy Belief Propagation in DAGs (iterate the messages as if the graph were a tree)
WRONG

