Slide 1: CMSC 671, Fall 2001 — Class #21, Tuesday, November 13

Slide 2: Today's class
Conditional independence
Bayesian networks
– Network structure
– Conditional probability tables
– Conditional independence
– Inference in Bayesian networks

Slide 3: Inference in Bayesian Networks (Chapter 14, 2/e)

Slide 4: Approaches to inference
Exact inference
– Enumeration
– Belief propagation in polytrees
– Variable elimination
– Clustering / join tree algorithms
Approximate inference
– Stochastic simulation / sampling methods
– Markov chain Monte Carlo methods
– Genetic algorithms
– Neural networks
– Simulated annealing
– Mean field theory

Slide 5: Exact inference: Reminder
P(x_i) = Σ_{π_i} P(x_i | π_i) P(π_i)
Suppose we want P(D=true), and only E is observed, with value true:
P(d|e) = α Σ_{a,b,c} P(a, b, c, d, e) = α Σ_{a,b,c} P(a) P(b|a) P(c|a) P(d|b,c) P(e|c)
Computed by simple iteration, this expression involves a lot of repeated work (e.g., P(e|c) is recomputed for every assignment to A and B)
[Figure: the example network A → B, A → C; B, C → D; C → E]
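The enumeration on this slide can be sketched in a few lines of Python, using the example network A → B, A → C; B, C → D; C → E. All CPT numbers below are hypothetical, chosen only for illustration.

```python
import itertools

# Hypothetical CPTs for the example network (made-up numbers).
P_A = 0.6                                        # P(A=true)
P_B = {True: 0.7, False: 0.2}                    # P(B=true | a)
P_C = {True: 0.3, False: 0.5}                    # P(C=true | a)
P_D = {(True, True): 0.9, (True, False): 0.6,
       (False, True): 0.4, (False, False): 0.1}  # P(D=true | b, c)
P_E = {True: 0.8, False: 0.1}                    # P(E=true | c)

def bern(p_true, value):
    return p_true if value else 1.0 - p_true

def joint(a, b, c, d, e):
    # P(a, b, c, d, e) = P(a) P(b|a) P(c|a) P(d|b,c) P(e|c)
    return (bern(P_A, a) * bern(P_B[a], b) * bern(P_C[a], c) *
            bern(P_D[(b, c)], d) * bern(P_E[c], e))

def p_d_given_e(d=True, e=True):
    # Sum out the hidden variables A, B, C, then normalize over D.
    # Naive iteration re-evaluates factors such as P(e|c) for every
    # (a, b) combination -- the repetition the slide points out.
    num = sum(joint(a, b, c, d, e)
              for a, b, c in itertools.product([True, False], repeat=3))
    den = sum(joint(a, b, c, dd, e)
              for a, b, c, dd in itertools.product([True, False], repeat=4))
    return num / den
```

Variable elimination (next on the slide's list) avoids exactly this repetition by caching each factor as it is summed out.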

Slide 6: Efficient inference in polytrees
Polytree: at most one undirected path between any two nodes in the network (i.e., no loops)
Query variable X
Evidence variables E
– Causal support comes from "ancestor" variables, E+
– Evidential support comes from "descendant" variables, E−
P(X|E) = P(X|E+, E−)
       = P(E−|X, E+) P(X|E+) / P(E−|E+)
       = α P(E−|X) P(X|E+)
where P(E−|X) is the evidential support and P(X|E+) is the causal support

Slide 7: Polytree
[Figure: a polytree, showing causal support flowing to X from its ancestors and their other descendants, and evidential support flowing to X from its descendants and their other ancestors]

Slide 8: Computing causal support
P(X|E) = P(X|E+, E−) = P(E−|X, E+) P(X|E+) / P(E−|E+) = α P(E−|X) P(X|E+)
P(X|E+) = Σ_{p = π(X)} P(X|p, E+) P(p|E+) = Σ_{p = π(X)} P(X|p) Π_i P(p_i | E+_i)
– P(X|p): X is conditionally independent of E+ given X's parents, so this is just a CPT lookup
– p_i is the i-th parent; E+_i is the set of evidence variables "connected" to that parent; recurse to compute P(p_i | E+_i), unless p_i is itself in E+_i (then we're done)
– When you recurse, exclude X, so each node is visited only once

Slide 9: Computing evidential support
P(X|E) = P(X|E+, E−) = P(E−|X, E+) P(X|E+) / P(E−|E+) = α P(E−|X) P(X|E+)
Evidential support is trickier than causal support:
P(E−|X) = Π_{C ∈ Children(X)} P(E−_C | X)
        = Π_C Σ_c Σ_{p = π(C)} P(E−_C | X, c, p) P(c, p | X)
        = Π_C Σ_c Σ_p P(E−(E−_C) | X, c, p) P(E+(E−_C) | X, c, p) P(c, p | X)
        = Π_C Σ_c P(E−(E−_C) | c) Σ_p P(E+(E−_C) | p) P(c, p | X)
– Line 1: product of the evidence propagated via each child C of X
– Line 2: sum out (marginalize) over the values c of the child and p of its parents
– Line 3: separate E−_C into the evidential and causal support for the child
– Line 4: apply conditional independence to reduce the terms

Slide 10: Computing evidential support II
Π_C Σ_c P(E−(E−_C) | c) Σ_p P(E+(E−_C) | p) P(c, p | X)
  = Π_C Σ_c P(E−(E−_C) | c) Σ_p P(p | E+(E−_C)) [P(E+(E−_C)) / P(p)] P(c | X, p) P(p | X)
  = β Π_C Σ_c P(E−(E−_C) | c) Σ_p P(p | E+(E−_C)) P(c | X, p)
  = β Π_C Σ_c P(E−(E−_C) | c) Σ_p P(c | X, p) Π_i P(p_i | E+(E−_C))
– Apply Bayes' rule to the first term in the inner sum, and the product rule to P(c, p | X)
– Since P(p) = P(p | X), those terms cancel; pull out the normalizing constant β = P(E+(E−_C))
– P(c | X, p) is just a CPT lookup
– The child's parents p_i are conditionally independent given the evidence, so the last factor splits into Π_i P(p_i | E+(E−_C)); compute each factor recursively (causal support for the child's parents)
– Compute P(E−(E−_C) | c) recursively (evidential support for the child)

Slide 11: Exact inference with non-polytrees
Conditioning: find the network's smallest cutset S (a set of nodes whose removal renders the network singly connected)
– In this network, S = {A} or {B} or {C} or {D}
For each instantiation of S, compute the belief update with the polytree algorithm
Combine the results from all instantiations of S
Computationally expensive: finding the smallest cutset is NP-hard in general, and the total number of possible instantiations of S is O(2^|S|)
[Figure: the loopy example network A → B, A → C; B, C → D; C → E]
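A small sketch of the conditioning step, on the same example network with cutset S = {A} and the same hypothetical CPT numbers. A real implementation would solve each instantiated subnetwork with the polytree algorithm; here a brute-force conditional stands in for that solver, just to show how the per-instantiation results are combined.

```python
import itertools

# Hypothetical CPTs for the example network (made-up numbers).
P_D_CPT = {(True, True): 0.9, (True, False): 0.6,
           (False, True): 0.4, (False, False): 0.1}

def bern(p, v):
    return p if v else 1.0 - p

def joint(a, b, c, d, e):
    return (bern(0.6, a) * bern(0.7 if a else 0.2, b) *
            bern(0.3 if a else 0.5, c) *
            bern(P_D_CPT[(b, c)], d) * bern(0.8 if c else 0.1, e))

def conditional(query, given):
    # Stand-in for the polytree solver: brute-force P(query | given).
    num = den = 0.0
    for vals in itertools.product([True, False], repeat=5):
        s = dict(zip("abcde", vals))
        if any(s[k] != v for k, v in given.items()):
            continue
        p = joint(**s)
        den += p
        if all(s[k] == v for k, v in query.items()):
            num += p
    return num / den

def p_d_by_conditioning(e=True):
    # P(d | e) = sum over a of P(d | a, e) P(a | e): solve once per
    # instantiation of the cutset, then combine, weighted by P(a | e).
    return sum(conditional({"d": True}, {"a": a, "e": e}) *
               conditional({"a": a}, {"e": e})
               for a in (True, False))
```

With |S| cutset variables the combination loop runs over all O(2^|S|) instantiations, which is where the exponential cost on the slide comes from.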

Slide 12: Stochastic simulation
Suppose you are given values for some subset of the variables, G, and want to infer values for the unknown variables, U
Randomly generate a very large number of instantiations from the BN
– Generate instantiations for all variables: start at the root variables and work your way "forward"
Only keep those instantiations that are consistent with the given values for G
Use the frequency of values for U to get estimated probabilities
Accuracy of the results depends on the size of the sample (and asymptotically approaches the exact results)
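The procedure on this slide (rejection sampling) can be sketched as follows, again on the example network with hypothetical CPT numbers: sample every variable root-first, discard instantiations inconsistent with the given values G, and read off frequencies for the query.

```python
import random

# Hypothetical CPT for D given its parents B, C (made-up numbers).
P_D_CPT = {(True, True): 0.9, (True, False): 0.6,
           (False, True): 0.4, (False, False): 0.1}

def sample_instantiation(rng):
    # Work "forward" from the roots: each variable is sampled from its
    # CPT given its already-sampled parents.
    a = rng.random() < 0.6
    b = rng.random() < (0.7 if a else 0.2)
    c = rng.random() < (0.3 if a else 0.5)
    d = rng.random() < P_D_CPT[(b, c)]
    e = rng.random() < (0.8 if c else 0.1)
    return {"A": a, "B": b, "C": c, "D": d, "E": e}

def rejection_sample(given, query_var, n=100_000, seed=0):
    rng = random.Random(seed)
    kept = hits = 0
    for _ in range(n):
        inst = sample_instantiation(rng)
        if all(inst[v] == val for v, val in given.items()):
            kept += 1                     # consistent with G: keep it
            hits += inst[query_var]
    return hits / kept if kept else None  # estimated P(query | G)
```

Note the weakness that motivates the next slide: when the evidence G is unlikely, almost every sample is rejected and `kept` stays small.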

Slide 13: Markov chain Monte Carlo methods
So called because:
– Markov chain: each instance generated in the sample depends on the previous instance
– Monte Carlo: a statistical sampling method
Perform a random walk through the space of variable assignments, collecting statistics as you go
– Start with a random instantiation, consistent with the evidence variables
– At each step, pick some nonevidence variable and randomly resample its value, consistent with the other current assignments
Given enough samples, MCMC gives an accurate estimate of the true distribution of values
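The random walk above can be sketched as Gibbs sampling on the same example network (hypothetical CPT numbers). Evidence variables stay clamped; at each step one nonevidence variable is resampled from its conditional given the current values of all the others, which is proportional to the full joint with that variable set each way.

```python
import random

# Hypothetical CPT for D given its parents B, C (made-up numbers).
P_D_CPT = {(True, True): 0.9, (True, False): 0.6,
           (False, True): 0.4, (False, False): 0.1}

def joint(s):
    def bern(p, v): return p if v else 1.0 - p
    return (bern(0.6, s["A"]) * bern(0.7 if s["A"] else 0.2, s["B"]) *
            bern(0.3 if s["A"] else 0.5, s["C"]) *
            bern(P_D_CPT[(s["B"], s["C"])], s["D"]) *
            bern(0.8 if s["C"] else 0.1, s["E"]))

def gibbs(evidence, query_var, steps=20_000, seed=0):
    rng = random.Random(seed)
    state = {v: rng.random() < 0.5 for v in "ABCDE"}
    state.update(evidence)               # start consistent with the evidence
    hidden = [v for v in "ABCDE" if v not in evidence]
    hits = 0
    for _ in range(steps):
        v = rng.choice(hidden)           # pick some nonevidence variable
        state[v] = True;  p_t = joint(state)
        state[v] = False; p_f = joint(state)
        state[v] = rng.random() < p_t / (p_t + p_f)
        hits += state[query_var]         # collect statistics as we go
    return hits / steps
```

Unlike rejection sampling, no sample is ever thrown away, so this works even when the evidence is unlikely; the price is that successive samples are correlated (the Markov-chain part of the name).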

Slide 14: Evidential reasoning: Dempster-Shafer

