
Reasoning Under Uncertainty: Variable Elimination for Bayes Nets Jim Little Uncertainty 6 Nov 12 2014 Textbook §6.4, 6.4.1.




1 Reasoning Under Uncertainty: Variable Elimination for Bayes Nets Jim Little Uncertainty 6 Nov 12 2014 Textbook §6.4, 6.4.1

2 Announcements (1)
Assignment 3 due Friday.
Final exam: Friday, Dec 5, 3:30pm-6:00pm in PHRM 1101. Same general format as the midterm (~60% short questions); the list of short questions will be on Connect, and a practice final will be provided. More emphasis on material after the midterm.
How to study? Practice exercises, assignments, short questions, lecture notes, the text and the problems in it. Use TA and instructor office hours (extra office hours TBA if needed).
Review sessions: last class, plus more TBA if needed. Submit topics you want reviewed on Piazza.

3 Realistic BNet: Liver Diagnosis (Source: Onisko et al., 1999)
~60 nodes, max 4 parents per node. Need ~60 x 2^4 = 15 x 2^6 probabilities instead of 2^60 probabilities for the JPD.
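The savings can be checked in a couple of lines (a sketch; the node and parent counts are the approximate figures from the slide):

```python
# Approximate parameter counts for the liver-diagnosis network:
# ~60 binary nodes, each with at most 4 parents, so each CPT has at most
# 2**4 = 16 rows, versus 2**60 entries for the full joint distribution.
nodes, max_parents = 60, 4
bn_params = nodes * 2 ** max_parents   # 60 * 16 = 960 = 15 * 2**6
jpd_params = 2 ** nodes                # 2**60: astronomically larger
print(bn_params, jpd_params)
```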

4 Inference in Bayesian Networks
Given: a Bayesian network BN, observations of a subset of its variables E (E=e), and a subset of its variables Y that is queried.
Compute: the conditional probability P(Y|E=e).
We denote observed variables as shaded. (Example diagrams: Understood Material, Assignment Grade, Exam Grade; Fire, Alarm, Smoking At Sensor.)

5 Bayesian Networks: Types of Inference
Diagnostic: people are leaving, P(L=t)=1. Query: P(F|L=t)=?
Predictive: fire happens, P(F=t)=1. Query: P(L|F=t)=?
Intercausal: alarm goes off, P(A=t)=1, and a person smokes next to the sensor, S=t. Query: P(F|A=t,S=t)=?
Mixed: people are leaving, P(L=t)=1, and there is no fire, F=f. Query: P(A|F=f,L=t)=?
(Each case uses the Fire / Alarm / Leaving network, plus Smoking at Sensor for the intercausal case.)
We will use the same reasoning procedure for all of these types.

6 Example

7 Suppose that, after observing people leaving, we then observe smoke.

8 Bayesian Networks - AISpace
Try similar queries with the Belief Networks applet in AISpace. Load the Fire Alarm example (ex. 6.10 in the textbook). Compare the probability of each query node before and after the new evidence is observed; try first to predict how they should change using your understanding of the domain.
Make sure you explore and understand the various other example networks in the textbook and AISpace: the Electrical Circuit example (textbook ex. 6.11); the patient's wheezing and coughing example (ex. 6.14), not in the applet, but you can easily insert it; and several other examples in AISpace.

9 How to do Inference in Bayesian Networks
Given: a Bayesian network BN, observations of a subset of its variables E (E=e), and a subset of its variables Y that is queried.
Compute: the conditional probability P(Y|E=e).
Remember? We can already do this: compute any entry of the JPD given the CPTs in the network, then do Inference by Enumeration. BUT that is extremely inefficient and does not scale.
Variable Elimination (VE) is an algorithm to perform inference in Bayesian networks more efficiently. It manipulates conditional probabilities in the form of "factors", so first we have to introduce factors and the operations we can perform on them.

10 Inference
Notation: Y is the subset of variables that is queried; E is the subset of variables that are observed, E=e; Z1,...,Zk are the remaining variables in the JPD.
By the definition of conditional probability:
P(Y=yi | E=e) = P(Y=yi, E=e) / P(E=e) = ( Σ_{Z1,...,Zk} P(Y=yi, E=e, Z1,...,Zk) ) / ( Σ_y P(Y=y, E=e) )
We need to compute the numerator for each value yi of Y, marginalizing over all the variables Z1,...,Zk not involved in the query. The denominator is obtained by marginalizing over Y; it has the same value for every P(Y=yi), acting as a normalization constant that ensures the results sum to 1.
All we need to compute is the numerator: the joint probability of the query variable(s) and the evidence! Variable Elimination is an algorithm that efficiently performs this operation by casting it as operations between factors, introduced next.

11 Lecture Overview
Recap lecture 30: Defining Conditional Probabilities in the network; Considerations on Network Structure.
Inference: Factors; Variable Elimination.

12 Factors
A factor is a function from a tuple of random variables to the real numbers R. We write a factor on variables X1,...,Xj as f(X1,...,Xj).
A factor denotes one or more (possibly partial) distributions over the given tuple of variables, e.g.:
P(X1, X2) is a factor f(X1, X2): a distribution.
P(Z | X,Y) is a factor f(Z,X,Y): a set of distributions, one for each combination of values for X and Y.
P(Z=f | X,Y) is a factor f(X,Y): a set of partial distributions.
Note: factors do not have to sum to one. Example factor f(X,Y,Z):
X Y Z | val
t t t | 0.1
t t f | 0.9
t f t | 0.2
t f f | 0.8
f t t | 0.4
f t f | 0.6
f f t | 0.3
f f f | 0.7
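In code, a factor can be represented as a table mapping value assignments to numbers. A minimal sketch (the tuple-keyed dict representation is an illustrative choice, not something from the slides):

```python
# Factor f(X, Y, Z) from the slide: keys are (x, y, z) value tuples.
f_vars = ('X', 'Y', 'Z')
f = {
    ('t', 't', 't'): 0.1, ('t', 't', 'f'): 0.9,
    ('t', 'f', 't'): 0.2, ('t', 'f', 'f'): 0.8,
    ('f', 't', 't'): 0.4, ('f', 't', 'f'): 0.6,
    ('f', 'f', 't'): 0.3, ('f', 'f', 'f'): 0.7,
}
# This table happens to encode P(Z | X, Y): each (X, Y) slice sums to 1,
# but the factor as a whole does not sum to 1.
total = sum(f.values())   # one unit of probability per (X, Y) combination
```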

13 Operation 1: Assigning a variable
We can make new factors out of an existing factor. Our first operation: we can assign some or all of the variables of a factor.
f(X,Y,Z):
X Y Z | val
t t t | 0.1
t t f | 0.9
t f t | 0.2
t f f | 0.8
f t t | 0.4
f t f | 0.6
f f t | 0.3
f f f | 0.7
What is the result of assigning X=t? f(X=t,Y,Z) is a factor of Y and Z:
Y Z | val
t t | 0.1
t f | 0.9
f t | 0.2
f f | 0.8

14 More examples of assignment
Starting from the same f(X,Y,Z):
f(X=t,Y,Z), a factor of Y and Z:
Y Z | val
t t | 0.1
t f | 0.9
f t | 0.2
f f | 0.8
f(X=t,Y,Z=f), a factor of Y:
Y | val
t | 0.9
f | 0.8
f(X=t,Y=f,Z=f), a number: 0.8
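Assignment is easy to implement on a tuple-keyed table: keep the rows that match the assigned value and drop that variable's column. A sketch (the helper name `assign` and the dict representation are mine):

```python
def assign(f_vars, f, var, value):
    """Restrict factor f to var = value, removing var from its domain."""
    i = f_vars.index(var)
    new_vars = f_vars[:i] + f_vars[i + 1:]
    new_f = {k[:i] + k[i + 1:]: p for k, p in f.items() if k[i] == value}
    return new_vars, new_f

f_vars = ('X', 'Y', 'Z')
f = {('t', 't', 't'): 0.1, ('t', 't', 'f'): 0.9,
     ('t', 'f', 't'): 0.2, ('t', 'f', 'f'): 0.8,
     ('f', 't', 't'): 0.4, ('f', 't', 'f'): 0.6,
     ('f', 'f', 't'): 0.3, ('f', 'f', 'f'): 0.7}

g_vars, g = assign(f_vars, f, 'X', 't')   # f(X=t, Y, Z): a factor of Y, Z
h_vars, h = assign(g_vars, g, 'Z', 'f')   # f(X=t, Y, Z=f): a factor of Y
```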

15 Recap
If we assign variable A=a in factor f(A,B), what is the correct form for the resulting factor?

16 Recap
If we assign variable A=a in factor f(A,B), what is the correct form for the resulting factor? f(B): when we assign variable A we remove it from the factor's domain.

17 Operation 2: Summing out a variable
Our second operation on factors: we can marginalize out (or sum out) a variable. Exactly as before; the only difference is that factors don't have to sum to 1.
Marginalizing out a variable X from a factor f(X1,...,Xn) yields a new factor defined on {X1,...,Xn} \ {X}.
f3(B,A,C):
B A C | val
t t t | 0.03
t t f | 0.07
f t t | 0.54
f t f | 0.36
t f t | 0.06
t f f | 0.14
f f t | 0.48
f f f | 0.32
(Σ_B f3)(A,C):
A C | val
t t | 0.57
t f | 0.43
f t | 0.54
f f | 0.46
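Summing out is the same table walk, accumulating rows instead of filtering them. A sketch using the slide's f3(B,A,C) (the helper name `sum_out` and the dict representation are mine):

```python
def sum_out(f_vars, f, var):
    """Marginalize var out of factor f: add up rows that agree elsewhere."""
    i = f_vars.index(var)
    new_vars = f_vars[:i] + f_vars[i + 1:]
    new_f = {}
    for k, p in f.items():
        nk = k[:i] + k[i + 1:]
        new_f[nk] = new_f.get(nk, 0.0) + p
    return new_vars, new_f

f3_vars = ('B', 'A', 'C')
f3 = {('t', 't', 't'): 0.03, ('t', 't', 'f'): 0.07,
      ('f', 't', 't'): 0.54, ('f', 't', 'f'): 0.36,
      ('t', 'f', 't'): 0.06, ('t', 'f', 'f'): 0.14,
      ('f', 'f', 't'): 0.48, ('f', 'f', 'f'): 0.32}

g_vars, g = sum_out(f3_vars, f3, 'B')   # (sum_B f3)(A, C)
```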


19 Recap
If we assign variable A=a in factor f(A,B), the resulting factor is f(B): assigning A removes it from the factor's domain.
If we marginalize variable A out from factor f(A,B), what is the correct form for the resulting factor?

20 Recap
If we assign variable A=a in factor f(A,B), the resulting factor is f(B): assigning A removes it from the factor's domain.
If we marginalize variable A out from factor f(A,B), the resulting factor is also f(B): when we marginalize out variable A we remove it from the factor's domain.

21 Operation 3: multiplying factors
f1(A,B):
A B | val
t t | 0.1
t f | 0.9
f t | 0.2
f f | 0.8
f2(B,C):
B C | val
t t | 0.3
t f | 0.7
f t | 0.6
f f | 0.4
f1(A,B) x f2(B,C):
A B C | val
t t t | 0.1 x 0.3 = 0.03
t t f | 0.1 x 0.7
t f t | 0.9 x 0.6
t f f | …
f t t |
f t f |
f f t |
f f f |

22 Operation 3: multiplying factors
The product of factors f1(A,B) and f2(B,C), where B is the variable in common, is the factor (f1 x f2)(A,B,C) defined by:
(f1 x f2)(A=a, B=b, C=c) = f1(A=a, B=b) x f2(B=b, C=c)
Note: A, B and C can be sets of variables; the domain of f1 x f2 is the union of the domains of f1 and f2.
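A sketch of the product on the same dict representation, reproducing the f1(A,B) x f2(B,C) example from the previous slide (the helper name `multiply` and the fixed binary domain are assumptions of this sketch):

```python
from itertools import product

DOMAIN = ('t', 'f')

def multiply(v1, f1, v2, f2):
    """Product of two factors; the result's variables are the union of the
    multiplicands' variables, with shared variables forced to agree."""
    vs = v1 + tuple(x for x in v2 if x not in v1)
    out = {}
    for vals in product(DOMAIN, repeat=len(vs)):
        a = dict(zip(vs, vals))
        out[vals] = f1[tuple(a[x] for x in v1)] * f2[tuple(a[x] for x in v2)]
    return vs, out

f1 = {('t', 't'): 0.1, ('t', 'f'): 0.9, ('f', 't'): 0.2, ('f', 'f'): 0.8}  # f1(A,B)
f2 = {('t', 't'): 0.3, ('t', 'f'): 0.7, ('f', 't'): 0.6, ('f', 'f'): 0.4}  # f2(B,C)

p_vars, p = multiply(('A', 'B'), f1, ('B', 'C'), f2)   # (f1 x f2)(A,B,C)
```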

23 Recap
If we assign variable A=a in factor f(A,B), the resulting factor is f(B): assigning A removes it from the factor's domain.
If we marginalize variable A out from factor f(A,B), the resulting factor is also f(B): marginalizing out A removes it from the factor's domain.
If we multiply factors f4(X,Y) and f6(Z,Y), what is the correct form for the resulting factor? f(X,Y,Z): when multiplying factors, the resulting factor's domain is the union of the multiplicands' domains.

24 Recap
If we assign variable A=a in factor f7(A,B), what is the correct form for the resulting factor? f(B).
If we marginalize variable A out from factor f7(A,B), what is the correct form for the resulting factor? f(B).
If we multiply factors f4(X,Y) and f6(Z,Y), what is the correct form for the resulting factor? f(X,Y,Z).

25 Recap: Factors and Operations on Them
Assigning A=a in f7(A,B) gives f(B); marginalizing A out of f7(A,B) gives f(B); multiplying f4(X,Y) and f6(Z,Y) gives f(X,Y,Z), whose domain is the union of the multiplicands' domains.
What is the correct form for Σ_B f5(A,B) x f6(B,C)? As usual, product before sum: Σ_B ( f5(A,B) x f6(B,C) ). The multiplication yields a factor f7(A,B,C); marginalizing out B then yields f8(A,C).

26 Lecture Overview
Recap lecture 30: Defining Conditional Probabilities in the network; Considerations on Network Structure.
Inference: Factors; Variable Elimination.

27 Remember our goal
Notation: Y is the subset of variables that is queried; E is the subset of variables that are observed, E=e; Z1,...,Zk are the remaining variables in the JPD.
By the definition of conditional probability, P(Y=yi | E=e) = P(Y=yi, E=e) / P(E=e). We need to compute the numerator for each value yi of Y, marginalizing over all the variables Z1,...,Zk not involved in the query. The denominator is obtained by marginalizing over Y; it has the same value for every P(Y=yi), a normalization constant.
All we need to compute is the numerator: the joint probability of the query variable(s) and the evidence! Variable Elimination is an algorithm that efficiently performs this operation by casting it as operations between factors.

28 Variable Elimination: Intro (1)
We can express the joint probability as a factor f(Y, E1,...,Ej, Z1,...,Zk), where E1,...,Ej are the observed variables and Z1,...,Zk are the other variables not involved in the query.
We can compute P(Y, E1=e1, ..., Ej=ej) by assigning E1=e1, ..., Ej=ej and then marginalizing out the variables Z1, ..., Zk, one at a time; the order in which we do this is called our elimination ordering.
Are we done? No, this still represents the whole JPD (as a single factor)! We need to exploit the compactness of Bayesian networks.

29 Variable Elimination: Intro (2)
Recall that the JPD of a Bayesian network factorizes as P(X1,...,Xn) = Π_i P(Xi | parents(Xi)). Thus we can express the joint factor as a product of factors, one for each conditional probability.

30 Computing sums of products
Inference in Bayesian networks thus reduces to computing sums of products of factors, Σ_{Z1} ... Σ_{Zk} f1 x ... x fn.
To compute efficiently, factor out those terms that don't involve Zk, e.g., if only f2 mentions Zk: Σ_{Zk} (f1 x f2) = f1 x ( Σ_{Zk} f2 ).

31 Summing out a variable efficiently
To sum out a variable Z from a product f1 x ... x fk of factors:
Partition the factors into those that do not contain Z, say f1,...,fi, and those that contain Z, say f(i+1),...,fk.
Rewrite: Σ_Z f1 x ... x fk = f1 x ... x fi x ( Σ_Z f(i+1) x ... x fk ) = f1 x ... x fi x f', where the new factor f' is obtained by multiplying f(i+1),...,fk and then summing out Z.
Now we have summed out Z.
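The partition step above can be sketched directly; only the factors that mention Z are multiplied before Z is summed out, which is the whole source of the efficiency. (The helper name is mine, and the factor tables are left empty since only the variable lists matter for partitioning.)

```python
def partition_by_var(factors, z):
    """Split (vars, table) factors into those without z and those with z."""
    without_z = [(vs, t) for vs, t in factors if z not in vs]
    with_z = [(vs, t) for vs, t in factors if z in vs]
    return without_z, with_z

# Variable lists taken from the running example's CPT factors.
factors = [(('A',), {}), (('B', 'A'), {}), (('C',), {}),
           (('D', 'B', 'C'), {}), (('E', 'C'), {})]
without_c, with_c = partition_by_var(factors, 'C')
```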

32 Simplify Sum of Products: General Case
First pull out the factors that do not contain Z1 and sum Z1 out of those that do; then pull out the factors that do not contain Z2 and sum Z2 out of those that do; and so on. Continue given a predefined simplification ordering of the variables: the variable elimination ordering.

33 Analogy with computing sums of products
This simplification is similar to what you can do in basic algebra with multiplication and addition. Example: it takes 14 multiplications or additions to evaluate the expression ab + ac + ad + aeh + afh + agh. How can this expression be evaluated efficiently? Factor out the a and then the h, giving a(b + c + d + h(e + f + g)). This takes only 7 operations.
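A quick numeric check that the factored form computes the same value (the random inputs are arbitrary):

```python
import random

random.seed(0)
a, b, c, d, e, f, g, h = (random.random() for _ in range(8))

expanded = a*b + a*c + a*d + a*e*h + a*f*h + a*g*h   # 9 multiplications + 5 additions
factored = a * (b + c + d + h * (e + f + g))         # 2 multiplications + 5 additions
```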

34 The variable elimination algorithm
To compute P(Y=yi | E=e):
1. Construct a factor for each conditional probability.
2. For each factor, assign the observed variables E to their observed values.
3. Given an elimination ordering, decompose the sum of products.
4. Sum out all variables Zi not involved in the query.
5. Multiply the remaining factors (which only involve Y).
6. Normalize by dividing the resulting factor f(Y) by Σ_y f(y).
See the algorithm VE_BN in the P&M text, Section 6.4.1, Figure 6.8, p. 254.
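The six steps can be sketched end to end. This is not the textbook's VE_BN pseudocode, just a compact illustration on a made-up three-node chain Fire → Alarm → Leaving with invented CPT numbers; `assign`, `sum_out`, and `multiply` are the factor operations from the earlier slides.

```python
from itertools import product

DOM = ('t', 'f')

def assign(vs, f, var, val):
    i = vs.index(var)
    return vs[:i] + vs[i+1:], {k[:i] + k[i+1:]: p for k, p in f.items() if k[i] == val}

def sum_out(vs, f, var):
    i = vs.index(var)
    out = {}
    for k, p in f.items():
        nk = k[:i] + k[i+1:]
        out[nk] = out.get(nk, 0.0) + p
    return vs[:i] + vs[i+1:], out

def multiply(v1, f1, v2, f2):
    vs = v1 + tuple(x for x in v2 if x not in v1)
    out = {}
    for vals in product(DOM, repeat=len(vs)):
        a = dict(zip(vs, vals))
        out[vals] = f1[tuple(a[x] for x in v1)] * f2[tuple(a[x] for x in v2)]
    return vs, out

def ve(factors, evidence, order):
    """Steps 2-6: condition on evidence, eliminate in the given order,
    multiply what remains, normalize."""
    fs = []
    for vs, f in factors:                       # step 2: assign observations
        for var, val in evidence.items():
            if var in vs:
                vs, f = assign(vs, f, var, val)
        fs.append((vs, f))
    for z in order:                             # steps 3-4: sum out, one at a time
        with_z = [(vs, f) for vs, f in fs if z in vs]
        fs = [(vs, f) for vs, f in fs if z not in vs]
        vs, f = with_z[0]
        for vs2, f2 in with_z[1:]:
            vs, f = multiply(vs, f, vs2, f2)
        fs.append(sum_out(vs, f, z))
    vs, f = fs[0]                               # step 5: multiply remaining factors
    for vs2, f2 in fs[1:]:
        vs, f = multiply(vs, f, vs2, f2)
    total = sum(f.values())                     # step 6: normalize
    return vs, {k: p / total for k, p in f.items()}

# Step 1: one factor per CPT (numbers invented for illustration).
factors = [
    (('F',), {('t',): 0.01, ('f',): 0.99}),                   # P(F)
    (('A', 'F'), {('t', 't'): 0.90, ('f', 't'): 0.10,
                  ('t', 'f'): 0.05, ('f', 'f'): 0.95}),       # P(A|F)
    (('L', 'A'), {('t', 't'): 0.88, ('f', 't'): 0.12,
                  ('t', 'f'): 0.001, ('f', 'f'): 0.999}),     # P(L|A)
]
q_vars, posterior = ve(factors, {'L': 't'}, order=['A'])      # P(F | L=t)
```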

35 Lecture Overview
Recap lecture 32: Factors and their operations.
Variable Elimination (VE) Algorithm; VE example.

36 Variable elimination example
Compute P(G | H=h1).
P(G,H) = Σ_{A,B,C,D,E,F,I} P(A,B,C,D,E,F,G,H,I)
       = Σ_{A,B,C,D,E,F,I} P(A) P(B|A) P(C) P(D|B,C) P(E|C) P(F|D) P(G|F,E) P(H|G) P(I|G)

37 Step 1: Construct a factor for each conditional probability
Compute P(G | H=h1).
P(G,H) = Σ_{A,B,C,D,E,F,I} P(A) P(B|A) P(C) P(D|B,C) P(E|C) P(F|D) P(G|F,E) P(H|G) P(I|G)
P(G,H) = Σ_{A,B,C,D,E,F,I} f0(A) f1(B,A) f2(C) f3(D,B,C) f4(E,C) f5(F,D) f6(G,F,E) f7(H,G) f8(I,G)

38 Step 2: Assign observed variables their observed values
Compute P(G | H=h1). Observe H=h1.
Previous state:
P(G,H) = Σ_{A,B,C,D,E,F,I} f0(A) f1(B,A) f2(C) f3(D,B,C) f4(E,C) f5(F,D) f6(G,F,E) f7(H,G) f8(I,G)
Observing H=h1 turns f7(H,G) into a factor of G alone: f9(G) = f7(H=h1,G).
P(G,H=h1) = Σ_{A,B,C,D,E,F,I} f0(A) f1(B,A) f2(C) f3(D,B,C) f4(E,C) f5(F,D) f6(G,F,E) f9(G) f8(I,G)

39 Step 3: Decompose sum of products
Compute P(G | H=h1). Previous state:
P(G,H=h1) = Σ_{A,B,C,D,E,F,I} f0(A) f1(B,A) f2(C) f3(D,B,C) f4(E,C) f5(F,D) f6(G,F,E) f9(G) f8(I,G)
Elimination ordering A, C, E, I, B, D, F:
P(G,H=h1) = f9(G) Σ_F Σ_D f5(F,D) Σ_B Σ_I f8(I,G) Σ_E f6(G,F,E) Σ_C f2(C) f3(D,B,C) f4(E,C) Σ_A f0(A) f1(B,A)

40 Step 4: Sum out non-query variables (one at a time)
Compute P(G | H=h1). Elimination order: A, C, E, I, B, D, F. Previous state:
P(G,H=h1) = f9(G) Σ_F Σ_D f5(F,D) Σ_B Σ_I f8(I,G) Σ_E f6(G,F,E) Σ_C f2(C) f3(D,B,C) f4(E,C) Σ_A f0(A) f1(B,A)
Eliminate A: perform the product and sum out A, giving f10(B) = Σ_A f0(A) f1(B,A):
P(G,H=h1) = f9(G) Σ_F Σ_D f5(F,D) Σ_B f10(B) Σ_I f8(I,G) Σ_E f6(G,F,E) Σ_C f2(C) f3(D,B,C) f4(E,C)
f10(B) does not depend on C, E, or I, so we can push it outside of those sums.

41 Step 4: Sum out non-query variables (one at a time)
Elimination order: A, C, E, I, B, D, F. Previous state:
P(G,H=h1) = f9(G) Σ_F Σ_D f5(F,D) Σ_B f10(B) Σ_I f8(I,G) Σ_E f6(G,F,E) Σ_C f2(C) f3(D,B,C) f4(E,C)
Eliminate C: perform the product and sum out C, giving f11(B,D,E) = Σ_C f2(C) f3(D,B,C) f4(E,C):
P(G,H=h1) = f9(G) Σ_F Σ_D f5(F,D) Σ_B f10(B) Σ_I f8(I,G) Σ_E f6(G,F,E) f11(B,D,E)

42 Step 4: Sum out non-query variables (one at a time)
Elimination order: A, C, E, I, B, D, F. Previous state:
P(G,H=h1) = f9(G) Σ_F Σ_D f5(F,D) Σ_B f10(B) Σ_I f8(I,G) Σ_E f6(G,F,E) f11(B,D,E)
Eliminate E: perform the product and sum out E, giving f12(B,D,F,G) = Σ_E f6(G,F,E) f11(B,D,E):
P(G,H=h1) = f9(G) Σ_F Σ_D f5(F,D) Σ_B f10(B) f12(B,D,F,G) Σ_I f8(I,G)

43 Step 4: Sum out non-query variables (one at a time)
Elimination order: A, C, E, I, B, D, F. Previous state:
P(G,H=h1) = f9(G) Σ_F Σ_D f5(F,D) Σ_B f10(B) f12(B,D,F,G) Σ_I f8(I,G)
Eliminate I: perform the product and sum out I, giving f13(G) = Σ_I f8(I,G):
P(G,H=h1) = f9(G) f13(G) Σ_F Σ_D f5(F,D) Σ_B f10(B) f12(B,D,F,G)

44 Step 4: Sum out non-query variables (one at a time)
Elimination order: A, C, E, I, B, D, F. Previous state:
P(G,H=h1) = f9(G) f13(G) Σ_F Σ_D f5(F,D) Σ_B f10(B) f12(B,D,F,G)
Eliminate B: perform the product and sum out B, giving f14(D,F,G) = Σ_B f10(B) f12(B,D,F,G):
P(G,H=h1) = f9(G) f13(G) Σ_F Σ_D f5(F,D) f14(D,F,G)

45 Step 4: Sum out non-query variables (one at a time)
Elimination order: A, C, E, I, B, D, F. Previous state:
P(G,H=h1) = f9(G) f13(G) Σ_F Σ_D f5(F,D) f14(D,F,G)
Eliminate D: perform the product and sum out D, giving f15(F,G) = Σ_D f5(F,D) f14(D,F,G):
P(G,H=h1) = f9(G) f13(G) Σ_F f15(F,G)

46 Step 4: Sum out non-query variables (one at a time)
Elimination order: A, C, E, I, B, D, F. Previous state:
P(G,H=h1) = f9(G) f13(G) Σ_F f15(F,G)
Eliminate F: perform the product and sum out F, giving f16(G) = Σ_F f15(F,G):
P(G,H=h1) = f9(G) f13(G) f16(G)

47 Step 5: Multiply remaining factors
Previous state:
P(G,H=h1) = f9(G) f13(G) f16(G)
Multiply the remaining factors (all defined on G alone):
P(G,H=h1) = f17(G), where f17(G) = f9(G) f13(G) f16(G)

48 Step 6: Normalize
P(G | H=h1) = f17(G) / Σ_g f17(g). The sum Σ_g f17(g) equals P(H=h1), the normalization constant.
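Numerically, the normalization step is a one-liner. The values for f17(G) below are invented placeholders, since the slides never instantiate the CPTs of this network:

```python
# Hypothetical final factor f17(G) over a binary G; real values depend on the CPTs.
f17 = {('t',): 0.006, ('f',): 0.018}
total = sum(f17.values())                           # this is P(H=h1)
posterior = {g: v / total for g, v in f17.items()}  # P(G | H=h1)
```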

49 VE and conditional independence
So far, we haven't used conditional independence! Before running VE, we can prune all variables Z that are conditionally independent of the query Y given evidence E: Z ╨ Y | E. They cannot change the belief over Y given E!
Example: which variables can we prune for the query P(G=g | C=c1, F=f1, H=h1)?

50 Variable elimination: pruning
Thus, if the query is P(G=g | C=c1, F=f1, H=h1), we only need to consider the subnetwork shown on the slide.
We can also prune unobserved leaf nodes: since they are unobserved and not predecessors of the query nodes, they cannot influence the posterior probability of the query nodes.

51 One last trick
We can also prune unobserved leaf nodes, and we can do so recursively. E.g., which nodes can we prune if the query is P(A)? Recursively prune unobserved leaf nodes: we can prune all nodes other than A!
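Recursive leaf pruning is easy to sketch on parent lists (the function name is mine; the parent structure is read off the running example's factorization P(A)P(B|A)P(C)P(D|B,C)P(E|C)P(F|D)P(G|F,E)P(H|G)P(I|G)):

```python
def prune_leaves(parents, keep):
    """Repeatedly drop nodes that are neither kept (query/evidence)
    nor a parent of any remaining node."""
    nodes = set(parents)
    changed = True
    while changed:
        changed = False
        has_child = {p for n in nodes for p in parents[n]}
        for n in list(nodes):
            if n not in keep and n not in has_child:
                nodes.remove(n)
                changed = True
    return nodes

# Parent lists for the running nine-node example.
parents = {'A': [], 'B': ['A'], 'C': [], 'D': ['B', 'C'], 'E': ['C'],
           'F': ['D'], 'G': ['F', 'E'], 'H': ['G'], 'I': ['G']}

kept = prune_leaves(parents, keep={'A'})   # query P(A): everything else goes
```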

52 Dependencies in a Bayesian Network
(Diagram: three three-node structures over X, Y, and Z, labeled 1, 2, 3; grey areas represent existing evidence/observations.)
In 1, 2 and 3, X and Y are dependent. In 3, X and Y become dependent as soon as there is evidence on Z or on any of its descendants. This is because knowledge of one possible cause, given evidence of the effect, explains away the other cause.

53 Or, Conditional Independencies
Or, blocking paths for probability propagation: three ways in which a path between Y and X (or vice versa) can be blocked, given evidence E.
(Diagram: the three structures 1, 2, 3, with evidence E on Z.)
In 3, X and Y are independent if there is no evidence on their common effect (recall fire and tampering in the alarm example).

54 Conditional Independence in a Bnet
(Clicker question on the three structures; the diagram is not transcribed. Answer options: A. True, B. False.)

55 (In)Dependencies in a Bnet
(Clicker question on the three structures; the diagram is not transcribed. Answer options: A. True, B. False.)

56 (In)Dependencies in a Bnet
(Another clicker question on the three structures; the diagram is not transcribed. Answer options: A. True, B. False.)

57 (In)Dependencies in a Bnet
Is A conditionally independent of I given F?
True: the lack of evidence on D blocks the path that goes through E, D, B.

58 VE in AISpace
To see how variable elimination works in the AISpace applet: select "Network options -> Query Models -> verbose". Compare what happens when you select "Prune Irrelevant variables" or not in the VE window that pops up when you query a node. Try different heuristics for elimination ordering.

59 Complexity of Variable Elimination (VE) (not required)
A factor over n binary variables has to store 2^n numbers. The initial factors are typically quite small (variables typically have only a few parents in Bayesian networks), but variable elimination constructs larger factors by multiplying factors together.
The complexity of VE is exponential in the maximum number of variables in any factor during its execution. This number is called the treewidth of a graph (along an ordering); the elimination ordering influences the treewidth.
Finding the best ordering, i.e., the ordering that generates the minimum treewidth, is NP-complete. Heuristics work well in practice (e.g., least connected variables first). Even with the best ordering, inference is sometimes infeasible; in those cases, we need approximate inference. See CS422 & CS540.
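The "least connected variables first" heuristic can be sketched as greedy min-degree ordering on the variables' adjacency structure (the function and the tiny chain example are illustrative, not from the slides):

```python
def min_degree_order(neighbors):
    """Greedy elimination ordering: always pick the variable with the
    fewest remaining neighbors, then connect its neighbors to each other
    (the fill-in edges its product factor would create)."""
    nb = {v: set(ns) for v, ns in neighbors.items()}
    order = []
    while nb:
        z = min(sorted(nb), key=lambda v: len(nb[v]))   # ties broken alphabetically
        order.append(z)
        for u in nb[z]:
            nb[u] |= nb[z]
            nb[u].discard(u)
            nb[u].discard(z)
        del nb[z]
    return order

# A simple chain A - B - C: eliminating end points first keeps factors small.
order = min_degree_order({'A': {'B'}, 'B': {'A', 'C'}, 'C': {'B'}})
```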

60 Big picture: Reasoning Under Uncertainty
Probability Theory; Bayesian Networks & Variable Elimination; Dynamic Bayesian Networks; Hidden Markov Models & Filtering.
Applications: bioinformatics, natural language processing, email spam filters, motion tracking, missile tracking, monitoring (e.g. credit card fraud detection), diagnostic systems (e.g. medicine).

61 Where are we?
(Course-map table: Environment, Problem Type, Representation, Reasoning Technique.)
Deterministic: Constraint Satisfaction (Vars + Constraints; Search, Arc Consistency), Logics (Search), STRIPS (Planning).
Stochastic: Belief Nets (static) and Decision Nets (sequential), with Variable Elimination as the reasoning technique.
This concludes the module on answering queries in stochastic environments.

62 Learning Goals for Bnets
Build a Bayesian network for a given domain. Identify the necessary CPTs. Compare different network structures. Understand dependencies and independencies.
Variable elimination: understand factors and their operations; carry out variable elimination by using factors and the related operations; use techniques to simplify variable elimination.




