1
Quiz
Rihanna's car engine does not start (E). The battery could be dead (B), or the gas tank could be empty (G). A dead battery might also cause the lamp to not work (L). The battery could be dead because of a broken alternator (A) or just a very old battery (O).
Which minimal graphical model best describes this situation? In which models is E conditionally independent of O, given B? In which models is G conditionally independent of B, given L? In model I, is O conditionally independent of G, given B and E?
ANSWER: II is the minimal Bayes net that describes this situation. III could also be used (but would be inefficient). Models IV and I do not have dependencies that are stated in the problem.
2
CSE 511a: Artificial Intelligence, Spring 2013
Lecture 16: Bayes' Nets III – Inference
03/27/2013
Robert Pless, via Kilian Q. Weinberger
Several slides adapted from Dan Klein – UC Berkeley
3
Announcements
Project 3 is due Monday at midnight. No late days can be used for bonus points.
4
Inference
Inference: calculating some useful quantity from a joint probability distribution. Examples:
Posterior probability: P(Q | e1, ..., ek)
Most likely explanation: argmax_q P(q | e1, ..., ek)
(Figure: the burglary network, with nodes B, E, A, J, M.)
5
Inference by Enumeration
Given unlimited time, inference in BNs is easy. Recipe: state the marginal probabilities you need, figure out ALL the atomic probabilities you need, then calculate and combine them.
Example (the burglary network B, E, A, J, M): given that John and Mary called, what is the probability that there was a burglary?
6
Example: Enumeration
In this simple method, we only need the BN to synthesize the joint entries:
P(+b, +j, +m) = sum_e sum_a P(+b) P(e) P(a | +b, e) P(+j | a) P(+m | a)
This has 4 terms because there are two binary hidden variables to marginalize over: we need all combinations of +e, -e and +a, -a.
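A minimal sketch of this enumeration in Python. The CPT numbers below are the classic Russell-Norvig alarm-network values, used purely for illustration; the slide itself does not list them.

```python
from itertools import product

# Illustrative CPTs (assumed, not from the slide): the classic alarm network.
P_B = {True: 0.001, False: 0.999}                    # P(B)
P_E = {True: 0.002, False: 0.998}                    # P(E)
P_A = {(True, True): 0.95, (True, False): 0.94,      # P(+a | B, E)
       (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}                      # P(+j | A)
P_M = {True: 0.70, False: 0.01}                      # P(+m | A)

def joint(b, e, a):
    """One atomic probability P(b, e, a, +j, +m): a product of local CPTs."""
    p_a = P_A[(b, e)] if a else 1.0 - P_A[(b, e)]
    return P_B[b] * P_E[e] * p_a * P_J[a] * P_M[a]

# Four terms per value of B: all combinations of (+e, -e) x (+a, -a).
unnorm = {b: sum(joint(b, e, a) for e, a in product((True, False), repeat=2))
          for b in (True, False)}
Z = sum(unnorm.values())
print({b: p / Z for b, p in unnorm.items()})  # P(+b | +j, +m) is roughly 0.284
```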
7
Inference by Enumeration?
8
Variable Elimination
Why is inference by enumeration on a Bayes net inefficient? You join up the whole joint distribution before you sum out the hidden variables, so you end up repeating a lot of work!
Idea: interleave joining and marginalizing! This is called "variable elimination." Choosing the order to eliminate variables that minimizes work is NP-hard, but *anything* sensible is much faster than inference by enumeration. We'll need some new notation to define VE.
IDEA: once you have a Bayes net, a set of observed variables, and a QUERY node, we can progressively SIMPLIFY the Bayes net. The simplifications are marginalizing and joining.
9
5-minute quiz
10
Factor Zoo I
Joint distribution: P(X, Y). Entries P(x, y) for all x, y; sums to 1.

  T     W     P
  hot   sun   0.4
  hot   rain  0.1
  cold  sun   0.2
  cold  rain  0.3

Selected joint: P(x, Y). A slice of the joint distribution: entries P(x, y) for fixed x, all y. Sums to P(x); that is how we would compute P(x) if we needed to!

  T     W     P
  cold  sun   0.2
  cold  rain  0.3
11
Factor Zoo II
Family of conditionals: P(X | Y). Multiple conditionals: entries P(x | y) for all x, y. Sums to |Y|, because we need P(X | y1), P(X | y2), ...

  T     W     P
  hot   sun   0.8
  hot   rain  0.2
  cold  sun   0.4
  cold  rain  0.6

Single conditional: P(X | y). Entries P(x | y) for fixed y, all x; sums to 1.

  T     W     P
  cold  sun   0.4
  cold  rain  0.6
12
Factor Zoo III
Specified family: P(y | X). Entries P(y | x) for fixed y, but for all x. Sums to ... who knows!

  T     W     P
  hot   rain  0.2
  cold  rain  0.6

In general, when we write P(Y1 ... YN | X1 ... XM), it is a "factor," a multi-dimensional array. Its values are all P(y1 ... yN | x1 ... xM). Any assigned X or Y is a dimension missing (selected) from the array.
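In code, such a factor is naturally a dictionary from value tuples to numbers. A rough sketch using the P(T, W) joint from above; the select helper is an illustrative name, not notation from the slides.

```python
# A factor as a dict from value tuples to numbers: the joint P(T, W) above.
P_TW = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
        ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}   # joint: sums to 1

def select(factor, position, value):
    """Slice a factor by fixing one variable, e.g. the selected joint P(cold, W)."""
    return {k: p for k, p in factor.items() if k[position] == value}

P_cold_W = select(P_TW, 0, "cold")   # {('cold', 'sun'): 0.2, ('cold', 'rain'): 0.3}
print(sum(P_cold_W.values()))        # 0.5 = P(cold), as the slide notes
```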
13
Example: Traffic Domain
Random variables: R (raining), T (traffic), L (late for class!). First query: P(L).

  P(R)          P(T | R)          P(L | T)
  +r  0.1       +r  +t  0.8       +t  +l  0.3
  -r  0.9       +r  -t  0.2       +t  -l  0.7
                -r  +t  0.1       -t  +l  0.1
                -r  -t  0.9       -t  -l  0.9

What would it take to solve for P(L) by enumeration? Build all 2^3 rows of the joint table, then sum the +l rows and the -l rows (see the sketch below).
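A sketch of that enumeration in Python, using exactly the CPTs above: build all 2^3 = 8 joint entries and accumulate them by the value of L.

```python
# Enumeration for P(L): all 2^3 rows of the joint P(R, T, L), summed by L.
P_R = {"+r": 0.1, "-r": 0.9}
P_T_given_R = {("+r", "+t"): 0.8, ("+r", "-t"): 0.2,
               ("-r", "+t"): 0.1, ("-r", "-t"): 0.9}
P_L_given_T = {("+t", "+l"): 0.3, ("+t", "-l"): 0.7,
               ("-t", "+l"): 0.1, ("-t", "-l"): 0.9}

P_L = {"+l": 0.0, "-l": 0.0}
for r in ("+r", "-r"):
    for t in ("+t", "-t"):
        for l in ("+l", "-l"):
            P_L[l] += P_R[r] * P_T_given_R[(r, t)] * P_L_given_T[(t, l)]
print(P_L)   # {'+l': 0.134, '-l': 0.866}, up to float rounding
```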
14
Variable Elimination Outline
Track objects called factors. The initial factors are the local CPTs (one per node):

  P(R)          P(T | R)          P(L | T)
  +r  0.1       +r  +t  0.8       +t  +l  0.3
  -r  0.9       +r  -t  0.2       +t  -l  0.7
                -r  +t  0.1       -t  +l  0.1
                -r  -t  0.9       -t  -l  0.9

Any known values are selected. E.g., if we know L = +l, the initial factors become:

  P(R)          P(T | R)          P(+l | T)
  +r  0.1       +r  +t  0.8       +t  0.3
  -r  0.9       +r  -t  0.2       -t  0.1
                -r  +t  0.1
                -r  -t  0.9

VE: alternately join factors and eliminate variables.
15
Operation 1: Join Factors
First basic operation: joining factors. Combining factors is just like a database join: get all factors over the joining variable and build a new factor over the union of the variables involved. The computation for each entry: pointwise products. Joining merges your table with your children's tables.
Example: join on R, combining P(R) and P(T | R) into a single factor over R, T:

  P(R)          P(T | R)          P(R, T)
  +r  0.1       +r  +t  0.8       +r  +t  0.08
  -r  0.9       +r  -t  0.2       +r  -t  0.02
                -r  +t  0.1       -r  +t  0.09
                -r  -t  0.9       -r  -t  0.81
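As a code sketch, the join on R is a single comprehension of pointwise products over the union of the variables:

```python
# Join on R: pointwise products of P(R) and P(T | R) -> factor over (R, T).
P_R = {"+r": 0.1, "-r": 0.9}
P_T_given_R = {("+r", "+t"): 0.8, ("+r", "-t"): 0.2,
               ("-r", "+t"): 0.1, ("-r", "-t"): 0.9}

P_RT = {(r, t): P_R[r] * P_T_given_R[(r, t)] for (r, t) in P_T_given_R}
# {('+r','+t'): 0.08, ('+r','-t'): 0.02, ('-r','+t'): 0.09, ('-r','-t'): 0.81},
# up to float rounding
print(P_RT)
```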
16
Example: Multiple Joins
Join on R: P(R) and P(T | R) combine into P(R, T); the factor P(L | T) is untouched.

  P(R, T)           P(L | T)
  +r  +t  0.08      +t  +l  0.3
  +r  -t  0.02      +t  -l  0.7
  -r  +t  0.09      -t  +l  0.1
  -r  -t  0.81      -t  -l  0.9
17
Example: Multiple Joins (continued)
Join on T: P(R, T) and P(L | T) combine into P(R, T, L).

  P(R, T, L)
  +r  +t  +l  0.024
  +r  +t  -l  0.056
  +r  -t  +l  0.002
  +r  -t  -l  0.018
  -r  +t  +l  0.027
  -r  +t  -l  0.063
  -r  -t  +l  0.081
  -r  -t  -l  0.729
18
Operation 2: Eliminate
Second basic operation: marginalization. Take a factor and sum out a variable; this shrinks the factor to a smaller one. It is a projection operation.
Example: summing R out of P(R, T) leaves P(T):

  P(R, T)           P(T)
  +r  +t  0.08      +t  0.17
  +r  -t  0.02      -t  0.83
  -r  +t  0.09
  -r  -t  0.81
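A code sketch of elimination: group the rows of P(R, T) by the value of T and add them, projecting the factor down to P(T).

```python
# Sum out R: project the two-variable factor P(R, T) down to P(T).
P_RT = {("+r", "+t"): 0.08, ("+r", "-t"): 0.02,
        ("-r", "+t"): 0.09, ("-r", "-t"): 0.81}

P_T = {}
for (r, t), p in P_RT.items():
    P_T[t] = P_T.get(t, 0.0) + p   # the +r and -r rows merge for each t
print(P_T)                         # {'+t': 0.17, '-t': 0.83}, up to float rounding
```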
19
Multiple Elimination
Sum out R, then sum out T: P(R, T, L) -> P(T, L) -> P(L).

  P(R, T, L)            P(T, L)           P(L)
  +r  +t  +l  0.024     +t  +l  0.051     +l  0.134
  +r  +t  -l  0.056     +t  -l  0.119     -l  0.866
  +r  -t  +l  0.002     -t  +l  0.083
  +r  -t  -l  0.018     -t  -l  0.747
  -r  +t  +l  0.027
  -r  +t  -l  0.063
  -r  -t  +l  0.081
  -r  -t  -l  0.729
20
P(L): Marginalizing Early!
Join on R, then immediately sum out R; the factor P(L | T) is untouched until later.

  Join R: P(R) x P(T | R) -> P(R, T)
  +r  +t  0.08
  +r  -t  0.02
  -r  +t  0.09
  -r  -t  0.81

  Sum out R: P(R, T) -> P(T)
  +t  0.17
  -t  0.83
21
Marginalizing Early (aka VE*)
Join on T, then sum out T; the result is the query P(L).

  Join T: P(T) x P(L | T) -> P(T, L)
  +t  +l  0.051
  +t  -l  0.119
  -t  +l  0.083
  -t  -l  0.747

  Sum out T: P(T, L) -> P(L)
  +l  0.134
  -l  0.866

* VE is variable elimination.
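Putting the last two slides together, the whole marginalize-early computation of P(L) is four small steps, and no intermediate factor ever mentions more than two variables. A sketch with the traffic CPTs:

```python
# Marginalize early: join R, sum out R, join T, sum out T.
P_R = {"+r": 0.1, "-r": 0.9}
P_T_given_R = {("+r", "+t"): 0.8, ("+r", "-t"): 0.2,
               ("-r", "+t"): 0.1, ("-r", "-t"): 0.9}
P_L_given_T = {("+t", "+l"): 0.3, ("+t", "-l"): 0.7,
               ("-t", "+l"): 0.1, ("-t", "-l"): 0.9}

# Join on R and immediately sum R out -> P(T).
P_T = {t: sum(P_R[r] * P_T_given_R[(r, t)] for r in P_R) for t in ("+t", "-t")}
# Join on T and immediately sum T out -> P(L).
P_L = {l: sum(P_T[t] * P_L_given_T[(t, l)] for t in P_T) for l in ("+l", "-l")}
print(P_T)  # {'+t': 0.17, '-t': 0.83}
print(P_L)  # {'+l': 0.134, '-l': 0.866} -- same answer as full enumeration
```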
22
Evidence
If there is evidence, start with factors that select that evidence. If there is no evidence, use these initial factors:

  P(R)          P(T | R)          P(L | T)
  +r  0.1       +r  +t  0.8       +t  +l  0.3
  -r  0.9       +r  -t  0.2       +t  -l  0.7
                -r  +t  0.1       -t  +l  0.1
                -r  -t  0.9       -t  -l  0.9

Computing P(L | +r), the initial factors become:

  P(+r)         P(T | +r)         P(L | T)
  +r  0.1       +t  0.8           +t  +l  0.3
                -t  0.2           +t  -l  0.7
                                  -t  +l  0.1
                                  -t  -l  0.9

We eliminate all variables other than the query and the evidence (see the sketch below).
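A code sketch of the evidence step: each initial factor is restricted to the rows consistent with +r before any joining begins, and eliminating the one hidden variable T yields the selected joint P(L, +r).

```python
# Evidence +r: the initial factors, already restricted to the +r rows.
p_r = 0.1                                # P(+r)
P_T_given_r = {"+t": 0.8, "-t": 0.2}     # P(T | +r)
P_L_given_T = {("+t", "+l"): 0.3, ("+t", "-l"): 0.7,
               ("-t", "+l"): 0.1, ("-t", "-l"): 0.9}

# Eliminate the only hidden variable T; the result is the selected
# joint P(L, +r), not yet the conditional P(L | +r).
P_L_and_r = {l: p_r * sum(P_T_given_r[t] * P_L_given_T[(t, l)]
                          for t in P_T_given_r)
             for l in ("+l", "-l")}
print(P_L_and_r)  # {'+l': 0.026, '-l': 0.074}, up to float rounding
```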
23
Evidence II
The result will be a selected joint of query and evidence. E.g., for P(L | +r), we'd end up with the selected joint P(L, +r). To get our answer, just normalize it! That's it!

  P(L, +r)           Normalize -> P(L | +r)
  +r  +l  0.026      +l  0.26
  +r  -l  0.074      -l  0.74
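The final step is a one-line normalization of the selected joint from above:

```python
# Normalize P(L, +r) to obtain the answer P(L | +r).
P_L_and_r = {"+l": 0.026, "-l": 0.074}
Z = sum(P_L_and_r.values())                       # P(+r) = 0.1
print({l: p / Z for l, p in P_L_and_r.items()})   # {'+l': 0.26, '-l': 0.74}
```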
24
General Variable Elimination
Query: P(Q | e1, ..., ek)
Start with initial factors: local CPTs (but instantiated by evidence).
While there are still hidden variables (not Q or evidence):
  pick a hidden variable H,
  join all factors mentioning H,
  eliminate (sum out) H.
Join all remaining factors and normalize.
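Below is a compact sketch of the general algorithm under the dictionary representation used earlier: a factor is a pair (vars, table). The helper names (join, sum_out, variable_elimination) are illustrative, not course project code.

```python
from itertools import product

def join(f1, f2):
    """Pointwise product over the union of the two factors' variables.
    Assumes each table is complete over its variables' observed values."""
    (v1, t1), (v2, t2) = f1, f2
    vs = v1 + tuple(v for v in v2 if v not in v1)
    dom = {}                                  # per-variable value sets
    for names, tab in ((v1, t1), (v2, t2)):
        for vals in tab:
            for v, x in zip(names, vals):
                dom.setdefault(v, set()).add(x)
    table = {}
    for vals in product(*(sorted(dom[v]) for v in vs)):
        row = dict(zip(vs, vals))
        table[vals] = (t1[tuple(row[v] for v in v1)]
                       * t2[tuple(row[v] for v in v2)])
    return vs, table

def sum_out(var, factor):
    """Eliminate var by summing it out of the factor."""
    vs, t = factor
    i = vs.index(var)
    out = {}
    for vals, p in t.items():
        key = vals[:i] + vals[i + 1:]
        out[key] = out.get(key, 0.0) + p
    return vs[:i] + vs[i + 1:], out

def variable_elimination(factors, hidden):
    """Join/eliminate each hidden variable, then join the rest and normalize."""
    factors = list(factors)
    for h in hidden:                          # any sensible order works
        touching = [f for f in factors if h in f[0]]
        factors = [f for f in factors if h not in f[0]]
        joined = touching[0]
        for f in touching[1:]:
            joined = join(joined, f)
        factors.append(sum_out(h, joined))
    result = factors[0]
    for f in factors[1:]:
        result = join(result, f)
    z = sum(result[1].values())
    return result[0], {k: p / z for k, p in result[1].items()}

# The traffic query P(L) once more, now through the general routine:
f_R = (("R",), {("+r",): 0.1, ("-r",): 0.9})
f_T = (("R", "T"), {("+r", "+t"): 0.8, ("+r", "-t"): 0.2,
                    ("-r", "+t"): 0.1, ("-r", "-t"): 0.9})
f_L = (("T", "L"), {("+t", "+l"): 0.3, ("+t", "-l"): 0.7,
                    ("-t", "+l"): 0.1, ("-t", "-l"): 0.9})
print(variable_elimination([f_R, f_T, f_L], hidden=["R", "T"]))
# (('L',), {('+l',): 0.134, ('-l',): 0.866}), up to float rounding
```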
25
Variable Elimination = Bayes' Rule
Start / select: given evidence +a, the initial factors are P(B) and the selected conditional P(+a | B). Join on B, then normalize.

  P(B)          P(A | B)          Join on B: P(+a, B)     Normalize: P(B | +a)
  +b  0.1       +b  +a  0.8       +a  +b  0.08            +b  8/17
  -b  0.9       +b  -a  0.2       +a  -b  0.09            -b  9/17
                -b  +a  0.1
                -b  -a  0.9
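In code, this slide is a two-step VE: one join on B, one normalization (using the tables shown above):

```python
# Bayes' rule as variable elimination: join on B, then normalize.
P_B = {"+b": 0.1, "-b": 0.9}
P_a_given_B = {"+b": 0.8, "-b": 0.1}                # the +a rows of P(A | B)

P_a_B = {b: P_B[b] * P_a_given_B[b] for b in P_B}   # selected joint P(+a, B)
Z = sum(P_a_B.values())                             # P(+a) = 0.17
print({b: p / Z for b, p in P_a_B.items()})         # {'+b': 8/17, '-b': 9/17}
```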
26
Example
Query: P(B | +j, +m) in the burglary network (B, E, A, J, M). Choose A first: join all the factors that mention A, then sum A out, leaving a factor over B, E and the evidence j, m. Do this on the board, writing it as one big summation.
27
Example (continued)
Next choose E: join and sum it out, leaving a factor over j, m, E, then j, m, B. Finish with B, then normalize.
Doesn't this seem like the worst order ever? Better would be to choose B, E first; then you never have a factor with 5 variables. (A sketch of the whole computation follows below.)
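A sketch of the board computation in the A-then-E order, again assuming the illustrative Russell-Norvig CPT numbers, since the slide leaves the actual numbers on the board:

```python
# Choose A first: join P(A | B, E), P(+j | A), P(+m | A) and sum out A,
# leaving a factor f1(B, E); then sum out E and finish with B.
# CPT values below are assumed (Russell-Norvig), not from the slide.
P_B = {"+b": 0.001, "-b": 0.999}
P_E = {"+e": 0.002, "-e": 0.998}
P_a = {("+b", "+e"): 0.95, ("+b", "-e"): 0.94,    # P(+a | B, E)
       ("-b", "+e"): 0.29, ("-b", "-e"): 0.001}
P_j = {"+a": 0.90, "-a": 0.05}                    # P(+j | A)
P_m = {"+a": 0.70, "-a": 0.01}                    # P(+m | A)

# Eliminate A: f1(B, E) = sum_a P(a | B, E) P(+j | a) P(+m | a)
f1 = {(b, e): P_a[(b, e)] * P_j["+a"] * P_m["+a"]
      + (1 - P_a[(b, e)]) * P_j["-a"] * P_m["-a"]
      for b in P_B for e in P_E}
# Eliminate E: f2(B) = sum_e P(e) f1(B, e); then multiply in P(B).
f2 = {b: sum(P_E[e] * f1[(b, e)] for e in P_E) for b in P_B}
unnorm = {b: P_B[b] * f2[b] for b in P_B}
Z = sum(unnorm.values())
print({b: p / Z for b, p in unnorm.items()})      # P(+b | +j, +m) ~ 0.284
```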
28
Variable Elimination
What you need to know:
You should be able to run it on small examples and understand the factor creation / reduction flow.
It is better than enumeration: it saves time by marginalizing variables as soon as possible rather than at the end.
The order of variables matters, and the optimal order is NP-hard to find.
We will see special cases of VE later: on tree-structured graphs, variable elimination runs in polynomial time, like tree-structured CSPs.
You'll have to implement a tree-structured special case to track invisible ghosts (Project 4).