CS 188: Artificial Intelligence, Fall 2007
Lecture 18: Bayes Nets III, 10/30/2007
Dan Klein – UC Berkeley
Announcements
Project shift: Project 4 moved back a little; instead, a mega-mini-homework, worth 3x, graded
Contest is live
Inference
Inference: calculating some statistic from a joint probability distribution
Examples:
Posterior probability: P(Q | E1 = e1, ..., Ek = ek)
Most likely explanation: argmax_q P(Q = q | E1 = e1, ...)
Reminder: Alarm Network
Normalization Trick
Select the joint-distribution entries consistent with the evidence, then normalize them to obtain the posterior.
Inference by Enumeration?
Nesting Sums
Atomic inference is extremely slow!
Slightly clever way to save work: move the sums as far right as possible
Example: P(b, j, m) = Σ_e Σ_a P(b) P(e) P(a|b,e) P(j|a) P(m|a) = P(b) Σ_e P(e) Σ_a P(a|b,e) P(j|a) P(m|a)
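The sum rearrangement can be checked numerically. A minimal sketch, assuming the classic textbook CPT numbers for the alarm network (the lecture's figures are not reproduced in this transcript, so these values are an assumption):

```python
from itertools import product

# Alarm-network CPTs (classic textbook values, assumed for illustration)
P_B = {True: 0.001, False: 0.999}
P_E = {True: 0.002, False: 0.998}
P_A_given = {(True, True): 0.95, (True, False): 0.94,
             (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}   # P(j | A)
P_M = {True: 0.70, False: 0.01}   # P(m | A)

def p_a(a, b, e):
    """P(A=a | B=b, E=e)."""
    p = P_A_given[(b, e)]
    return p if a else 1.0 - p

b = True  # query the unnormalized P(b, j, m)
# Naive enumeration: one big sum of the full product over all (e, a)
naive = sum(P_B[b] * P_E[e] * p_a(a, b, e) * P_J[a] * P_M[a]
            for e, a in product([True, False], repeat=2))
# Sums moved right: P(b) * sum_e P(e) * sum_a P(a|b,e) P(j|a) P(m|a)
nested = P_B[b] * sum(P_E[e] * sum(p_a(a, b, e) * P_J[a] * P_M[a]
                                   for a in (True, False))
                      for e in (True, False))
print(naive, nested)  # same value, computed with less repeated work
```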
Evaluation Tree
View the nested sums as a computation tree:
Still repeated work: calculate P(m | a) P(j | a) twice, etc.
Variable Elimination: Idea
Lots of redundant work in the computation tree
We can save time if we cache all partial results
Join on one hidden variable at a time
Project out that variable immediately
This is the basic idea behind variable elimination
Basic Objects
Track objects called factors
Initial factors are local CPTs
During elimination, create new factors
Anatomy of a factor:
4 numbers, one for each value of D and E
Argument variables, always non-evidence variables
Variables introduced
Variables summed out
Basic Operations
First basic operation: join factors
Combining two factors: just like a database join
Build a factor over the union of the domains
Example: joining P(C) with P(R | C) gives a factor over (C, R)
Basic Operations
Second basic operation: marginalization
Take a factor and sum out a variable
Shrinks a factor to a smaller one
A projection operation
Example: summing C out of the factor P(C, R) gives P(R)
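The two basic operations can be sketched in a few lines of Python. The factor representation (tables keyed by tuples of boolean values) and the numbers for P(C) and P(R | C) are illustrative assumptions, not values from the slides:

```python
from itertools import product

def join(vars1, f1, vars2, f2):
    """Join two factors (like a database join): build a factor over the
    union of their variables whose entries are products of matching entries."""
    out_vars = list(vars1) + [v for v in vars2 if v not in vars1]
    out = {}
    for assign in product([True, False], repeat=len(out_vars)):
        a = dict(zip(out_vars, assign))
        out[assign] = (f1[tuple(a[v] for v in vars1)] *
                       f2[tuple(a[v] for v in vars2)])
    return out_vars, out

def sum_out(var, vars_, f):
    """Marginalize (project out) one variable by summing over its values."""
    i = vars_.index(var)
    out_vars = vars_[:i] + vars_[i + 1:]
    out = {}
    for assign, p in f.items():
        key = assign[:i] + assign[i + 1:]
        out[key] = out.get(key, 0.0) + p
    return out_vars, out

# P(C) and P(R | C), with illustrative numbers
f_C = {(True,): 0.5, (False,): 0.5}
f_R_given_C = {(True, True): 0.8, (True, False): 0.2,
               (False, True): 0.2, (False, False): 0.8}

vars_CR, f_CR = join(["C"], f_C, ["C", "R"], f_R_given_C)  # factor over (C, R)
vars_R, f_R = sum_out("C", vars_CR, f_CR)                  # factor over (R,)
print(f_R)  # {(True,): 0.5, (False,): 0.5}
```

Joining first and projecting immediately afterwards is exactly the step variable elimination repeats for each hidden variable.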
Example
General Variable Elimination
Query: P(Q | E1 = e1, ..., Ek = ek)
Start with initial factors: local CPTs (but instantiated by evidence)
While there are still hidden variables (not Q or evidence):
Pick a hidden variable H
Join all factors mentioning H
Project out H
Join all remaining factors and normalize
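Put together, the loop above amounts to the following sketch. Evidence instantiation is omitted for brevity (it would restrict the initial tables to the observed values), and the sprinkler network with its CPT numbers is an assumed illustrative example, not taken from the slides:

```python
from itertools import product

# Factors are (vars, table) pairs; tables map tuples of booleans to numbers.
factors = [
    (("C",), {(True,): 0.5, (False,): 0.5}),                       # P(C)
    (("C", "S"), {(True, True): 0.1, (True, False): 0.9,
                  (False, True): 0.5, (False, False): 0.5}),       # P(S|C)
    (("C", "R"), {(True, True): 0.8, (True, False): 0.2,
                  (False, True): 0.2, (False, False): 0.8}),       # P(R|C)
    (("S", "R", "W"), {(True, True, True): 0.99, (True, True, False): 0.01,
                       (True, False, True): 0.90, (True, False, False): 0.10,
                       (False, True, True): 0.90, (False, True, False): 0.10,
                       (False, False, True): 0.0, (False, False, False): 1.0}),
]                                                                  # P(W|S,R)

def multiply(fs):
    """Join a list of factors into one factor over the union of their variables."""
    vars_ = []
    for vs, _ in fs:
        vars_ += [v for v in vs if v not in vars_]
    table = {}
    for assign in product([True, False], repeat=len(vars_)):
        a = dict(zip(vars_, assign))
        p = 1.0
        for vs, t in fs:
            p *= t[tuple(a[v] for v in vs)]
        table[assign] = p
    return tuple(vars_), table

def sum_out(var, factor):
    """Project a variable out of a factor by summing over its values."""
    vars_, table = factor
    i = vars_.index(var)
    out = {}
    for assign, p in table.items():
        k = assign[:i] + assign[i + 1:]
        out[k] = out.get(k, 0.0) + p
    return vars_[:i] + vars_[i + 1:], out

def variable_elimination(factors, hidden):
    """For each hidden variable: join the factors that mention it, then
    project it out.  Finally join what remains and normalize."""
    for h in hidden:
        touching = [f for f in factors if h in f[0]]
        factors = [f for f in factors if h not in f[0]]
        factors.append(sum_out(h, multiply(touching)))
    _, table = multiply(factors)
    z = sum(table.values())
    return {k: v / z for k, v in table.items()}

p_w = variable_elimination(factors, ["C", "S", "R"])  # query P(W), no evidence
print(p_w[(True,)])  # ≈ 0.6471 with these CPTs
```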
Example: choose A
Example: choose E, then finish and normalize
Variable Elimination
What you need to know:
VE caches intermediate computations
Polynomial time for tree-structured graphs!
Saves time by marginalizing variables as soon as possible rather than at the end
We will see special cases of VE later
You'll have to implement the special cases
Approximations
Exact inference is slow, especially when you have a lot of hidden nodes
Approximate methods give you a (close) answer, faster
Sampling
Basic idea:
Draw N samples from a sampling distribution S
Compute an approximate posterior probability
Show this converges to the true probability P
Outline:
Sampling from an empty network
Rejection sampling: reject samples disagreeing with evidence
Likelihood weighting: use evidence to weight samples
Prior Sampling
Prior Sampling
This process generates samples with probability S_PS(x1, ..., xn) = Π_i P(xi | Parents(Xi))
...i.e. the BN's joint probability
Let the number of samples of an event be N_PS(x1, ..., xn)
Then lim_{N→∞} N_PS(x1, ..., xn) / N = S_PS(x1, ..., xn) = P(x1, ..., xn)
I.e., the sampling procedure is consistent
Example
We'll get a bunch of samples from the BN:
c, ¬s, r, w
c, s, r, w
¬c, s, r, ¬w
c, ¬s, r, w
¬c, ¬s, ¬r, w
If we want to know P(W):
We have counts <w: 4, ¬w: 1>
Normalize to get P(W) = <w: 0.8, ¬w: 0.2>
This will get closer to the true distribution with more samples
Can estimate anything else, too
What about P(C | r)? P(C | r, w)?
Rejection Sampling
Let's say we want P(C)
No point keeping all samples around
Just tally counts of C outcomes
Let's say we want P(C | s)
Same thing: tally C outcomes, but ignore (reject) samples which don't have S = s
This is rejection sampling
It is also consistent (correct in the limit)
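Rejection sampling can be sketched on the same network; as before, the CPT numbers are illustrative assumptions:

```python
import random

# Sprinkler-network CPTs (illustrative numbers, assumed)
P_C = 0.5
P_S = {True: 0.1, False: 0.5}    # P(S=true | C)
P_R = {True: 0.8, False: 0.2}    # P(R=true | C)
P_W = {(True, True): 0.99, (True, False): 0.9,
       (False, True): 0.9, (False, False): 0.0}   # P(W=true | S, R)

def prior_sample(rng):
    c = rng.random() < P_C
    s = rng.random() < P_S[c]
    r = rng.random() < P_R[c]
    w = rng.random() < P_W[(s, r)]
    return c, s, r, w

def estimate_P_C_given_s(n, rng):
    """Estimate P(C | S=true) by rejection: throw away every sample
    with S=false, then tally C among the survivors."""
    kept = [c for c, s, r, w in (prior_sample(rng) for _ in range(n)) if s]
    return sum(kept) / len(kept)

est = estimate_P_C_given_s(200_000, random.Random(1))
print(est)  # exact answer with these CPTs is 1/6 ≈ 0.167
```

Note how wasteful this is: with these numbers roughly 70% of the samples are generated only to be discarded, which previews the motivation for likelihood weighting.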
Likelihood Weighting
Problem with rejection sampling:
If evidence is unlikely, you reject a lot of samples
You don't exploit your evidence as you sample
Consider P(B | a): if a is rare, nearly every sample gets rejected
Idea: fix evidence variables and sample the rest
Problem: the sample distribution is not consistent!
Solution: weight each sample by the probability of the evidence given its parents
Likelihood Sampling
Likelihood Weighting
Sampling distribution if z sampled and e fixed evidence: S_WS(z, e) = Π_i P(zi | Parents(Zi))
Now, samples have weights: w(z, e) = Π_i P(ei | Parents(Ei))
Together, the weighted sampling distribution is consistent: S_WS(z, e) · w(z, e) = P(z, e)
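A likelihood-weighting sketch on the sprinkler network with evidence W = true; the CPT numbers are illustrative assumptions:

```python
import random

# Sprinkler-network CPTs (illustrative numbers, assumed)
P_C = 0.5
P_S = {True: 0.1, False: 0.5}    # P(S=true | C)
P_R = {True: 0.8, False: 0.2}    # P(R=true | C)
P_W = {(True, True): 0.99, (True, False): 0.9,
       (False, True): 0.9, (False, False): 0.0}   # P(W=true | S, R)

def weighted_sample(rng):
    """Evidence W=true is fixed, not sampled; the sample's weight is the
    probability of that evidence given its sampled parents."""
    c = rng.random() < P_C
    s = rng.random() < P_S[c]
    r = rng.random() < P_R[c]
    weight = P_W[(s, r)]     # P(W=true | s, r)
    return c, weight

rng = random.Random(2)
pairs = [weighted_sample(rng) for _ in range(200_000)]
est = sum(w for c, w in pairs if c) / sum(w for _, w in pairs)
print(est)  # estimates P(C=true | W=true); ≈ 0.576 with these CPTs
```

No sample is rejected; instead, samples inconsistent with likely evidence simply carry small weight.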
Likelihood Weighting
Note that likelihood weighting doesn't solve all our problems
Rare evidence is taken into account for downstream variables, but not upstream ones
A better solution is Markov-chain Monte Carlo (MCMC), more advanced
We'll return to sampling for robot localization and tracking in dynamic BNs
Decision Networks
MEU: choose the action which maximizes the expected utility given the evidence
Can directly operationalize this with decision diagrams:
Bayes nets with nodes for utility and actions
Lets us calculate the expected utility for each action
New node types:
Chance nodes (just like BNs)
Actions (rectangles, must be parents, act as observed evidence)
Utilities (depend on action and chance nodes)
Decision Networks
Action selection:
Instantiate all evidence
Calculate posterior over parents of utility node
Set action node each possible way
Calculate expected utility for each action
Choose maximizing action
Example: Decision Networks

W     P(W)
sun   0.7
rain  0.3

A      W     U(A, W)
leave  sun   100
leave  rain  0
take   sun   20
take   rain  70
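The action-selection recipe can be worked through directly on the umbrella numbers. The U(leave, rain) entry is garbled in this transcript, so the value 0 is assumed here:

```python
# Expected utilities for the umbrella decision network.
# U(leave, rain) = 0 is an assumption (the entry is illegible above).
P_W = {"sun": 0.7, "rain": 0.3}
U = {("leave", "sun"): 100, ("leave", "rain"): 0,
     ("take", "sun"): 20, ("take", "rain"): 70}

def expected_utility(action):
    """EU(a) = sum_w P(w) * U(a, w)."""
    return sum(P_W[w] * U[(action, w)] for w in P_W)

eu = {a: expected_utility(a) for a in ("leave", "take")}
best = max(eu, key=eu.get)   # the MEU action
print(eu, best)  # EU(leave) = 70, EU(take) = 35, so MEU picks leave
```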
Example: Decision Networks

W     P(W)
sun   0.7
rain  0.3

A      W     U(A, W)
leave  sun   100
leave  rain  0
take   sun   20
take   rain  70

R       P(R | sun)
clear   0.5
cloudy  0.5

R       P(R | rain)
clear   0.2
cloudy  0.8
Value of Information
Idea: compute value of acquiring each possible piece of evidence
Can be done directly from decision network
Example: buying oil drilling rights
Two blocks A and B, exactly one has oil, worth k
Prior probabilities 0.5 each, mutually exclusive
Current price of each block is k/2
Probe gives accurate survey of A. Fair price?
Solution: compute value of information = expected value of best action given the information minus expected value of best action without information
Survey may say "oil in A" or "no oil in A", prob 0.5 each
= [0.5 × value of "buy A" given "oil in A"] + [0.5 × value of "buy B" given "no oil in A"] − 0
= [0.5 × k/2] + [0.5 × k/2] − 0
= k/2
General Formula
Current evidence E = e, possible utility inputs s
Potential new evidence E': suppose we knew E' = e'; then the value of acting would be MEU(e, e') = max_a Σ_s P(s | e, e') U(s, a)
BUT E' is a random variable whose value is currently unknown, so:
Must compute expected gain over all possible values:
VPI_e(E') = (Σ_{e'} P(e' | e) MEU(e, e')) − MEU(e)
(VPI = value of perfect information)
VPI Properties
Nonnegative in expectation
Nonadditive: consider, e.g., obtaining Ej twice
Order-independent
VPI Example
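A sketch of the VPI formula applied to the umbrella network, reusing the utility and report numbers from the earlier example slides (several of those entries are reconstructed, so treat all numbers as assumptions). With these numbers "leave" stays the best action whichever report arrives, so the report's VPI comes out to 0; a more reliable report would make it positive:

```python
# VPI of the weather Report (numbers partly reconstructed, assumed)
P_W = {"sun": 0.7, "rain": 0.3}
U = {("leave", "sun"): 100, ("leave", "rain"): 0,
     ("take", "sun"): 20, ("take", "rain"): 70}
P_R = {"sun": {"clear": 0.5, "cloudy": 0.5},    # P(Report | W)
       "rain": {"clear": 0.2, "cloudy": 0.8}}

def meu(belief):
    """MEU = max_a sum_w belief(w) * U(a, w)."""
    return max(sum(belief[w] * U[(a, w)] for w in belief)
               for a in ("leave", "take"))

meu_now = meu(P_W)   # MEU with no extra evidence

# VPI = expected MEU after seeing the report, minus MEU without it
vpi = 0.0
for r in ("clear", "cloudy"):
    p_r = sum(P_W[w] * P_R[w][r] for w in P_W)            # P(Report = r)
    posterior = {w: P_W[w] * P_R[w][r] / p_r for w in P_W}  # P(W | r)
    vpi += p_r * meu(posterior)
vpi -= meu_now
print(meu_now, vpi)  # with these numbers the VPI is 0
```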
VPI Scenarios
Imagine actions 1 and 2, for which U1 > U2
How much will information about Ej be worth?
Little: we're sure action 1 is better
A lot: either action could turn out much better than the other
Little: the info is likely to change our action but not our utility