
1 CS 4/527: Artificial Intelligence
Bayes’ Nets: Sampling
Please retain proper attribution, including the reference to ai.berkeley.edu. Thanks!
Instructor: Jared Saia, University of New Mexico
[These slides were created by Dan Klein, Pieter Abbeel, and Anca Dragan.]

2 Bayes’ Net Representation
A directed, acyclic graph, one node per random variable
A conditional probability table (CPT) for each node
A collection of distributions over X, one for each combination of the parents’ values

3 Bayes’ Net Representation
Bayes’ nets implicitly encode joint distributions as a product of local conditional distributions. To see what probability a BN gives to a full assignment, multiply all the relevant conditionals together (first expression below). This is less complex than the chain rule, which is valid for all distributions (second expression below).
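In standard notation, the two expressions this slide refers to are the BN factorization and the chain rule:

```latex
P(x_1, \ldots, x_n) \;=\; \prod_{i=1}^{n} P\big(x_i \mid \mathrm{parents}(X_i)\big)
\qquad\text{vs.}\qquad
P(x_1, \ldots, x_n) \;=\; \prod_{i=1}^{n} P\big(x_i \mid x_1, \ldots, x_{i-1}\big)
```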

4 Example: Alarm Network
Network structure: B → A ← E, with A → J and A → M
P(B): +b 0.001, -b 0.999
P(E): +e 0.002, -e 0.998
P(A | B, E): +b,+e: +a 0.95, -a 0.05; +b,-e: +a 0.94, -a 0.06; -b,+e: +a 0.29, -a 0.71; -b,-e: +a 0.001, -a 0.999
P(J | A): +a: +j 0.9, -j 0.1; -a: +j 0.05, -j 0.95
P(M | A): +a: +m 0.7, -m 0.3; -a: +m 0.01, -m 0.99
(demo)
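A minimal Python sketch of these CPTs as plain dictionaries, multiplied out for one full assignment as described on slide 3 (the representation and names are ours, not code from the course):

```python
# Alarm-network CPTs from this slide, stored as P(variable is true | parent values).
P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94, (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.9, False: 0.05}   # P(+j | a)
P_M = {True: 0.7, False: 0.01}   # P(+m | a)

def p(prob_true, value):
    """CPT lookup: P(X = value), given P(X = true)."""
    return prob_true if value else 1.0 - prob_true

def joint(b, e, a, j, m):
    """P(b, e, a, j, m) = product of the local conditionals (slide 3)."""
    return (p(P_B, b) * p(P_E, e) * p(P_A[(b, e)], a)
            * p(P_J[a], j) * p(P_M[a], m))

# Example: P(-b, -e, +a, +j, +m) = 0.999 * 0.998 * 0.001 * 0.9 * 0.7 ≈ 0.00063
print(joint(False, False, True, True, True))
```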

5 Example: Alarm Network
(Same Alarm Network structure and CPTs as the previous slide.)

6 Inference
General case:
Evidence variables: E1 = e1, …, Ek = ek
Query* variable: Q
Hidden variables: H1, …, Hr
(together: all the variables of the network)
We want: P(Q | e1, …, ek)
* Works fine with multiple query variables, too

7 Inference by Enumeration
(Alarm network diagram with nodes J and M.)

8 Inference by Enumeration
(Same diagram as the previous slide.)

9 Variable Elimination
(Alarm network with variables B, E, A, J, M.)

10 Variable Elimination as Equations
The marginal can be obtained from the joint by summing out the hidden variables.
Use the Bayes’ net joint distribution expression.
Use x*(y+z) = xy + xz: joining on a and then summing out gives f1.
Use x*(y+z) = xy + xz again: joining on e and then summing out gives f2.
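A sketch of the algebra these steps describe, assuming the running alarm-network query P(B | +j, +m) (the query itself is not spelled out in the transcript):

```latex
\begin{aligned}
P(B, +j, +m) &= \sum_{e}\sum_{a} P(B)\,P(e)\,P(a \mid B, e)\,P(+j \mid a)\,P(+m \mid a) \\
             &= P(B) \sum_{e} P(e) \underbrace{\sum_{a} P(a \mid B, e)\,P(+j \mid a)\,P(+m \mid a)}_{f_1(+j, +m \mid B, e)} \\
             &= P(B) \underbrace{\sum_{e} P(e)\, f_1(+j, +m \mid B, e)}_{f_2(+j, +m \mid B)}
\end{aligned}
```

Normalizing P(B) · f2(+j, +m | B) over the two values of B then gives P(B | +j, +m).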

11 Variable Elimination as Procedure
Choose A

12 Variable Elimination as Procedure
Choose E
Finish with B
Normalize

13 Variable Elimination Ordering
For the query P(Xn | y1, …, yn), work through the following two different orderings, as done on the previous slides: Z, X1, …, Xn-1 and X1, …, Xn-1, Z. What is the size of the maximum factor generated for each of the orderings? Answer: 2^n versus 2 (assuming binary variables). In general: the ordering can greatly affect efficiency.

14 Variable Elimination
Interleave joining and marginalizing
d^k entries are computed for a factor over k variables with domain sizes d
The ordering of elimination of hidden variables can affect the size of the factors generated
Worst case: running time exponential in the size of the Bayes’ net

15 Approximate Inference: Sampling

16 Sampling
Basic idea:
Draw N samples from a sampling distribution S
Compute an approximate posterior probability
Show this converges to the true probability P
Why sample?
Inference: getting a sample is faster than computing the right answer (e.g. with variable elimination)

17 Sampling Basics
Sampling from a given distribution:
Step 1: Get a sample u from the uniform distribution over [0, 1). E.g. random() in Python.
Step 2: Convert this sample u into an outcome for the given distribution by associating each outcome with a sub-interval of [0, 1), with sub-interval size equal to the probability of the outcome.
Example distribution: C P(C): red 0.6, green 0.1, blue 0.3
If random() returns u = 0.83, then our sample is C = blue; repeating this 8 times gives 8 independent samples of C.
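A minimal Python sketch of this two-step procedure (the helper name sample_from is ours):

```python
import random

def sample_from(dist):
    """Sample an outcome from a discrete distribution given as {outcome: probability},
    using the sub-interval trick above."""
    u = random.random()          # Step 1: uniform sample in [0, 1)
    cumulative = 0.0
    for outcome, p in dist.items():
        cumulative += p          # end of this outcome's sub-interval
        if u < cumulative:
            return outcome       # Step 2: u falls in this outcome's sub-interval
    return outcome               # guard against floating-point round-off

# The example distribution from the slide.
P_C = {"red": 0.6, "green": 0.1, "blue": 0.3}
print([sample_from(P_C) for _ in range(8)])
```

With the outcomes laid out in dictionary order, red owns [0, 0.6), green [0.6, 0.7) and blue [0.7, 1.0), so u = 0.83 indeed maps to blue.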

18 Sampling in Bayes’ Nets
Prior Sampling
Rejection Sampling
Likelihood Weighting
Gibbs Sampling

19 Prior Sampling

20 Prior Sampling
Ignore evidence. Sample from the joint probability.
Do inference by counting the right samples.

21 Prior Sampling
Network: Cloudy → Sprinkler, Cloudy → Rain; Sprinkler, Rain → WetGrass
P(C): +c 0.5, -c 0.5
P(S | C): +c: +s 0.1, -s 0.9; -c: +s 0.5, -s 0.5
P(R | C): +c: +r 0.8, -r 0.2; -c: …
P(W | S, R): +s,+r: +w 0.99, -w 0.01; +s,-r: +w 0.90, -w 0.10; -s: …
Samples: +c, -s, +r, +w; -c, +s, -r, +w; …

22 Prior Sampling
For i = 1, 2, …, n (in topological order, parents before children):
    Sample xi from P(Xi | Parents(Xi))
Return (x1, x2, …, xn)
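A minimal Python sketch of prior sampling, using the alarm network and CPTs from slide 4 (names and structure are ours; a sketch, not the course’s code):

```python
import random

# Alarm-network CPTs from slide 4 (probability the variable is true, given its parents).
P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94, (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.9, False: 0.05}
P_M = {True: 0.7, False: 0.01}

def bernoulli(p):
    """Return True with probability p."""
    return random.random() < p

def prior_sample():
    """Sample every variable from P(Xi | Parents(Xi)), parents before children."""
    b = bernoulli(P_B)
    e = bernoulli(P_E)
    a = bernoulli(P_A[(b, e)])
    j = bernoulli(P_J[a])
    m = bernoulli(P_M[a])
    return b, e, a, j, m

# Draw samples and estimate, e.g., P(+j) directly from counts.
samples = [prior_sample() for _ in range(100000)]
print(sum(j for _, _, _, j, _ in samples) / len(samples))   # should be near 0.052
```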

23 Example We’ll get a bunch of samples from the BN:
+c, -s, +r, +w
+c, +s, +r, +w
-c, +s, +r, -w
+c, -s, +r, +w
-c, -s, -r, +w
If we want to know P(W):
We have counts <+w: 4, -w: 1>
Normalize to get P(W) = <+w: 0.8, -w: 0.2>
This will get closer to the true distribution with more samples
Can estimate anything else, too
What about P(C | +w)? P(C | +r, +w)? P(C | -r, -w)?
Fast: can use fewer samples if less time (what’s the drawback?)

24 Prior Sampling Analysis
This process generates samples with exactly the BN’s joint probability (see the derivation below).
Let the number of samples of an event be N_PS(x1, …, xn). Then the empirical estimate N_PS(x1, …, xn)/N converges to P(x1, …, xn), i.e., the sampling procedure is consistent.
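Written out in standard notation (S_PS is the distribution prior sampling draws from, and N_PS(x1, …, xn) counts matching samples out of N):

```latex
\begin{aligned}
S_{\mathrm{PS}}(x_1, \ldots, x_n) &= \prod_{i=1}^{n} P\big(x_i \mid \mathrm{Parents}(X_i)\big) \;=\; P(x_1, \ldots, x_n) \\
\lim_{N \to \infty} \frac{N_{\mathrm{PS}}(x_1, \ldots, x_n)}{N} &= S_{\mathrm{PS}}(x_1, \ldots, x_n) \;=\; P(x_1, \ldots, x_n)
\end{aligned}
```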

25 Rejection Sampling

26 Rejection Sampling
Let’s say we want P(C | +s)
Tally C outcomes, but ignore (reject) samples which don’t have S = +s
This is called rejection sampling
It is also consistent for conditional probabilities (i.e., correct in the limit)
Samples: +c, -s, +r, +w; +c, +s, +r, +w; -c, +s, +r, -w; -c, -s, -r, +w

27 Rejection Sampling
Let’s say we want P(C | +s)
Tally C outcomes, but ignore (reject) samples which don’t have S = +s
This is called rejection sampling
It is also consistent for conditional probabilities (i.e., correct in the limit)
Samples (generation is abandoned as soon as the evidence S = +s is violated): +c, -s, … (rejected); +c, +s, +r, +w; -c, +s, +r, -w; -c, -s, … (rejected)

28 Rejection Sampling
IN: evidence instantiation
For i = 1, 2, …, n:
    Sample xi from P(Xi | Parents(Xi))
    If xi is not consistent with the evidence:
        Reject: return, and no sample is generated in this cycle
Return (x1, x2, …, xn)
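A sketch of rejection sampling on the same alarm network, estimating P(+b | +j, +m); sampling is abandoned as soon as a sampled value contradicts the evidence (names are ours):

```python
import random

# Alarm-network CPTs from slide 4 (probability the variable is true, given its parents).
P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94, (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.9, False: 0.05}
P_M = {True: 0.7, False: 0.01}

def bernoulli(p):
    return random.random() < p

class Reject(Exception):
    """Raised as soon as a sampled value contradicts the evidence."""

def rejection_sample(evidence):
    """Sample in topological order; return a full assignment or None if rejected."""
    x = {}
    def sample(var, p_true):
        x[var] = bernoulli(p_true)
        if var in evidence and x[var] != evidence[var]:
            raise Reject
    try:
        sample("B", P_B)
        sample("E", P_E)
        sample("A", P_A[(x["B"], x["E"])])
        sample("J", P_J[x["A"]])
        sample("M", P_M[x["A"]])
    except Reject:
        return None
    return x

# Estimate P(+b | +j, +m): tally B only over samples consistent with the evidence.
evidence = {"J": True, "M": True}
kept = [s for s in (rejection_sample(evidence) for _ in range(500000)) if s is not None]
print(sum(s["B"] for s in kept) / len(kept))   # exact answer is about 0.284; most samples get rejected
```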

29 Likelihood Weighting

30 Likelihood Weighting
Problem with rejection sampling:
If evidence is unlikely, it rejects lots of samples
Evidence is not exploited as you sample
Consider P(Shape | blue): rejection sampling draws pairs like (pyramid, green), (pyramid, red), (sphere, blue), (cube, red), (sphere, green) and throws most of them away
Idea: fix the evidence variables and sample the rest, e.g. (pyramid, blue), (sphere, blue), (cube, blue), (sphere, blue)
Problem: the resulting sample distribution is not consistent!
Solution: weight each sample by the probability of the evidence given its parents

31 Likelihood Weighting
(Sprinkler network and CPTs as on slide 21.)
Samples: +c, +s, +r, +w; …

32 Likelihood Weighting
IN: evidence instantiation
w = 1.0
for i = 1, 2, …, n:
    if Xi is an evidence variable:
        Xi = observation xi for Xi
        Set w = w * P(xi | Parents(Xi))
    else:
        Sample xi from P(Xi | Parents(Xi))
return (x1, x2, …, xn), w
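The same alarm-network query via likelihood weighting, following the pseudocode above (a sketch under the same assumptions as the earlier snippets; names are ours):

```python
import random

# Alarm-network CPTs from slide 4 (probability the variable is true, given its parents).
P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94, (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.9, False: 0.05}
P_M = {True: 0.7, False: 0.01}

def bernoulli(p):
    return random.random() < p

def weighted_sample(evidence):
    """One likelihood-weighting pass: evidence variables are clamped and
    w accumulates P(observed value | parents); the rest are sampled normally."""
    w, x = 1.0, {}
    def handle(var, p_true):
        nonlocal w
        if var in evidence:
            x[var] = evidence[var]
            w *= p_true if evidence[var] else 1.0 - p_true   # weight by evidence likelihood
        else:
            x[var] = bernoulli(p_true)                       # sample non-evidence as usual
    handle("B", P_B)
    handle("E", P_E)
    handle("A", P_A[(x["B"], x["E"])])
    handle("J", P_J[x["A"]])
    handle("M", P_M[x["A"]])
    return x, w

# Estimate P(+b | +j, +m) as a ratio of weighted counts.
evidence = {"J": True, "M": True}
num = den = 0.0
for _ in range(200000):
    x, w = weighted_sample(evidence)
    den += w
    if x["B"]:
        num += w
print(num / den)   # should approach roughly 0.284
```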

33 Likelihood Weighting
Sampling distribution if z is sampled and e is fixed evidence
Now, samples have weights
Together, the weighted sampling distribution is consistent (see below)
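The three expressions these bullets refer to, in standard notation (z are the sampled values, e the fixed evidence):

```latex
\begin{aligned}
S_{\mathrm{WS}}(\mathbf{z}, \mathbf{e}) &= \prod_{i} P\big(z_i \mid \mathrm{Parents}(Z_i)\big) \\
w(\mathbf{z}, \mathbf{e}) &= \prod_{j} P\big(e_j \mid \mathrm{Parents}(E_j)\big) \\
S_{\mathrm{WS}}(\mathbf{z}, \mathbf{e}) \cdot w(\mathbf{z}, \mathbf{e})
  &= \prod_{i} P\big(z_i \mid \mathrm{Parents}(Z_i)\big) \prod_{j} P\big(e_j \mid \mathrm{Parents}(E_j)\big)
  \;=\; P(\mathbf{z}, \mathbf{e})
\end{aligned}
```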

34 Estimating Probabilities
If we want to know P(R | +s, +w): same as before, we have samples with +r and with -r; but instead of just counting them, take their weights into account!
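As a formula, with w^(k) the weight of the k-th sample and r^(k) its value of R:

```latex
\hat{P}(+r \mid +s, +w) \;=\; \frac{\sum_{k \,:\, r^{(k)} = +r} w^{(k)}}{\sum_{k} w^{(k)}}
```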

35 Likelihood Weighting
Likelihood weighting is good:
We have taken evidence into account as we generate the sample
E.g. here, W’s value will get picked based on the evidence values of S, R
More of our samples will reflect the state of the world suggested by the evidence
Likelihood weighting doesn’t solve all our problems:
Evidence influences the choice of downstream variables, but not upstream ones (C isn’t more likely to get a value matching the evidence)
We would like to consider evidence when we sample every variable → Gibbs sampling

36 Gibbs Sampling

37 Gibbs Sampling
Procedure: keep track of a full instantiation x1, x2, …, xn. Start with an arbitrary instantiation consistent with the evidence. Sample one variable at a time, conditioned on all the rest, but keep the evidence fixed. Keep repeating this for a long time.
Property: in the limit of repeating this infinitely many times, the resulting sample comes from the correct distribution.
Rationale: both upstream and downstream variables condition on evidence. In contrast, likelihood weighting only conditions on upstream evidence, so the weights it produces can sometimes be very small. The sum of weights over all samples indicates how many “effective” samples were obtained, so we want high weight.
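A minimal Gibbs-sampling sketch on the alarm network from slide 4, again estimating P(+b | +j, +m). It resamples one variable at a time using P(X | all other variables) ∝ P(full assignment); names are ours and, for simplicity, no burn-in period is discarded:

```python
import random

# Alarm-network CPTs from slide 4 (probability the variable is true, given its parents).
P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94, (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.9, False: 0.05}
P_M = {True: 0.7, False: 0.01}

def joint(x):
    """P(b, e, a, j, m): product of all local conditionals."""
    def p(prob_true, value):
        return prob_true if value else 1.0 - prob_true
    return (p(P_B, x["B"]) * p(P_E, x["E"]) * p(P_A[(x["B"], x["E"])], x["A"])
            * p(P_J[x["A"]], x["J"]) * p(P_M[x["A"]], x["M"]))

def gibbs(evidence, steps=200000):
    """Estimate P(B = true | evidence) by Gibbs sampling."""
    # Start from an arbitrary state consistent with the evidence.
    x = {v: evidence.get(v, random.random() < 0.5) for v in "BEAJM"}
    hidden = [v for v in "BEAJM" if v not in evidence]
    count_b = 0
    for _ in range(steps):
        var = random.choice(hidden)                 # pick a non-evidence variable
        x[var] = True
        p_true = joint(x)                           # P(X = true, all others)
        x[var] = False
        p_false = joint(x)                          # P(X = false, all others)
        x[var] = random.random() < p_true / (p_true + p_false)   # resample X | rest
        count_b += x["B"]
    return count_b / steps

print(gibbs({"J": True, "M": True}))   # should approach roughly 0.284
```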

38 Gibbs Sampling
Example: P(S | +r)
Step 1: Fix evidence: R = +r
Step 2: Initialize the other variables randomly
Step 3: Repeat:
    Choose a non-evidence variable X
    Resample X from P(X | all other variables)
(Figure: successive states of the network C, S, R, W as variables are resampled, with R clamped to +r.)

39 Efficient Resampling of One Variable
Sample from P(S | +c, +r, -w)
Many things cancel out: only the CPTs involving S remain!
More generally: only the CPTs that mention the resampled variable need to be considered, and joined together
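The cancellation written out for the sprinkler network (C → S, C → R; S, R → W):

```latex
P(S \mid +c, +r, -w)
 \;=\; \frac{P(+c)\,P(S \mid +c)\,P(+r \mid +c)\,P(-w \mid S, +r)}
            {\sum_{s} P(+c)\,P(s \mid +c)\,P(+r \mid +c)\,P(-w \mid s, +r)}
 \;=\; \frac{P(S \mid +c)\,P(-w \mid S, +r)}
            {\sum_{s} P(s \mid +c)\,P(-w \mid s, +r)}
```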

40 Bayes’ Net Sampling Summary
Prior Sampling: P
Rejection Sampling: P(Q | e)
Likelihood Weighting: P(Q | e)
Gibbs Sampling: P(Q | e)

41 Further Reading on Gibbs Sampling*
Gibbs sampling produces a sample from the query distribution P(Q | e) in the limit of re-sampling infinitely often
Gibbs sampling is a special case of more general methods called Markov chain Monte Carlo (MCMC) methods
Metropolis-Hastings is one of the more famous MCMC methods (in fact, Gibbs sampling is a special case of Metropolis-Hastings)
You may read about Monte Carlo methods – they’re just sampling

