
1 Bayesian Networks Tamara Berg CS 590-133 Artificial Intelligence Many slides throughout the course adapted from Svetlana Lazebnik, Dan Klein, Stuart Russell, Andrew Moore, Percy Liang, Luke Zettlemoyer, Rob Pless, Kilian Weinberger, Deva Ramanan

2 Announcements Some students in the back are having trouble hearing the lecture due to talking. Please respect your fellow students. If you have a question or comment relevant to the course please share with all of us. Otherwise, don’t talk during lecture. Also, if you are having trouble hearing in the back there are plenty of seats further forward.

3 Reminder HW3 was released 2/27 –Written questions only (no programming) –Due Tuesday, 3/18, 11:59pm

4 From last class

5 Random Variables Random variables X_1, ..., X_N; let x_i be a realization of X_i. A random variable is some aspect of the world about which we (may) have uncertainty. Random variables can be: binary (e.g. {true, false}, {spam, ham}), take on a discrete set of values (e.g. {Spring, Summer, Fall, Winter}), or be continuous (e.g. [0, 1]).

6 Joint Probability Distribution Random variables X_1, ..., X_N; let x_i be a realization of X_i. Joint probability distribution: P(X_1 = x_1, ..., X_N = x_N), also written P(x_1, ..., x_N). Gives a real value for all possible assignments.

7 Queries Joint probability distribution: P(X_1 = x_1, ..., X_N = x_N), also written P(x_1, ..., x_N). Given a joint distribution, we can reason about unobserved variables given observations (evidence): P(stuff you care about | stuff you already know)

8 Main kinds of models Undirected (also called Markov Random Fields) - links express constraints between variables. Directed (also called Bayesian Networks) - have a notion of causality -- one can regard an arc from A to B as indicating that A "causes" B.

9 Syntax  Directed Acyclic Graph (DAG)  Nodes: random variables  Can be assigned (observed) or unassigned (unobserved)  Arcs: interactions  An arrow from one variable to another indicates direct influence  Encode conditional independence  Weather is independent of the other variables  Toothache and Catch are conditionally independent given Cavity  Must form a directed, acyclic graph (Nodes in the example: Weather, Cavity, Toothache, Catch)

10 Bayes Nets Directed graph G = (X, E): nodes X (one per random variable) and directed edges E. Each node is associated with a random variable.

11 Example

12 Joint Distribution By the chain rule (using the usual arithmetic ordering): P(x_1, ..., x_N) = P(x_1) P(x_2 | x_1) P(x_3 | x_1, x_2) … P(x_N | x_1, ..., x_{N-1})

13 Directed Graphical Models Directed graph G = (X, E): nodes and edges, with each node associated with a random variable. Definition of the joint probability in a graphical model: P(x_1, ..., x_N) = ∏_i P(x_i | x_parents(i)), where x_parents(i) are the values of the parents of node i.
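A minimal sketch of this factorization in Python; the three-node chain A → B → C and every CPT number below are made up purely for illustration:

```python
# Joint probability of a full assignment in a Bayes net:
# P(x1, ..., xn) = product over i of P(xi | parents(xi)).
# The network A -> B -> C and all CPT numbers below are illustrative only.

parents = {"A": [], "B": ["A"], "C": ["B"]}

# cpt[X][(parent values...)][value] = P(X = value | parents)
cpt = {
    "A": {(): {True: 0.3, False: 0.7}},
    "B": {(True,): {True: 0.9, False: 0.1}, (False,): {True: 0.2, False: 0.8}},
    "C": {(True,): {True: 0.5, False: 0.5}, (False,): {True: 0.1, False: 0.9}},
}

def joint(assignment):
    """P(assignment) = prod_i P(x_i | parents(x_i))."""
    p = 1.0
    for var, value in assignment.items():
        parent_vals = tuple(assignment[q] for q in parents[var])
        p *= cpt[var][parent_vals][value]
    return p

print(joint({"A": True, "B": True, "C": False}))  # 0.3 * 0.9 * 0.5 = 0.135
```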

14 Example Joint probability: the product of the local conditional probabilities, for the example graph shown on the slide.

15 Example (conditional probability tables for the example network are shown on the slide)

16 Size of a Bayes’ Net How big is a joint distribution over N Boolean variables? 2^N How big is an N-node net if nodes have up to k parents? O(N * 2^(k+1)) Both give you the power to calculate P(X_1, ..., X_N) BNs: Huge space savings! Also easier to elicit local CPTs Also turns out to be faster to answer queries
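As a concrete check on these counts (the numbers N = 30 and k = 4 are chosen just for illustration): a full joint over 30 Boolean variables has 2^30 ≈ 10^9 entries, while a 30-node net whose nodes each have at most 4 parents needs at most 30 · 2^5 = 960 CPT entries.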

17 The joint probability distribution  For example, P(j, m, a, ¬b, ¬e) = P(¬b) P(¬e) P(a | ¬b, ¬e) P(j | a) P(m | a)
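Plugging in the standard textbook CPTs for this burglary/earthquake alarm network (they are not shown on this slide, so treat the numbers as an assumption): P(¬b) = 0.999, P(¬e) = 0.998, P(a | ¬b, ¬e) = 0.001, P(j | a) = 0.9, P(m | a) = 0.7, so P(j, m, a, ¬b, ¬e) = 0.999 × 0.998 × 0.001 × 0.9 × 0.7 ≈ 0.00063.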

18 Independence in a BN Important question about a BN: –Are two nodes independent given certain evidence? –If yes, can prove using algebra (tedious in general) –If no, can prove with a counter example –Example: X → Y → Z –Question: are X and Z necessarily independent? Answer: no. Example: low pressure causes rain, which causes traffic. X can influence Z, Z can influence X (via Y) Addendum: they could be independent: how?

19 Causal Chains This configuration, X → Y → Z, is a “causal chain” (X: project due, Y: no office hours, Z: students panic) –Is Z independent of X given Y? Yes! –Evidence along the chain “blocks” the influence

20 Common Cause Another basic configuration: two effects of the same cause, X ← Y → Z (Y: homework due, X: full attendance, Z: students sleepy) –Are X and Z independent? –Are X and Z independent given Y? Yes! –Observing the cause blocks influence between the effects.

21 Common Effect Last configuration: two causes of one effect (v-structures), X → Y ← Z (X: raining, Z: ballgame, Y: traffic) –Are X and Z independent? Yes: the ballgame and the rain cause traffic, but they are not correlated (still need to prove they must be; try it!) –Are X and Z independent given Y? No: seeing traffic puts the rain and the ballgame in competition as explanations –This is backwards from the other cases: observing an effect activates influence between its possible causes.

22 The General Case Any complex example can be analyzed using these three canonical cases General question: in a given BN, are two variables independent (given evidence)? Solution: analyze the graph Canonical cases: causal chain, common cause, (unobserved) common effect

23 Bayes Ball Shade all observed nodes. Place balls at the starting node, let them bounce around according to some rules, and ask whether any of the balls reach any of the goal nodes. We need to know what happens when a ball arrives at a node on its way to the goal node.
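A compact Python sketch of this check (the standard reachability formulation of Bayes ball / d-separation; the rain/traffic/ballgame test network at the bottom follows the earlier slides, everything else is illustrative):

```python
# Bayes-ball style check: are x and y d-separated given evidence?
# A sketch of the standard reachability algorithm over (node, direction) pairs.

def d_separated(parents, x, y, evidence):
    evidence = set(evidence)
    # Build a child map from the parent map.
    children = {v: [] for v in parents}
    for v, ps in parents.items():
        for p in ps:
            children.setdefault(p, []).append(v)

    # Nodes that are evidence or ancestors of evidence
    # (a v-structure is active exactly at these nodes).
    anc, stack = set(), list(evidence)
    while stack:
        n = stack.pop()
        if n not in anc:
            anc.add(n)
            stack.extend(parents[n])

    # Explore (node, direction) pairs.
    # 'up'   = we arrived at the node from one of its children,
    # 'down' = we arrived at the node from one of its parents.
    visited, frontier = set(), [(x, "up")]
    while frontier:
        node, direction = frontier.pop()
        if (node, direction) in visited:
            continue
        visited.add((node, direction))
        if node == y and node not in evidence:
            return False  # found an active trail from x to y
        if direction == "up" and node not in evidence:
            # chain or common cause through an unobserved node: keep going
            frontier += [(p, "up") for p in parents[node]]
            frontier += [(c, "down") for c in children[node]]
        elif direction == "down":
            if node not in evidence:
                # chain through an unobserved node
                frontier += [(c, "down") for c in children[node]]
            if node in anc:
                # v-structure: active if the node (or a descendant) is observed
                frontier += [(p, "up") for p in parents[node]]
    return True  # no active trail: x and y are d-separated given evidence

# Common-effect example from the slides: Rain -> Traffic <- Ballgame.
g = {"Rain": [], "Ballgame": [], "Traffic": ["Rain", "Ballgame"]}
print(d_separated(g, "Rain", "Ballgame", []))           # True: independent
print(d_separated(g, "Rain", "Ballgame", ["Traffic"]))  # False: observing Traffic links them
```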


25 Example (a network with nodes R, T, B, T' is shown on the slide; answer: Yes)

26 Bayesian decision making Suppose the agent has to make decisions about the value of an unobserved query variable X based on the values of an observed evidence variable E Inference problem: given some evidence E = e, what is P(X | e)? Learning problem: estimate the parameters of the probabilistic model P(X | E) given training samples {(x_1, e_1), …, (x_n, e_n)}

27 Inference Graphs can have observed (shaded) and unobserved nodes. If nodes are always unobserved, they are called hidden or latent variables. Probabilistic inference is the problem of computing a conditional probability distribution over the values of some of the nodes (the “hidden” or “unobserved” nodes), given the values of other nodes (the “evidence” or “observed” nodes).

28 Probabilistic inference  A general scenario:  Query variables: X  Evidence (observed) variables: E = e  Unobserved variables: Y  If we know the full joint distribution P(X, E, Y), how can we perform inference about X?

29 Inference Inference: calculating some useful quantity from a joint probability distribution Examples: –Posterior probability: P(Q | E_1 = e_1, ..., E_k = e_k) –Most likely explanation: argmax_q P(Q = q | e_1, ..., e_k) (Example: the burglary/earthquake alarm network B, E → A → J, M.)

30 Inference – computing conditional probabilities Marginalization: P(X) = Σ_e P(X, e) Conditional probabilities: P(X | e) = P(X, e) / P(e) = P(X, e) / Σ_x P(x, e)

31 Inference by Enumeration Given unlimited time, inference in BNs is easy Recipe: –State the marginal probabilities you need –Figure out ALL the atomic probabilities you need –Calculate and combine them (Example: the burglary/earthquake alarm network B, E → A → J, M.)
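A runnable sketch of this recipe in Python. It uses the Cloudy/Sprinkler/Rain/WetGrass network whose CPTs appear on the forward-sampling slide later in this deck; the code organization itself is just one possible way to write the enumeration:

```python
from itertools import product
from math import prod

# Inference by enumeration on the Cloudy/Sprinkler/Rain/WetGrass network.
# CPT numbers are taken from the forward-sampling slide later in this deck.
parents = {"C": [], "S": ["C"], "R": ["C"], "W": ["S", "R"]}
cpt = {  # P(var = True | parent values)
    "C": {(): 0.5},
    "S": {(True,): 0.1, (False,): 0.5},
    "R": {(True,): 0.8, (False,): 0.2},
    "W": {(True, True): 0.99, (True, False): 0.90,
          (False, True): 0.90, (False, False): 0.01},
}

def prob(var, value, assignment):
    """P(var = value | parents) read off the CPT (binary variables)."""
    p_true = cpt[var][tuple(assignment[p] for p in parents[var])]
    return p_true if value else 1.0 - p_true

def joint(assignment):
    return prod(prob(v, assignment[v], assignment) for v in parents)

def query(qvar, evidence):
    """P(qvar | evidence) by summing the relevant joint entries, then normalizing."""
    totals = {True: 0.0, False: 0.0}
    hidden = [v for v in parents if v != qvar and v not in evidence]
    for qval in (True, False):
        for vals in product([True, False], repeat=len(hidden)):
            a = dict(evidence, **{qvar: qval}, **dict(zip(hidden, vals)))
            totals[qval] += joint(a)
    z = totals[True] + totals[False]
    return {k: v / z for k, v in totals.items()}

print(query("R", {"W": True}))  # P(Rain | WetGrass = true)
```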

32 Example: Enumeration In this simple method, we only need the BN to synthesize the joint entries

33 Probabilistic inference  A general scenario:  Query variables: X  Evidence (observed) variables: E = e  Unobserved variables: Y  If we know the full joint distribution P(X, E, Y), how can we perform inference about X?  Problems  Full joint distributions are too large  Marginalizing out Y may involve too many summation terms

34 Inference by Enumeration?

35 Variable Elimination Why is inference by enumeration on a Bayes Net inefficient? –You join up the whole joint distribution before you sum out the hidden variables –You end up repeating a lot of work! Idea: interleave joining and marginalizing! –Called “Variable Elimination” –Choosing the order in which to eliminate variables so as to minimize work is NP-hard, but *anything* sensible is much faster than inference by enumeration

36 General Variable Elimination Query: P(Q | E_1 = e_1, ..., E_k = e_k) Start with initial factors: –Local CPTs (but instantiated by evidence) While there are still hidden variables (not Q or evidence): –Pick a hidden variable H –Join all factors mentioning H –Eliminate (sum out) H Join all remaining factors and normalize
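A minimal factor-based sketch of this loop in Python: factors are (variable list, table) pairs, joining is pointwise multiplication, and elimination is summing out. The tiny A → B example at the bottom and its numbers are illustrative only:

```python
from itertools import product

def join(f1, f2):
    """Pointwise product of two factors over the union of their variables."""
    (v1, t1), (v2, t2) = f1, f2
    vars_ = list(v1) + [v for v in v2 if v not in v1]
    table = {}
    for vals in product([True, False], repeat=len(vars_)):
        a = dict(zip(vars_, vals))
        table[vals] = (t1[tuple(a[v] for v in v1)] *
                       t2[tuple(a[v] for v in v2)])
    return vars_, table

def sum_out(var, factor):
    """Marginalize var out of a factor."""
    vars_, table = factor
    keep, out = [v for v in vars_ if v != var], {}
    for vals, p in table.items():
        key = tuple(v for v, name in zip(vals, vars_) if name != var)
        out[key] = out.get(key, 0.0) + p
    return keep, out

def eliminate(factors, hidden_order):
    """Join-and-sum-out each hidden variable, then join whatever is left."""
    factors = list(factors)
    for h in hidden_order:
        touching = [f for f in factors if h in f[0]]
        rest = [f for f in factors if h not in f[0]]
        joined = touching[0]
        for f in touching[1:]:
            joined = join(joined, f)
        factors = rest + [sum_out(h, joined)]
    result = factors[0]
    for f in factors[1:]:
        result = join(result, f)
    return result

# Tiny chain A -> B: factors P(A) and P(B | A); eliminating A leaves P(B).
pA  = (["A"], {(True,): 0.3, (False,): 0.7})
pBA = (["A", "B"], {(True, True): 0.9, (True, False): 0.1,
                    (False, True): 0.2, (False, False): 0.8})
print(eliminate([pA, pBA], ["A"]))  # P(B=True) = 0.3*0.9 + 0.7*0.2 = 0.41
```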

37 Example: Variable elimination
Query: What is the probability that a student attends class, given that they pass the exam? [based on slides taken from UMBC CMSC 671, 2005]
Nodes: attend, study, fair, prepared, pass.   Priors: P(at) = .8, P(st) = .6, P(fa) = .9
P(pr | at, st):   at st = T T: 0.9,   T F: 0.5,   F T: 0.7,   F F: 0.1
P(pa | pr, at, fa):   pr at fa = T T T: 0.9,   T T F: 0.1,   T F T: 0.7,   T F F: 0.1,
                                 F T T: 0.7,   F T F: 0.1,   F F T: 0.2,   F F F: 0.1

38 Join study factors
Join P(pr | at, st) with P(st) to form the joint P(pr, st | at); the last column shows the marginal P(pr | at) obtained by summing out study (next step). P(at) = .8, P(fa) = .9 and the P(pa | pr, at, fa) table carry over unchanged.
prep study attend | P(pr|at,st)  P(st) | joint P(pr,st|at) | marginal P(pr|at)
  T    T     T    |    0.9        0.6  |      0.54         |      0.74
  T    F     T    |    0.5        0.4  |      0.2          |
  T    T     F    |    0.7        0.6  |      0.42         |      0.46
  T    F     F    |    0.1        0.4  |      0.04         |
  F    T     T    |    0.1        0.6  |      0.06         |      0.26
  F    F     T    |    0.5        0.4  |      0.2          |
  F    T     F    |    0.3        0.6  |      0.18         |      0.54
  F    F     F    |    0.9        0.4  |      0.36         |

39 Marginalize out study
Summing the joint P(pr, st | at) over study gives the marginal P(pr | at) in the last column of the table above; study is eliminated and the merged "prepared, study" node reduces to prepared.

40 Remove “study”
Remaining factors: P(at) = .8, P(fa) = .9, the P(pa | pr, at, fa) table, and
P(pr | at):   pr at = T T: 0.74,   T F: 0.46,   F T: 0.26,   F F: 0.54

41 Join factors “fair”
Join P(pa | pr, at, fa) with P(fa); the last column shows the marginal P(pa | at, pre) obtained by summing out fair (next step). P(at) = .8 and P(pr | at) carry over unchanged.
pa  pre  attend  fair | P(pa|pr,at,fa)  P(fair) | joint P(pa,fa|at,pre) | marginal P(pa|at,pre)
 t   T     T      T   |     0.9           0.9   |        0.81           |        0.82
 t   T     T      F   |     0.1           0.1   |        0.01           |
 t   T     F      T   |     0.7           0.9   |        0.63           |        0.64
 t   T     F      F   |     0.1           0.1   |        0.01           |
 t   F     T      T   |     0.7           0.9   |        0.63           |        0.64
 t   F     T      F   |     0.1           0.1   |        0.01           |
 t   F     F      T   |     0.2           0.9   |        0.18           |        0.19
 t   F     F      F   |     0.1           0.1   |        0.01           |

42 Marginalize out “fair”
Summing the joint P(pa, fa | at, pre) over fair gives the marginal P(pa | at, pre) in the last column of the table above; fair is eliminated and the merged "pass, fair" node reduces to pass.

43 Marginalize out “fair”
Remaining factors: P(at) = .8, P(pr | at) (as above), and
P(pa | at, pre):   pa pre at = t T T: 0.82,   t T F: 0.64,   t F T: 0.64,   t F F: 0.19

44 Join factors “prepared”
Join P(pa | at, pre) with P(pr | at); the last column shows the marginal P(pa | at) obtained by summing out prepared (next step). P(at) = .8 carries over unchanged.
pa  pre  attend | P(pa|at,pr)  P(pr|at) | joint P(pa,pr|at) | marginal P(pa|at)
 t   T     T    |    0.82        0.74   |      0.6068       |      0.7732
 t   T     F    |    0.64        0.46   |      0.2944       |      0.397
 t   F     T    |    0.64        0.26   |      0.1664       |
 t   F     F    |    0.19        0.54   |      0.1026       |

45 Join factors “prepared”
Same table as above; joining merges prepared into the "pass, prepared" node, which summing out prepared then reduces to pass.

46 Join factors “prepared”
Remaining factors: P(at) = .8 and
P(pa | at):   pa at = t T: 0.7732,   t F: 0.397

47 Join factors
Join P(pa | at) with P(at), then normalize to answer the query.
pa  attend | P(pa|at)  P(at) | joint P(pa, at) | normalized P(at|pa)
 T    T    |  0.7732    0.8  |     0.61856     |       0.89
 T    F    |  0.397     0.2  |     0.0794      |       0.11

48 Join factors
Same table as above; the network collapses to the single merged "attend, pass" node, and the normalized column answers the query: P(at | pa) ≈ 0.89.
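As a quick arithmetic check on the final normalization: 0.61856 / (0.61856 + 0.0794) = 0.61856 / 0.69796 ≈ 0.886 and 0.0794 / 0.69796 ≈ 0.114, which match the rounded 0.89 and 0.11 in the table.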

49 Bayesian network inference: Big picture Exact inference is intractable –There exist techniques to speed up computations, but worst-case complexity is still exponential except in some classes of networks Approximate inference –Sampling, variational methods, message passing / belief propagation…

50 Approximate Inference: Sampling (particle-based methods)


52 Sampling – the basics... Scrooge McDuck gives you an ancient coin. He wants to know what P(H) is. You have no homework, and nothing good is on television – so you toss it 1 million times. You obtain 700,000 heads and 300,000 tails. What is P(H)?

53 Sampling – the basics... Exactly: the relative-frequency estimate is P(H) = 700,000 / 1,000,000 = 0.7. Why?

54 Monte Carlo Method Who is more likely to win: Green or Purple? What is the probability that green wins, P(G)? Two ways to solve this: 1. Compute the exact probability. 2. Play 100,000 games and see how many times green wins.

55 Approximate Inference Simulation has a name: sampling Sampling is a hot topic in machine learning, and it’s really simple Basic idea: –Draw N samples from a sampling distribution S –Compute an approximate posterior probability –Show this converges to the true probability P Why sample? –Learning: get samples from a distribution you don’t know –Inference: getting a sample is faster than computing the right answer (e.g. with variable elimination)

56 Forward Sampling
Network: Cloudy → Sprinkler, Cloudy → Rain, Sprinkler and Rain → WetGrass.
P(C):        +c: 0.5,  -c: 0.5
P(S | C):    +c: +s 0.1, -s 0.9;   -c: +s 0.5, -s 0.5
P(R | C):    +c: +r 0.8, -r 0.2;   -c: +r 0.2, -r 0.8
P(W | S, R): +s +r: +w 0.99, -w 0.01;   +s -r: +w 0.90, -w 0.10;
             -s +r: +w 0.90, -w 0.10;   -s -r: +w 0.01, -w 0.99
Samples:  +c, -s, +r, +w    -c, +s, -r, +w    …
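A small forward-sampling sketch in Python using exactly the CPT values above (parents are sampled before children; the code layout is one of several reasonable choices):

```python
import random

# Forward (prior) sampling for the Cloudy/Sprinkler/Rain/WetGrass network,
# using the CPT values shown on this slide. Variables are sampled in a
# topological order so every node's parents are available when it is drawn.
p_c = 0.5
p_s = {True: 0.1, False: 0.5}                     # P(+s | c)
p_r = {True: 0.8, False: 0.2}                     # P(+r | c)
p_w = {(True, True): 0.99, (True, False): 0.90,   # P(+w | s, r)
       (False, True): 0.90, (False, False): 0.01}

def sample_once():
    c = random.random() < p_c
    s = random.random() < p_s[c]
    r = random.random() < p_r[c]
    w = random.random() < p_w[(s, r)]
    return c, s, r, w

samples = [sample_once() for _ in range(100_000)]
print(sum(w for *_, w in samples) / len(samples))  # estimate of P(+w)
```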

57 Forward Sampling This process generates samples with probability S_PS(x_1, ..., x_n) = ∏_i P(x_i | Parents(X_i)) = P(x_1, ..., x_n) …i.e. the BN’s joint probability. Let N_PS(x_1, ..., x_n) be the number of samples of an event. Then the estimate N_PS(x_1, ..., x_n) / N converges to P(x_1, ..., x_n) as the number of samples N grows. I.e., the sampling procedure is consistent.

58 Example We’ll get a bunch of samples from the BN: +c, -s, +r, +w    +c, +s, +r, +w    -c, +s, +r, -w    +c, -s, +r, +w    -c, -s, -r, +w If we want to know P(W) –We have counts <+w: 4, -w: 1> –Normalize to get P(W) ≈ <+w: 0.8, -w: 0.2> –This will get closer to the true distribution with more samples –Can estimate anything else, too –What about P(C | +w)? P(C | +r, +w)? P(C | -r, -w)? –Fast: can use fewer samples if less time (what’s the drawback?)

59 Rejection Sampling Let’s say we want P(C) –No point keeping all samples around –Just tally counts of C as we go Let’s say we want P(C | +s) –Same thing: tally C outcomes, but ignore (reject) samples which don’t have S = +s –This is called rejection sampling –It is also consistent for conditional probabilities (i.e., correct in the limit) Samples: +c, -s, +r, +w    +c, +s, +r, +w    -c, +s, +r, -w    +c, -s, +r, +w    -c, -s, -r, +w

60 Sampling Example There are 2 cups. –The first contains 1 penny and 1 quarter –The second contains 2 quarters Say I pick a cup uniformly at random, then pick a coin randomly from that cup. It's a quarter (yes!). What is the probability that the other coin in that cup is also a quarter?
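The exact answer works out to 2/3 (the observed quarter is twice as likely to have come from the all-quarter cup), and a quick rejection-sampling simulation, sketched below with illustrative names, gives the same number:

```python
import random

# Rejection-sampling estimate for the two-cup puzzle: keep only the trials
# where the drawn coin is a quarter, then count how often the other coin in
# the same cup is also a quarter. (Exact answer: 2/3.)
cups = [["penny", "quarter"], ["quarter", "quarter"]]

kept = hits = 0
for _ in range(100_000):
    cup = random.choice(cups)
    drawn, other = random.sample(cup, 2)   # draw one coin, the other stays
    if drawn != "quarter":
        continue                           # reject: evidence doesn't match
    kept += 1
    hits += (other == "quarter")
print(hits / kept)  # ~0.667
```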

61 Likelihood Weighting Problem with rejection sampling: –If evidence is unlikely, you reject a lot of samples –You don’t exploit your evidence as you sample –Consider P(B | +a) Idea: fix evidence variables and sample the rest Problem: the sample distribution is not consistent! Solution: weight each sample by the probability of the evidence given its parents. (Burglary → Alarm example shown on the slide.)

62 Likelihood Weighting Sampling distribution when z is sampled and e is fixed evidence: S_WS(z, e) = ∏_i P(z_i | Parents(Z_i)). Now, samples have weights w(z, e) = ∏_j P(e_j | Parents(E_j)). Together, the weighted sampling distribution is consistent: S_WS(z, e) · w(z, e) = ∏_i P(z_i | Parents(Z_i)) · ∏_j P(e_j | Parents(E_j)) = P(z, e).

63 Likelihood Weighting
(Same Cloudy/Sprinkler/Rain/WetGrass network and CPTs as on the forward-sampling slide; evidence variables are fixed rather than sampled, and each sample is weighted.)
Samples:  +c, +s, +r, +w  …

64 Likelihood Weighting Example
Inference: sum over the weights of the samples that match the query value, then divide by the total sample weight. What is P(C | +w, +r)?
Cloudy  Rainy  Sprinkler  WetGrass | Weight
  0       1       1          1     | 0.495
  0       0       1          1     | 0.45
  0       0       1          1     | 0.45
  0       0       1          1     | 0.45
  1       0       1          1     | 0.09
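A likelihood-weighting sketch in Python for the query stated on this slide, P(C | +w, +r), reusing the CPTs from the forward-sampling slide; this is one straightforward way to code it, not the only one:

```python
import random

# Likelihood weighting for P(C | +r, +w) in the Cloudy/Sprinkler/Rain/WetGrass
# network: sample the non-evidence variables from their CPTs, fix R = true and
# W = true, and weight each sample by P(+r | c) * P(+w | s, +r).
p_c = 0.5
p_s = {True: 0.1, False: 0.5}                     # P(+s | c)
p_r = {True: 0.8, False: 0.2}                     # P(+r | c)
p_w = {(True, True): 0.99, (True, False): 0.90,   # P(+w | s, r)
       (False, True): 0.90, (False, False): 0.01}

num = den = 0.0
for _ in range(100_000):
    c = random.random() < p_c            # sampled (not evidence)
    s = random.random() < p_s[c]         # sampled (not evidence)
    weight = p_r[c] * p_w[(s, True)]     # evidence likelihood given parents
    den += weight
    if c:
        num += weight
print(num / den)  # estimate of P(+c | +r, +w)
```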

65 Likelihood Weighting Likelihood weighting is good –We have taken evidence into account as we generate the sample –E.g. here, W’s value will get picked based on the evidence values of S, R –More of our samples will reflect the state of the world suggested by the evidence Likelihood weighting doesn’t solve all our problems –Evidence influences the choice of downstream variables, but not upstream ones (C isn’t more likely to get a value matching the evidence) We would like to consider evidence when we sample every variable

66 Markov Chain Monte Carlo* Idea: instead of sampling from scratch, create samples that are each like the last one. Procedure: resample one variable at a time, conditioned on all the rest, but keep evidence fixed. E.g., for P(B | +c): resample B and A in turn while C stays clamped to +c, giving a chain of samples such as [+b, +a, +c] → [-b, +a, +c] → [-b, -a, +c] → … Properties: Now samples are not independent (in fact they’re nearly identical), but sample averages are still consistent estimators! What’s the point: both upstream and downstream variables condition on evidence.

67 Gibbs Sampling 1. Set all evidence E to e 2. Do forward sampling to obtain x_1, ..., x_n 3. Repeat: 1. Pick any variable X_i uniformly at random. 2. Resample x_i' from p(X_i | x_1, ..., x_{i-1}, x_{i+1}, ..., x_n) 3. Set all other x_j' = x_j 4. The new sample is x_1', ..., x_n'

68 Markov Blanket Markov blanket of X: 1. All parents of X 2. All children of X 3. All parents of children of X (except X itself) X is conditionally independent of all other variables in the BN, given all variables in its Markov blanket (besides X).
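A Gibbs-sampling sketch in Python for the same network and query, where each variable is resampled from a conditional computed over its Markov blanket; the structure and CPTs come from the earlier sampling slides, and the generic helper functions are illustrative:

```python
import random

# Gibbs sampling for P(+c | +r, +w): repeatedly resample each non-evidence
# variable from P(X | Markov blanket of X), keeping the evidence fixed.
parents = {"C": [], "S": ["C"], "R": ["C"], "W": ["S", "R"]}
children = {"C": ["S", "R"], "S": ["W"], "R": ["W"], "W": []}
cpt = {  # P(var = True | parent values), parents in the order listed above
    "C": {(): 0.5},
    "S": {(True,): 0.1, (False,): 0.5},
    "R": {(True,): 0.8, (False,): 0.2},
    "W": {(True, True): 0.99, (True, False): 0.90,
          (False, True): 0.90, (False, False): 0.01},
}

def prob(var, state):
    """P(var = state[var] | parents) read off the CPT."""
    p = cpt[var][tuple(state[q] for q in parents[var])]
    return p if state[var] else 1.0 - p

def resample(var, state):
    """Draw var from P(var | everything else) = P(var | Markov blanket)."""
    weights = {}
    for val in (True, False):
        s = dict(state, **{var: val})
        w = prob(var, s)                 # P(var | its parents)
        for ch in children[var]:
            w *= prob(ch, s)             # P(child | child's parents)
        weights[val] = w
    return random.random() < weights[True] / (weights[True] + weights[False])

state = {"C": True, "S": True, "R": True, "W": True}   # evidence: R, W = true
count = total = 0
for _ in range(100_000):                 # no burn-in; fine for a sketch
    for var in ("C", "S"):               # only non-evidence variables
        state[var] = resample(var, state)
    count += state["C"]
    total += 1
print(count / total)  # estimate of P(+c | +r, +w)
```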

69 Inference Algorithms Exact algorithms –Elimination algorithm –Sum-product algorithm –Junction tree algorithm Sampling algorithms –Importance sampling –Markov chain Monte Carlo Variational algorithms –Mean field methods –Sum-product algorithm and variations –Semidefinite relaxations

70 Summary Sampling can be your salvation. It is a dominant approach to approximate inference in BNs. Approaches: –Forward (/Prior) Sampling –Rejection Sampling –Likelihood Weighted Sampling –Gibbs Sampling

