
1 Generalizing Variable Elimination in Bayesian Networks. Graduate School, University of Seoul, Department of Electrical and Computer Engineering. G20124901 박민규

2 Background knowledge

3 Bayes’ theorem P(A,B) = P(A|B) P(B) = P(B|A) P(A), which rearranges to Bayes’ theorem: P(A|B) = P(B|A) P(A) / P(B)
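A quick numeric check of that rearrangement (the probabilities below are illustrative, not from the slides):

```python
# Verify Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B), on made-up numbers.
p_a = 0.01            # prior P(A)
p_b_given_a = 0.9     # likelihood P(B|A)
p_b_given_not_a = 0.2

# Total probability: P(B) = P(B|A)P(A) + P(B|~A)P(~A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)    # posterior P(A|B) ~= 0.0435
```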

4 Bayesian Networks A probabilistic graphical model that represents a set of random variables and their conditional dependencies via a directed acyclic graph.

5 Bayesian Networks Independence assumption –If there are n binary random variables, the complete joint distribution is specified by 2^n − 1 probabilities. –For n = 5, 2^n − 1 = 31, but here only 10 values are needed; for n = 10, 21 values suffice. –Bayesian networks have built-in independence assumptions.
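A sketch of that count, assuming binary variables and the five-node dog-problem structure that appears later in the talk (the n = 10 case depends on a structure not shown here):

```python
# Free parameters for binary variables: full joint vs. factored network.
# Structure assumed: Charniak's dog problem (used in the later slides).
parents = {
    "family-out": [], "bowel-problem": [],
    "light-on": ["family-out"],
    "dog-out": ["family-out", "bowel-problem"],
    "hear-bark": ["dog-out"],
}
n = len(parents)
full_joint = 2 ** n - 1                                  # 31
factored = sum(2 ** len(ps) for ps in parents.values())  # 1+1+2+4+2 = 10
print(full_joint, factored)                              # 31 10
```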

6 Bayesian Networks Independence assumption –d-separation: a graphical test of independence between variables in a directed acyclic graph.
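A minimal sketch of a d-separation test, using the equivalent moral-graph criterion (X and Y are d-separated by Z iff they are disconnected in the moralized ancestral graph of {X, Y} ∪ Z once Z is removed); the network structure is again the assumed dog problem, and the final prints answer the question posed on the next two slides:

```python
# d-separation via the moral graph of the ancestral set.
parents = {
    "family-out": [], "bowel-problem": [],
    "light-on": ["family-out"],
    "dog-out": ["family-out", "bowel-problem"],
    "hear-bark": ["dog-out"],
}

def ancestral_set(nodes):
    seen, stack = set(), list(nodes)
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(parents[v])
    return seen

def d_separated(x, y, z):
    keep = ancestral_set({x, y} | z)
    adj = {v: set() for v in keep}
    for v in keep:                    # moralize: link each parent to the
        ps = parents[v]               # child and to every co-parent,
        for p in ps:                  # then drop edge directions
            adj[v].add(p); adj[p].add(v)
        for i, p in enumerate(ps):
            for q in ps[i + 1:]:
                adj[p].add(q); adj[q].add(p)
    reach, stack = set(), [x]         # reachability, never entering Z
    while stack:
        v = stack.pop()
        if v not in reach and v not in z:
            reach.add(v)
            stack.extend(adj[v])
    return y not in reach

print(d_separated("bowel-problem", "family-out", set()))        # True
print(d_separated("bowel-problem", "family-out", {"dog-out"}))  # False
```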

7 Bayesian Networks Independence assumption –Q: Is BP (bowel-problem) dependent on FO (family-out)? The path between them runs through a converging node.

8 Bayesian Networks Independence assumption –Q: Is BP independent of FO once evidence is observed at the converging node?

9 Bayesian Networks Independence assumption –Joint distribution ← chain rule –Marginal independence: A ⊥ B ⇔ P(A|B) = P(A), P(B|A) = P(B) –Conditional independence: A ⊥ B|C ⇔ P(A|B,C) = P(A|C), P(B|A,C) = P(B|C)
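A numeric check of the conditional-independence definition on an assumed three-node chain A → B → C, whose joint is built with the chain rule:

```python
import itertools

# Chain A -> B -> C: p(a, b, c) = p(a) p(b|a) p(c|b), so A ⊥ C | B.
p_a = {0: 0.7, 1: 0.3}
p_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # p_b[a][b] = p(b | a)
p_c = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.5, 1: 0.5}}   # p_c[b][c] = p(c | b)

def joint(a, b, c):
    return p_a[a] * p_b[a][b] * p_c[b][c]

def cond(c, b, a=None):
    """p(C=c | B=b) if a is None, else p(C=c | A=a, B=b)."""
    aa = [0, 1] if a is None else [a]
    num = sum(joint(x, b, c) for x in aa)
    den = sum(joint(x, b, y) for x in aa for y in [0, 1])
    return num / den

for a, b, c in itertools.product([0, 1], repeat=3):
    assert abs(cond(c, b) - cond(c, b, a)) < 1e-12  # P(C|B) == P(C|A,B)
```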

10 Paper

11 Bayesian Networks

12 Each node of this graph represents a random variable X_i in X. The parents of X_i are denoted by pa(X_i); the children of X_i are denoted by ch(X_i). The parents of children of X_i that are not children themselves are denoted by spo(X_i) – these are the “spouses” of X_i in the polygamous society of Bayesian networks.

13 Bayesian Networks The semantics of a Bayesian network is determined by the Markov condition: every variable is independent of its non-descendant non-parents given its parents. This condition leads to a unique joint probability density.
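In this notation, the joint density factorizes as:

\[ p(X) = \prod_{X_i \in X} p(X_i \mid \mathrm{pa}(X_i)) \]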

14 Bayesian Networks Given a set of query variables X_q and evidence E, the posterior probability of X_q given E is:
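One standard way to write it, using the factorization above:

\[ p(X_q \mid E) = \frac{p(X_q, E)}{p(E)} = \frac{\sum_{X \setminus (X_q \cup E)} \prod_i p(X_i \mid \mathrm{pa}(X_i))}{\sum_{X \setminus E} \prod_i p(X_i \mid \mathrm{pa}(X_i))} \]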

15 Standard variable elimination Variable elimination is an algebraic scheme for inference in Bayesian networks. Variable and bucket elimination are essentially identical. –The basic principles of bucket manipulation have been studied extensively by Dechter.

16 Standard variable elimination Variables in X_q necessarily belong to X_R, but not all observed variables belong to X_R. –X_R: the set of requisite variables, whose probability densities must be restricted to domains containing no evidence (observed variables are fixed at their observed values).

17 Standard variable elimination Denote by N the number of requisite variables that are not observed and are not in X_q. Now, suppose that these variables are ordered in some special way, so we have an ordering {X_1, X_2, X_3, …, X_N}.

18 Standard variable elimination Because X_1 can only appear in densities p(X_j | pa(X_j)) for X_j ∈ {X_1} ∪ ch(X_1), we can move the summation for X_1 inward:
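One standard way to write the rearranged sum:

\[ \sum_{X_N} \cdots \sum_{X_2} \Bigl[ \prod_{X_j \notin \{X_1\} \cup \mathrm{ch}(X_1)} p(X_j \mid \mathrm{pa}(X_j)) \sum_{X_1} p(X_1 \mid \mathrm{pa}(X_1)) \prod_{X_j \in \mathrm{ch}(X_1)} p(X_j \mid \mathrm{pa}(X_j)) \Bigr] \]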

19 Standard variable elimination At this point, we have “used” the densities for X_j ∈ {X_1} ∪ ch(X_1). To visualize operations more clearly, we can define the following un-normalized density:
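Consistent with the name used on the next slide, that density is:

\[ p(\mathrm{ch}(X_1) \mid \mathrm{pa}(X_1), \mathrm{spo}(X_1)) = \sum_{X_1} p(X_1 \mid \mathrm{pa}(X_1)) \prod_{X_j \in \mathrm{ch}(X_1)} p(X_j \mid \mathrm{pa}(X_j)) \]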

20 Standard variable elimination Think of the various densities as living in a “pool” of densities. We collect the densities that contain X_1, take them off the pool, construct a new (un-normalized) density p(ch(X_1) | pa(X_1), spo(X_1)), and add this density to the pool.
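A minimal sketch of one such elimination step, assuming binary variables and a dict-based factor representation (all helper names here are mine, not the paper's):

```python
from itertools import product

# A factor is a pair (vars, table): 'vars' is a list of variable names and
# 'table' maps each 0/1 assignment tuple (in that order) to a number.

def multiply(f, g):
    """Pointwise product of two factors over the union of their variables."""
    fv, ft = f
    gv, gt = g
    vs = list(dict.fromkeys(fv + gv))        # ordered union of variables
    tbl = {}
    for vals in product((0, 1), repeat=len(vs)):
        a = dict(zip(vs, vals))
        tbl[vals] = ft[tuple(a[v] for v in fv)] * gt[tuple(a[v] for v in gv)]
    return (vs, tbl)

def sum_out(x, f):
    """Marginalize variable x out of factor f."""
    fv, ft = f
    vs = [v for v in fv if v != x]
    tbl = {}
    for vals, p in ft.items():
        key = tuple(val for val, name in zip(vals, fv) if name != x)
        tbl[key] = tbl.get(key, 0.0) + p
    return (vs, tbl)

def eliminate(x, pool):
    """One pool step: pull every density mentioning x off the pool, multiply
    them, sum x out, and put the resulting density back on the pool."""
    touching = [f for f in pool if x in f[0]]
    rest = [f for f in pool if x not in f[0]]
    if not touching:                          # nothing mentions x
        return pool
    prod = touching[0]
    for f in touching[1:]:
        prod = multiply(prod, f)
    return rest + [sum_out(x, prod)]
```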

21 Generalizing variable elimination –Junction tree algorithms –Bucket elimination algorithms –Updating procedure: updating buckets right above the root; updating buckets away from the root

22 JavaBayes

23 GUI

24 Edit

25 DogProblem

26 Query
p(d) = p(d ∩ b ∩ f) + p(d ∩ b^c ∩ f^c) + p(d ∩ b^c ∩ f) + p(d ∩ b ∩ f^c)
= p(d | b, f) p(f) p(b) + p(d | b^c, f^c) p(f^c) p(b^c) + p(d | b^c, f) p(f) p(b^c) + p(d | b, f^c) p(f^c) p(b)
= 0.99 * 0.15 * 0.01 + 0.30 * 0.85 * 0.99 + 0.90 * 0.15 * 0.99 + 0.97 * 0.85 * 0.01
= 0.001485 + 0.25245 + 0.13365 + 0.008245
= 0.39583
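The same sum, checked by enumeration over b and f (CPT values as on the slide):

```python
# p(d) = sum over b, f of p(d | b, f) p(b) p(f), with the slide's numbers.
p_b, p_f = 0.01, 0.15                       # p(bowel-problem), p(family-out)
p_d = {(1, 1): 0.99, (1, 0): 0.97,          # p(d | b, f), keyed by (b, f)
       (0, 1): 0.90, (0, 0): 0.30}

total = sum(p_d[b, f]
            * (p_b if b else 1 - p_b)
            * (p_f if f else 1 - p_f)
            for b in (0, 1) for f in (0, 1))
print(round(total, 5))                      # 0.39583
```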

27 Observe b = true
p(d | b) = p(d ∩ f | b) + p(d ∩ f^c | b)
= p(d | b, f) p(f) + p(d | b, f^c) p(f^c)    (f is independent of b)
= 0.99 * 0.15 + 0.97 * 0.85
= 0.1485 + 0.8245
= 0.973
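And the same conditional, by summing out f only:

```python
# With b observed true, only f is summed out: p(d | b) as on the slide.
p_f = 0.15
p_d_given_b = {1: 0.99, 0: 0.97}            # p(d | b, f), keyed by f
total = sum(p_d_given_b[f] * (p_f if f else 1 - p_f) for f in (0, 1))
print(round(total, 3))                      # 0.973
```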

28 Thank you.

