CS498-EA Reasoning in AI Lecture #15 Instructor: Eyal Amir Fall Semester 2011
Summary of last time: Inference
We presented the variable elimination (VE) algorithm:
– Specifically, VE for finding the marginal P(Xi) over one variable Xi from X1,…,Xn
– Choose an elimination order on the variables
– One variable Xj is eliminated at a time:
  (a) Move the unneeded terms (those not involving Xj) outside the summation over Xj
  (b) Create a new potential function f_Xj(·) over the other variables appearing in the terms of the summation at (a)
– Works for both BNs and MFs (Markov Fields)
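As a concrete illustration of the elimination loop above, here is a minimal Python sketch. The factor representation (a pair of a variable tuple and an assignment table), the binary domains, and all names are assumptions made for the example, not something given in the lecture.

from itertools import product

def multiply(f, g):
    """Pointwise product of two factors over binary variables."""
    fv, ft = f
    gv, gt = g
    vars_ = tuple(dict.fromkeys(fv + gv))          # union, order-preserving
    table = {}
    for assign in product([0, 1], repeat=len(vars_)):
        a = dict(zip(vars_, assign))
        table[assign] = (ft[tuple(a[v] for v in fv)] *
                         gt[tuple(a[v] for v in gv)])
    return vars_, table

def sum_out(f, x):
    """Marginalize variable x out of factor f."""
    fv, ft = f
    keep = tuple(v for v in fv if v != x)
    table = {}
    for assign, val in ft.items():
        key = tuple(a for v, a in zip(fv, assign) if v != x)
        table[key] = table.get(key, 0.0) + val
    return keep, table

def eliminate(factors, order):
    """Eliminate variables in `order`; return the factor over the rest."""
    for x in order:
        touched = [f for f in factors if x in f[0]]
        rest = [f for f in factors if x not in f[0]]
        if not touched:
            continue
        prod = touched[0]
        for f in touched[1:]:
            prod = multiply(prod, f)          # (a) gather terms involving x
        rest.append(sum_out(prod, x))         # (b) new potential f_x over the rest
        factors = rest
    result = factors[0]
    for f in factors[1:]:
        result = multiply(result, f)
    return result

# Usage on a hypothetical chain A -> B: P(B) from P(A) and P(B|A)
f_a  = (("A",), {(0,): 0.6, (1,): 0.4})
f_ba = (("A", "B"), {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8})
marg_b = eliminate([f_a, f_ba], order=["A"])   # factor over ("B",)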
Today: Treewidth methods
1. Variable elimination
2. Clique tree algorithm
3. Treewidth
Junction Tree
Why junction tree?
– Foundation for “Loopy Belief Propagation” approximate inference
– More efficient for some tasks than VE
– We can avoid cycles if we turn highly interconnected subsets of the nodes into “supernodes” (clusters)
Objective:
– Compute P(V = v | E = e), where v is a value of a variable V and e is the evidence for a set of variables E
Properties of Junction Tree
An undirected tree
Each node is a cluster (nonempty set) of variables
Running intersection property:
– Given two clusters X and Y, all clusters on the path between X and Y contain X ∩ Y
Separator sets (sepsets):
– The intersection of two adjacent clusters
Example: clusters ABD, ADE, DEF, with sepsets AD (between ABD and ADE) and DE (between ADE and DEF)
Potentials
Potentials:
– A potential φ_X is a function mapping each instantiation of a set of variables X to a nonnegative real number
Marginalization:
– φ_X = Σ_{Y\X} φ_Y, the marginalization of φ_Y into X (for X ⊆ Y)
Multiplication:
– φ_{X∪Y} = φ_X · φ_Y, the multiplication of φ_X and φ_Y
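A small numpy sketch of these two operations, using hypothetical binary variables A and B and made-up numbers (nothing here is from the slides):

import numpy as np

phi_AB = np.array([[0.3, 0.7],     # phi(A, B): rows index A, columns index B
                   [0.6, 0.4]])

# Marginalization of phi(A, B) into A: sum out B.
phi_A = phi_AB.sum(axis=1)

# Multiplication of phi(A) and phi(A, B): broadcast phi(A) over the B axis.
phi_AB_new = phi_A[:, None] * phi_AB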
Properties of Junction Tree
Belief potentials:
– Map each instantiation of a cluster or sepset into a real number
Constraints:
– Consistency: for each cluster C and neighboring sepset S, Σ_{C\S} φ_C = φ_S
– The joint distribution: P(U) = Π_clusters φ_C / Π_sepsets φ_S
Properties of Junction Tree
If a junction tree satisfies these properties, it follows that:
– For each cluster (or sepset) C, φ_C = P(C)
– The probability distribution of any variable X can be computed from any cluster (or sepset) C that contains X: P(X) = Σ_{C\{X}} φ_C
Building Junction Trees
DAG → Moral Graph → Triangulated Graph → Identify Cliques → Junction Tree
Constructing the Moral Graph
(Example DAG with nodes A, B, C, D, E, F, G, H)
Constructing the Moral Graph
Add undirected edges between all co-parents that are not currently joined
– “Marrying” the parents
(Same example graph, nodes A–H)
Constructing the Moral Graph
Add undirected edges between all co-parents that are not currently joined
– “Marrying” the parents
Drop the directions of the arcs
(Same example graph, nodes A–H, now undirected)
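A minimal Python sketch of moralization. The DAG is assumed to be given as a dict mapping each node to its list of parents; the parent lists below are illustrative only and are not claimed to match the slide's figure.

from itertools import combinations

dag = {"A": [], "B": ["A"], "C": ["A"], "D": ["A", "B"],
       "E": ["C", "D"], "F": ["D"], "G": ["C", "E"], "H": ["E", "G"]}

moral = {v: set() for v in dag}
for child, parents in dag.items():
    for p in parents:                       # drop arc directions
        moral[child].add(p)
        moral[p].add(child)
    for p, q in combinations(parents, 2):   # marry co-parents
        moral[p].add(q)
        moral[q].add(p)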
Triangulating
An undirected graph is triangulated iff every cycle of length > 3 contains an edge that connects two nonadjacent nodes of the cycle
(Triangulated version of the example graph on nodes A–H)
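One simple way to triangulate is to simulate node elimination and add fill-in edges between the not-yet-adjacent neighbours of each eliminated node. A sketch, assuming a graph represented as a dict of adjacency sets (for instance the moral dict from the previous sketch); the min-degree ordering is just one common heuristic:

from itertools import combinations

def triangulate(graph):
    """Return the graph plus fill-in edges (an elimination-based chordalization)."""
    g = {v: set(nbrs) for v, nbrs in graph.items()}
    remaining = set(g)
    while remaining:
        # eliminate a minimum-degree node (simple heuristic)
        v = min(remaining, key=lambda u: len(g[u] & remaining))
        nbrs = [u for u in g[v] if u in remaining]
        for a, b in combinations(nbrs, 2):   # connect v's remaining neighbours
            g[a].add(b)
            g[b].add(a)
        remaining.remove(v)
    return g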
Identifying Cliques
A clique is a subgraph of an undirected graph that is complete and maximal
Cliques of the example graph: ABD, ADE, ACE, DEF, CEG, EGH
Junction Tree
A junction tree is a subgraph of the clique graph that
– is a tree
– contains all the cliques
– satisfies the running intersection property
Example junction tree: ABD –[AD]– ADE –[AE]– ACE –[CE]– CEG –[EG]– EGH, with DEF attached to ADE through sepset DE
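The tree edges can be chosen mechanically as a maximum-weight spanning tree of the clique graph, weighting each candidate edge by the size of its separator. A small Kruskal-style Python sketch over the example's cliques:

from itertools import combinations

cliques = [{"A","B","D"}, {"A","D","E"}, {"A","C","E"},
           {"D","E","F"}, {"C","E","G"}, {"E","G","H"}]

# candidate edges, heaviest separators first
edges = sorted(((len(c1 & c2), i, j)
                for (i, c1), (j, c2) in combinations(enumerate(cliques), 2)
                if c1 & c2), reverse=True)

parent = list(range(len(cliques)))            # union-find for Kruskal
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

tree = []
for w, i, j in edges:                         # greedily add heaviest sepsets
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        tree.append((cliques[i], cliques[j], cliques[i] & cliques[j]))

Running this on the example's cliques recovers the tree on the slide, since the five separators of size 2 already connect all six cliques.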
Principle of Inference
DAG → Junction Tree → Inconsistent Junction Tree (after initialization) → Consistent Junction Tree (after propagation) → Marginalization
Example: Create Join Tree
HMM with 2 time steps: hidden states X1 → X2, with observations Y1 (from X1) and Y2 (from X2)
Junction Tree: (X1,Y1) –[X1]– (X1,X2) –[X2]– (X2,Y2)
Example: Initialization
Variable   Associated cluster   Potential function
X1         (X1,Y1)              P(X1)
Y1         (X1,Y1)              P(Y1|X1)
X2         (X1,X2)              P(X2|X1)
Y2         (X2,Y2)              P(Y2|X2)
After initialization: φ(X1,Y1) = P(X1)P(Y1|X1), φ(X1,X2) = P(X2|X1), φ(X2,Y2) = P(Y2|X2); sepset potentials start at 1
Junction Tree: (X1,Y1) –[X1]– (X1,X2) –[X2]– (X2,Y2)
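A numpy sketch of this initialization for the 2-step HMM. The CPT numbers are invented for illustration (the slides do not give values); each CPT is multiplied into one cluster that contains its family, and sepset potentials start at 1.

import numpy as np

p_x1    = np.array([0.6, 0.4])                    # P(X1)
p_x2_x1 = np.array([[0.7, 0.3], [0.2, 0.8]])      # P(X2 | X1), rows = X1
p_y_x   = np.array([[0.9, 0.1], [0.3, 0.7]])      # P(Y | X),  rows = X

phi_x1y1 = p_x1[:, None] * p_y_x                  # cluster (X1, Y1)
phi_x1x2 = p_x2_x1.copy()                         # cluster (X1, X2)
phi_x2y2 = p_y_x.copy()                           # cluster (X2, Y2)
sep_x1 = np.ones(2)                               # sepset X1
sep_x2 = np.ones(2)                               # sepset X2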
Example: Collect Evidence
Choose an arbitrary clique, e.g. (X1,X2), at which all potential functions will be collected. Recursively call the neighboring cliques for messages:
1. Call (X1,Y1):
– 1. Projection: φ_X1^new = Σ_Y1 φ(X1,Y1)
– 2. Absorption: φ(X1,X2) ← φ(X1,X2) · φ_X1^new / φ_X1^old  (φ_X1^old is the previous sepset potential)
Example: Collect Evidence (cont.)
2. Call (X2,Y2):
– 1. Projection: φ_X2^new = Σ_Y2 φ(X2,Y2)
– 2. Absorption: φ(X1,X2) ← φ(X1,X2) · φ_X2^new / φ_X2^old
Junction Tree: (X1,Y1) –[X1]– (X1,X2) –[X2]– (X2,Y2)
Example: Distribute Evidence
Pass messages recursively to the neighboring nodes
Pass message from (X1,X2) to (X1,Y1):
– 1. Projection: φ_X1^new = Σ_X2 φ(X1,X2)
– 2. Absorption: φ(X1,Y1) ← φ(X1,Y1) · φ_X1^new / φ_X1^old
Example: Distribute Evidence (cont.)
Pass message from (X1,X2) to (X2,Y2):
– 1. Projection: φ_X2^new = Σ_X1 φ(X1,X2)
– 2. Absorption: φ(X2,Y2) ← φ(X2,Y2) · φ_X2^new / φ_X2^old
Junction Tree: (X1,Y1) –[X1]– (X1,X2) –[X2]– (X2,Y2)
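Putting the two passes together, here is a sketch of the full propagation on the potentials from the initialization sketch above, with (X1,X2) as the root and no evidence entered:

# Collect evidence: messages flow from the leaves to (X1, X2).
new_x1 = phi_x1y1.sum(axis=1)                    # projection onto sepset X1
phi_x1x2 *= (new_x1 / sep_x1)[:, None]           # absorption into (X1, X2)
sep_x1 = new_x1

new_x2 = phi_x2y2.sum(axis=1)                    # projection onto sepset X2
phi_x1x2 *= (new_x2 / sep_x2)[None, :]           # absorption into (X1, X2)
sep_x2 = new_x2

# Distribute evidence: messages flow from (X1, X2) back to the leaves.
new_x1 = phi_x1x2.sum(axis=1)                    # projection onto sepset X1
phi_x1y1 *= (new_x1 / sep_x1)[:, None]           # absorption into (X1, Y1)
sep_x1 = new_x1

new_x2 = phi_x1x2.sum(axis=0)                    # projection onto sepset X2
phi_x2y2 *= (new_x2 / sep_x2)[:, None]           # absorption into (X2, Y2)
sep_x2 = new_x2

After both passes each cluster potential holds the joint marginal over its variables (e.g. phi_x1x2 = P(X1,X2)) and each sepset potential holds the marginal over the sepset.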
Example: Inference with evidence
Assume we want to compute P(X2 | Y1=0, Y2=1) (state estimation)
Assign likelihoods to the potential functions during initialization: multiply the cluster containing each observed variable by its likelihood, e.g. φ(X1,Y1) = P(X1)P(Y1|X1)·δ(Y1=0) and φ(X2,Y2) = P(Y2|X2)·δ(Y2=1)
Example: Inference with evidence (cont.)
Repeating the same steps as in the previous case, we obtain φ(X1,X2) ∝ P(X1, X2, Y1=0, Y2=1); normalizing and marginalizing out X1 gives P(X2 | Y1=0, Y2=1)
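Continuing the same toy numbers, a sketch of the evidence case: the likelihoods of Y1 = 0 and Y2 = 1 are entered by zeroing the inconsistent entries before propagation, and the answer is read off the root after normalization (variable names reuse the earlier sketches):

phi_x1y1 = p_x1[:, None] * p_y_x * np.array([1.0, 0.0])   # keep only Y1 = 0
phi_x2y2 = p_y_x * np.array([0.0, 1.0])                    # keep only Y2 = 1
phi_x1x2 = p_x2_x1.copy()
sep_x1, sep_x2 = np.ones(2), np.ones(2)

# ... run the collect/distribute passes exactly as in the sketch above ...

# phi_x1x2 now holds P(X1, X2, Y1=0, Y2=1); normalize and marginalize out X1:
# p_x2_given_evidence = phi_x1x2.sum(axis=0) / phi_x1x2.sum()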
Next Time Learning BNs and MFs
THE END
Example: Naïve Bayesian Model
A common model in early diagnosis:
– Symptoms are conditionally independent given the disease (or fault)
Thus, if
– X1,…,Xp denote the symptoms exhibited by the patient (headache, high fever, etc.) and
– H denotes the hypothesis about the patient's health,
then P(X1,…,Xp,H) = P(H)P(X1|H)…P(Xp|H).
This naïve Bayesian model allows a compact representation
– It does, however, embody strong independence assumptions
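A tiny numeric sketch of this factorization, with two binary symptoms and made-up CPTs, computing the posterior over H by normalizing the joint:

import numpy as np

p_h    = np.array([0.9, 0.1])                        # P(H)
p_x1_h = np.array([[0.8, 0.2], [0.3, 0.7]])          # P(X1 | H), rows = H
p_x2_h = np.array([[0.9, 0.1], [0.4, 0.6]])          # P(X2 | H), rows = H

x1, x2 = 1, 0                                        # observed symptoms
joint = p_h * p_x1_h[:, x1] * p_x2_h[:, x2]          # P(H) P(x1|H) P(x2|H)
posterior = joint / joint.sum()                      # P(H | x1, x2)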
Elimination on Trees
Formally, for any tree there is an elimination ordering with induced width = 1
Theorem: Inference on trees is linear in the number of variables