Marginalization

Suppose you have some joint probability P(x, y), involving observations y and hidden states x. Suppose you're at x_1, and you want to find the marginal probability there, given the observations. Normally, you would have to compute:

P(x_1) = \sum_{x_2} \sum_{x_3} \cdots P(x_1, x_2, x_3, \ldots, y)

For N other hidden nodes, each of M states, that will take M^N additions.
Special case: Markov network

But suppose the joint probability has a special structure, shown by this Markov network:

[Figure: chain of hidden nodes x_1 - x_2 - x_3, each with its own observation y_1, y_2, y_3]

Then this sum:

P(x_1) = \sum_{x_2, x_3} P(x_1, x_2, x_3, y)

can be computed with N M^2 additions, as follows…
Derivation of belief propagation

P(x_1) = \sum_{x_2} \sum_{x_3} P(x_1, x_2, x_3, y_1, y_2, y_3)
The posterior factorizes

P(x_1) = \sum_{x_2} \sum_{x_3} P(x_1, x_2, x_3, y_1, y_2, y_3)
       = \sum_{x_2} \sum_{x_3} \Phi(x_1, y_1)\, \Psi(x_1, x_2)\, \Phi(x_2, y_2)\, \Psi(x_2, x_3)\, \Phi(x_3, y_3)
       = \Phi(x_1, y_1) \sum_{x_2} \Psi(x_1, x_2)\, \Phi(x_2, y_2) \sum_{x_3} \Psi(x_2, x_3)\, \Phi(x_3, y_3)

The MMSE estimate is the mean of x_1 under this marginal:

\hat{x}_1^{\mathrm{MMSE}} = \mathrm{mean}_{x_1} \Big[ \Phi(x_1, y_1) \sum_{x_2} \Psi(x_1, x_2)\, \Phi(x_2, y_2) \sum_{x_3} \Psi(x_2, x_3)\, \Phi(x_3, y_3) \Big]
Propagation rules

P(x_1) = \sum_{x_2} \sum_{x_3} P(x_1, x_2, x_3, y_1, y_2, y_3)
       = \sum_{x_2} \sum_{x_3} \Phi(x_1, y_1)\, \Psi(x_1, x_2)\, \Phi(x_2, y_2)\, \Psi(x_2, x_3)\, \Phi(x_3, y_3)
       = \Phi(x_1, y_1) \sum_{x_2} \Psi(x_1, x_2)\, \Phi(x_2, y_2) \sum_{x_3} \Psi(x_2, x_3)\, \Phi(x_3, y_3)
Propagation rules

P(x_1) = \Phi(x_1, y_1) \sum_{x_2} \Psi(x_1, x_2)\, \Phi(x_2, y_2) \sum_{x_3} \Psi(x_2, x_3)\, \Phi(x_3, y_3)

Each nested sum can be computed locally and passed along the chain: the sum over x_3 produces a message to node 2, which feeds the sum over x_2 to produce a message to node 1.
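The nesting of the sums above can be checked numerically. Here is a minimal NumPy sketch (the random positive tables standing in for Φ and Ψ, and all variable names, are illustrative assumptions, not from the slides) comparing the brute-force triple sum with the message-passing evaluation:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4  # states per hidden node (an assumption for illustration)

# Hypothetical local evidence phi[i](x_i) = Phi(x_i, y_i) and pairwise
# compatibilities psi(x_i, x_j) = Psi(x_i, x_j), random positive entries.
phi = [rng.random(M) for _ in range(3)]
psi12 = rng.random((M, M))   # Psi(x1, x2)
psi23 = rng.random((M, M))   # Psi(x2, x3)

# Brute force: sum the full joint over x2 and x3 for each value of x1.
brute = np.zeros(M)
for x1 in range(M):
    for x2 in range(M):
        for x3 in range(M):
            brute[x1] += (phi[0][x1] * psi12[x1, x2] * phi[1][x2]
                          * psi23[x2, x3] * phi[2][x3])

# Message passing: push each sum inside, innermost first.
m3to2 = psi23 @ phi[2]            # sum_x3 Psi(x2,x3) Phi(x3,y3)
m2to1 = psi12 @ (phi[1] * m3to2)  # sum_x2 Psi(x1,x2) Phi(x2,y2) m_{32}(x2)
bp = phi[0] * m2to1               # unnormalized P(x1, y)

assert np.allclose(brute, bp)
```

The brute-force loop costs O(M^3) multiplications here; the message-passing version costs O(M^2) per link, which is the N M^2 scaling claimed earlier.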
Belief and message update rules

The belief at node i is the product of the local evidence with all incoming messages:

b_i(x_i) \propto \Phi(x_i, y_i) \prod_{j \in N(i)} m_{ji}(x_i)

The message from node j to node i sums out x_j, gathering j's local evidence and the messages from j's other neighbors:

m_{ji}(x_i) = \sum_{x_j} \Psi(x_i, x_j)\, \Phi(x_j, y_j) \prod_{k \in N(j) \setminus i} m_{kj}(x_j)
Belief propagation updates

In matrix-vector form, each message update is a matrix multiply of the compatibility matrix with the elementwise (.*) product of the local evidence vector and the other incoming messages:

m_{ji} = \Psi_{ij} \, ( \Phi_j .* m_{kj} .* m_{lj} )
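As a sketch of these update rules (the function names and the random tables in the check below are assumptions for illustration), the matrix form translates directly into a few lines of NumPy:

```python
import numpy as np

def message(psi_ij, phi_j, incoming):
    """m_{ji}(x_i): matrix multiply of the compatibility matrix with the
    elementwise (.*) product of node j's evidence and its other incoming
    messages."""
    prod = phi_j.copy()
    for m in incoming:
        prod *= m
    return psi_ij @ prod

def belief(phi_i, incoming):
    """b_i(x_i): local evidence times all incoming messages, normalized."""
    b = phi_i.copy()
    for m in incoming:
        b *= m
    return b / b.sum()
```

On a two-node chain, `belief(phi1, [message(psi, phi2, [])])` reproduces the exact marginal obtained by summing the joint over x_2.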
Simple example For the 3-node example, worked out in detail, see Sections 2.0, 2.1 of:
Optimal solution in a chain or tree: Belief Propagation

A "do the right thing" Bayesian algorithm. For Gaussian random variables over time, it is the Kalman filter. For hidden Markov models, it is the forward/backward algorithm (and the MAP variant is the Viterbi algorithm).
Other loss functions The above rules let you compute the marginal probability at a node. From that, you can compute the mean estimate. But you can also use a related algorithm to compute the MAP estimate for x1.
MAP estimate for a chain or a tree

\hat{x}_1^{\mathrm{MAP}} = \arg\max_{x_1} \max_{x_2} \max_{x_3} P(x_1, x_2, x_3, y_1, y_2, y_3)
The posterior factorizes

\max_{x_2} \max_{x_3} P(x_1, x_2, x_3, y_1, y_2, y_3)
= \max_{x_2} \max_{x_3} \Phi(x_1, y_1)\, \Psi(x_1, x_2)\, \Phi(x_2, y_2)\, \Psi(x_2, x_3)\, \Phi(x_3, y_3)
= \Phi(x_1, y_1) \max_{x_2} \Psi(x_1, x_2)\, \Phi(x_2, y_2) \max_{x_3} \Psi(x_2, x_3)\, \Phi(x_3, y_3)
Propagation rules

The same propagation rules apply, with each sum replaced by a max:

\max_{x_2, x_3} P(x_1, x_2, x_3, y) = \Phi(x_1, y_1) \max_{x_2} \Psi(x_1, x_2)\, \Phi(x_2, y_2) \max_{x_3} \Psi(x_2, x_3)\, \Phi(x_3, y_3)
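Under the same illustrative assumptions as in the marginalization sketch (random positive Φ and Ψ tables, invented variable names), the max-product recursion can be checked against exhaustive maximization of the joint:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 4  # states per hidden node (an assumption for illustration)
phi = [rng.random(M) for _ in range(3)]
psi12, psi23 = rng.random((M, M)), rng.random((M, M))

# Max-product messages: the same recursion as before, with sum -> max.
m3to2 = (psi23 * phi[2][None, :]).max(axis=1)            # max over x3
m2to1 = (psi12 * (phi[1] * m3to2)[None, :]).max(axis=1)  # max over x2
x1_map = int(np.argmax(phi[0] * m2to1))

# Check against brute-force maximization over all (x1, x2, x3).
joint = (phi[0][:, None, None] * psi12[:, :, None] * phi[1][None, :, None]
         * psi23[None, :, :] * phi[2][None, None, :])
assert x1_map == int(np.unravel_index(joint.argmax(), joint.shape)[0])
```

The max distributes over the product because every factor is nonnegative, which is exactly what licenses pushing each max inside in the factorization above.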
Using conditional probabilities instead of compatibility functions

By Bayes' rule:

P(x_1, x_2, x_3, y_1, y_2, y_3) = P(x_1)\, P(x_2, x_3, y_1, y_2, y_3 \mid x_1)
Writing it as a factorization

By the fact that conditioning on x_1 makes y_1 independent of x_2, x_3, y_2, y_3:

P(x_1, x_2, x_3, y_1, y_2, y_3) = P(x_1)\, P(y_1 \mid x_1)\, P(x_2, x_3, y_2, y_3 \mid x_1)
Writing it as a factorization

Now use Bayes' rule (with x_2) for the rightmost term:

= P(x_1)\, P(y_1 \mid x_1)\, P(x_2 \mid x_1)\, P(x_3, y_2, y_3 \mid x_1, x_2)
Writing it as a factorization

From the Markov structure, conditioning on x_1 and x_2 is the same as conditioning on x_2 alone:

= P(x_1)\, P(y_1 \mid x_1)\, P(x_2 \mid x_1)\, P(x_3, y_2, y_3 \mid x_2)
Writing it as a factorization

Conditioning on x_2 makes y_2 independent of x_3 and y_3:

= P(x_1)\, P(y_1 \mid x_1)\, P(x_2 \mid x_1)\, P(y_2 \mid x_2)\, P(x_3, y_3 \mid x_2)
Writing it as a factorization

The same operations, once more, with the far-right term:

= P(x_1)\, P(y_1 \mid x_1)\, P(x_2 \mid x_1)\, P(y_2 \mid x_2)\, P(x_3 \mid x_2)\, P(y_3 \mid x_3)
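One payoff of the conditional-probability form is that the product of factors is already a normalized distribution, with no global normalization needed. A small numerical check (the random conditional tables and helper name below are invented for illustration):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
M = 2  # states per variable in a tiny random chain (assumed for illustration)

def cond(shape):
    """Random conditional table; each column is a distribution over rows."""
    t = rng.random(shape)
    return t / t.sum(axis=0, keepdims=True)

p_x1 = cond((M, 1))[:, 0]                           # P(x1)
p_y1, p_y2, p_y3 = (cond((M, M)) for _ in range(3)) # P(yi | xi)
p_x2, p_x3 = cond((M, M)), cond((M, M))             # P(x2|x1), P(x3|x2)

# P(x1,x2,x3,y1,y2,y3) = P(x1) P(y1|x1) P(x2|x1) P(y2|x2) P(x3|x2) P(y3|x3)
total = 0.0
for x1, x2, x3, y1, y2, y3 in product(range(M), repeat=6):
    total += (p_x1[x1] * p_y1[y1, x1] * p_x2[x2, x1]
              * p_y2[y2, x2] * p_x3[x3, x2] * p_y3[y3, x3])

assert np.isclose(total, 1.0)
```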
A toy problem

10 nodes, 2 states for each node, with local evidence as shown below.

[Figure: local evidence at each of the 10 nodes]
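The toy problem can be sketched in a few lines of NumPy (the evidence and compatibility values are random stand-ins, since the slide's figure is not reproduced here): belief propagation on a 10-node, 2-state chain, verified against brute-force marginalization over all 2^10 configurations.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
N, M = 10, 2  # 10 nodes with 2 states each, as in the toy problem

phi = rng.random((N, M))         # hypothetical local evidence at each node
psi = rng.random((N - 1, M, M))  # psi[i] couples node i and node i+1

# One forward and one backward sweep of messages along the chain.
fwd = [np.ones(M) for _ in range(N)]  # fwd[i]: message into node i from the left
for i in range(1, N):
    fwd[i] = psi[i - 1].T @ (phi[i - 1] * fwd[i - 1])
bwd = [np.ones(M) for _ in range(N)]  # bwd[i]: message into node i from the right
for i in range(N - 2, -1, -1):
    bwd[i] = psi[i] @ (phi[i + 1] * bwd[i + 1])

beliefs = phi * np.array(fwd) * np.array(bwd)
beliefs /= beliefs.sum(axis=1, keepdims=True)

# Sanity check: brute-force marginals over all 2^10 joint configurations.
marg = np.zeros((N, M))
for config in product(range(M), repeat=N):
    p = np.prod([phi[i, config[i]] for i in range(N)])
    p *= np.prod([psi[i, config[i], config[i + 1]] for i in range(N - 1)])
    for i in range(N):
        marg[i, config[i]] += p
marg /= marg.sum(axis=1, keepdims=True)
assert np.allclose(beliefs, marg)
```

Two sweeps (2N message updates of cost M^2 each) replace the 2^10-term enumeration, which is the chain-structure speedup the earlier slides derive.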
Classic 1976 paper
Relaxation labelling
Belief propagation vs. relaxation labelling
Yair’s motion example
Yair’s figure/ground example