Introduction of BP & TRW-S


1 Introduction of BP & TRW-S
Bui Dang Ha Phuong, NCCA, BU, 31/03/2017

2 Belief propagation (BP)

3 BP on a tree [Pearl'88]
BP: inward pass (dynamic programming), then outward pass; together they give min-marginals. (Figure: a tree with leaf nodes, a root, and intermediate nodes q and r.)

4-7 Inward pass (dynamic programming)
(Slides 4-7 animate the inward pass on the example tree: each node, such as q and r, collects messages from its children and passes the result toward the root.)

8 Outward pass
(Messages are then sent back from the root through p, q and r toward the leaves.)

9 BP on a tree: min-marginals
Min-marginal for node q and label j: M_q(j) = min over all labelings x with x_q = j of E(x). (Figure: the same tree with nodes q and r.)
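The inward/outward passes above can be sketched on a chain, the simplest tree. This is a minimal illustration, not code from the slides; `min_sum_chain` and the cost tables are made-up names and numbers:

```python
def min_sum_chain(unary, pairwise):
    """Min-sum BP on a chain: exact min-marginals for every node.

    unary[p][i]       -- cost of giving node p the label i
    pairwise[p][i][j] -- cost of labels (i, j) on the edge (p, p+1)
    """
    n, k = len(unary), len(unary[0])
    fwd = [[0.0] * k for _ in range(n)]  # inward messages (left to right)
    bwd = [[0.0] * k for _ in range(n)]  # outward messages (right to left)
    for p in range(1, n):                # inward pass: dynamic programming
        fwd[p] = [min(fwd[p - 1][i] + unary[p - 1][i] + pairwise[p - 1][i][j]
                      for i in range(k)) for j in range(k)]
    for p in range(n - 2, -1, -1):       # outward pass
        bwd[p] = [min(bwd[p + 1][j] + unary[p + 1][j] + pairwise[p][i][j]
                      for j in range(k)) for i in range(k)]
    # min-marginal of node p, label j: min over labelings x with x_p = j
    return [[unary[p][j] + fwd[p][j] + bwd[p][j] for j in range(k)]
            for p in range(n)]

# tiny example: 3 nodes, 2 labels, a disagreement cost of 1 per edge
unary = [[0, 2], [1, 0], [3, 1]]
edge = [[0, 1], [1, 0]]
mm = min_sum_chain(unary, [edge, edge])
```

Every row of `mm` has the same minimum, namely the optimal energy of the whole chain; that is the min-marginal property the slide refers to.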

10 BP in a general graph
Pass messages using the same rules. Empirically this often works quite well, but it may not converge, and it yields only "pseudo" min-marginals. It gives a local minimum in the "tree neighborhood" [Weiss & Freeman'01], [Wainwright et al.'04], under two assumptions: BP has converged, and there are no ties in the pseudo min-marginals.
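The "same rules on a general graph" idea can be sketched as a parallel ("flooding") min-sum schedule. A minimal sketch under stated assumptions: `loopy_min_sum`, the normalization step, and the toy graph are all illustrative, not from the slides:

```python
def loopy_min_sum(unary, edges, pairwise, iters=50):
    """Parallel ("flooding") min-sum BP on a general graph.

    On loopy graphs this may not converge, and the returned beliefs are
    only pseudo min-marginals; on a tree they match the exact
    min-marginals up to a per-node constant.
    """
    k = len(next(iter(unary.values())))
    msgs = {}
    nbrs = {p: [] for p in unary}
    for p, q in edges:
        nbrs[p].append(q)
        nbrs[q].append(p)
        msgs[(p, q)] = [0.0] * k
        msgs[(q, p)] = [0.0] * k
    for _ in range(iters):
        new = {}
        for p, q in msgs:
            # pairwise cost theta[i][j] for (x_p = i, x_q = j)
            if (p, q) in pairwise:
                theta = pairwise[(p, q)]
            else:
                theta = [[pairwise[(q, p)][j][i] for j in range(k)]
                         for i in range(k)]
            # everything p knows, except what q itself sent
            h = [unary[p][i] + sum(msgs[(r, p)][i] for r in nbrs[p] if r != q)
                 for i in range(k)]
            raw = [min(h[i] + theta[i][j] for i in range(k)) for j in range(k)]
            lo = min(raw)
            new[(p, q)] = [v - lo for v in raw]  # normalize for stability
        msgs = new
    # beliefs = pseudo min-marginals
    return {p: [unary[p][i] + sum(msgs[(r, p)][i] for r in nbrs[p])
                for i in range(k)] for p in unary}

# toy chain (a tree), so the beliefs reproduce the exact argmin labels
unary = {0: [0, 2], 1: [1, 0], 2: [3, 1]}
pw = [[0, 1], [1, 0]]
beliefs = loopy_min_sum(unary, [(0, 1), (1, 2)], {(0, 1): pw, (1, 2): pw})
```

Taking the argmin of each belief gives a labeling; on loopy graphs this is only the local minimum in the "tree neighborhood" mentioned above.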

11 Reparameterization

12 Energy function - visualization

13 Reparameterization
Definition. θ' is a reparameterization of θ if they define the same energy: E(x | θ') = E(x | θ) for all labelings x. Maxflow, BP and TRW all perform reparameterizations.
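The definition can be checked directly on a toy two-node energy. This is an illustrative sketch (the numbers and the "message" `m` are made up): subtracting a message from a pairwise term and adding it to a unary term changes the parameters but not the energy of any labeling.

```python
import itertools

# toy 2-node energy  E(x) = u0[x0] + u1[x1] + pw[x0][x1]
u0, u1 = [0.0, 2.0], [1.0, 0.0]
pw = [[0.0, 3.0], [2.0, 0.0]]

def energy(u0, u1, pw, x):
    return u0[x[0]] + u1[x[1]] + pw[x[0]][x[1]]

# reparameterize: subtract a "message" m[j] from column j of the
# pairwise term and add the same m[j] to the unary potential of node 1
m = [0.5, -1.0]
u1_new = [u1[j] + m[j] for j in range(2)]
pw_new = [[pw[i][j] - m[j] for j in range(2)] for i in range(2)]

# the two parameter vectors define the same energy on every labeling
for x in itertools.product(range(2), repeat=2):
    assert energy(u0, u1, pw, x) == energy(u0, u1_new, pw_new, x)
```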

14 BP as reparameterization [Wainwright et al.'04]
The messages define a reparameterization. On a tree, BP reparameterizes the energy so that the unary potentials become the min-marginals; every iteration is itself a reparameterization.

15 Belief Propagation on Trees
Forward pass: leaf → root. Backward pass: root → leaf. Afterwards all min-marginals are computed. (Figure: a tree with nodes Va, Vb, Vc, Vd, Ve, Vg, Vh.)

16 Tree-reweighted message passing (TRW)

17 TRW algorithms
Two reparameterization operations: ordinary BP on trees, and node averaging.

18-19 TRW algorithms
(Slides 18-19 illustrate node averaging on a numeric example with weights 0.5 and 0.5.)

20 TRW Message Passing
Repeat: (1) choose a random node Va; (2) reparameterize to obtain the min-marginals of Va; (3) compute the node-average of the min-marginals of Va. Edge-averaging can also be done.
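The node-averaging step can be sketched as follows; the numbers are made up for illustration (echoing the 0.5/0.5 weights on the earlier slides), not taken from the deck:

```python
# node p appears in two trees, each holding its own copy of p's unary
# potential; averaging with weights 0.5 / 0.5 replaces both copies by
# their mean (illustrative numbers)
theta_tree1 = [4.0, 0.0]
theta_tree2 = [0.0, 2.0]
avg = [0.5 * a + 0.5 * b for a, b in zip(theta_tree1, theta_tree2)]
theta_tree1_new = list(avg)
theta_tree2_new = list(avg)

# the sum over trees -- and hence the total energy -- is unchanged,
# so node averaging is itself a reparameterization
total_before = [a + b for a, b in zip(theta_tree1, theta_tree2)]
total_after = [a + b for a, b in zip(theta_tree1_new, theta_tree2_new)]
assert total_before == total_after
```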

21 TRW-S algorithm
[Kolmogorov'05]: a specific sequential schedule (TRW-S). The lower bound does not decrease, which gives convergence guarantees, and the method needs only half the memory.
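The lower bound in question comes from a tree decomposition: splitting the energy across trees and minimizing each tree separately can only underestimate the true minimum, and TRW-S reparameterizes to push this bound up. A sketch with made-up numbers (`chain_min`, the 4-cycle, and the half/half unary split are all illustrative assumptions):

```python
import itertools

def chain_min(unary, pw):
    """Exact minimum of a small chain energy, by brute force."""
    n, k = len(unary), len(unary[0])
    return min(sum(unary[p][x[p]] for p in range(n))
               + sum(pw[p][x[p]][x[p + 1]] for p in range(n - 1))
               for x in itertools.product(range(k), repeat=n))

# a 4-cycle with unary costs u and the same pairwise matrix P on every edge
u = [[0, 3], [2, 0], [1, 1], [0, 2]]
P = [[0, 2], [2, 0]]
true_min = min(sum(u[p][x[p]] for p in range(4))
               + sum(P[x[p]][x[(p + 1) % 4]] for p in range(4))
               for x in itertools.product(range(2), repeat=4))

# decompose the cycle into chains A: 0-1-2 and B: 2-3-0; the shared
# nodes 0 and 2 give half their unary potential to each chain
half = lambda v: [c / 2 for c in v]
lower_bound = (chain_min([half(u[0]), u[1], half(u[2])], [P, P])
               + chain_min([half(u[2]), u[3], half(u[0])], [P, P]))
assert lower_bound <= true_min   # min(E_A + E_B) >= min(E_A) + min(E_B)
```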

22 TRW-S algorithm
Pick a node p; run BP on all trees containing p; "average" node p.

23 TRW-S algorithm
A particular order of averaging and BP operations. The lower bound is guaranteed not to decrease; there exists a limit point that satisfies the weak tree agreement condition; and the schedule is efficient.

24 Convex optimization
Objective function: a sum of unary and pairwise potential terms, where b denotes σ (sigma) and τ denotes λ (lambda); both are global parameters. (The formula itself was not transcribed from the slide.)

25 Convex optimization
We use TRW-S or BP for optimization. The differences: TRW-S computes the lower bound during the backward pass, and its stopping criterion checks convergence of the lower bound; since the lower bound is guaranteed not to decrease, this yields convergence guarantees. After applying TRW-S or BP, we obtain the mapping l.

26 Comparison between BP and TRW-S
BP: exact on trees, so it can be applied exactly only to tree-structured (sub)graphs. Messages are passed around the graph in parallel; on a tree it is a two-pass algorithm (forward pass and backward pass). Drawbacks: it usually finds a solution with higher energy than TRW-S, and it does not always converge (it can cycle).
TRW-S: updates messages in a sequential order, running BP on all trees containing the current node p. It obtains solutions with lower energy than BP, requires half as much memory, guarantees that the lower bound does not decrease, reuses previously passed messages, has a limit point satisfying weak tree agreement, and is efficient with monotonic chains.

