Introduction to BP & TRW-S


Introduction to BP & TRW-S. Bui Dang Ha Phuong (bdhphuong@cit.ctu.edu.vn), NCCA, Bournemouth University, 31/03/2017.

Belief propagation (BP)

BP on a tree [Pearl’88]
- Inward pass (dynamic programming): messages are passed from the leaves towards a chosen root.
- Outward pass: messages are passed from the root back out to the leaves.
- Together the two passes give min-marginals. (Slide figure with leaf, root and nodes q, r omitted.)

Inward pass (dynamic programming): the next slides step the messages towards the root (four animation frames with nodes q, r omitted).

Outward pass: messages are sent from the root back towards the leaves (animation frame with nodes p, q, r omitted).

BP on a tree: min-marginals. Min-marginal for node q and label j: $\Phi_q(j) = \min_{x \,:\, x_q = j} E(x)$, the lowest energy over all labelings x that assign label j to node q.
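
To make the two passes and the min-marginal definition concrete, here is a minimal min-sum BP sketch on a chain (the simplest tree). This is an illustration written for this transcript, not the presenter's code; the potentials and names are invented.

```python
import numpy as np

def chain_min_marginals(unary, pairwise):
    """Min-sum BP on a chain MRF with n nodes and k labels.

    unary:    (n, k) array, unary[i, j] = theta_i(j)
    pairwise: (k, k) array, shared potential theta(x_i, x_{i+1})
    Returns the (n, k) array of min-marginals
        M_i(j) = min over labelings x with x_i = j of E(x).
    """
    n, k = unary.shape
    fwd = np.zeros((n, k))  # fwd[i][j]: best cost of the left part given x_i = j
    bwd = np.zeros((n, k))  # bwd[i][j]: best cost of the right part given x_i = j

    # Inward pass (dynamic programming), leaves -> root (left -> right)
    for i in range(1, n):
        cost = (unary[i - 1] + fwd[i - 1])[:, None] + pairwise
        fwd[i] = cost.min(axis=0)

    # Outward pass, root -> leaves (right -> left)
    for i in range(n - 2, -1, -1):
        cost = (unary[i + 1] + bwd[i + 1])[None, :] + pairwise
        bwd[i] = cost.min(axis=1)

    return unary + fwd + bwd

# Tiny example: 4 nodes, 3 labels, Potts-like pairwise potential
rng = np.random.default_rng(0)
M = chain_min_marginals(rng.random((4, 3)), 0.5 * (1 - np.eye(3)))
print(M.argmin(axis=1))  # a MAP labeling, assuming no ties
```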

BP in a general graph
- Pass messages using the same rules.
- May not converge, but empirically it often works quite well.
- Produces “pseudo” min-marginals.
- Gives a local minimum in the “tree neighborhood” [Weiss & Freeman’01], [Wainwright et al.’04], under two assumptions: BP has converged, and there are no ties in the pseudo min-marginals.
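
A hedged sketch of the same update rule run with a parallel schedule on an arbitrary graph; everything here (names, the shared symmetric pairwise potential, the fixed iteration cap standing in for a convergence test) is invented for illustration.

```python
import numpy as np

def loopy_min_sum(unary, pairwise, edges, n_iters=50):
    """Parallel min-sum BP on a general graph: no convergence guarantee,
    and the returned beliefs are only *pseudo* min-marginals unless the
    graph is a tree. `pairwise` is one symmetric (k, k) potential shared
    by every edge (e.g. Potts)."""
    n, k = unary.shape
    directed = [(p, q) for p, q in edges] + [(q, p) for p, q in edges]
    msg = {e: np.zeros(k) for e in directed}
    for _ in range(n_iters):
        new = {}
        for p, q in directed:
            # All messages into p except the one coming back from q
            incoming = unary[p].copy()
            for r, s in directed:
                if s == p and r != q:
                    incoming += msg[(r, s)]
            m = (incoming[:, None] + pairwise).min(axis=0)
            new[(p, q)] = m - m.min()  # shift for numerical stability
        msg = new
    beliefs = unary.copy()
    for p, q in directed:
        beliefs[q] += msg[(p, q)]
    return beliefs

# 3-node cycle: the smallest loopy example
rng = np.random.default_rng(1)
b = loopy_min_sum(rng.random((3, 2)), 0.3 * (1 - np.eye(2)),
                  [(0, 1), (1, 2), (2, 0)])
print(b.argmin(axis=1))  # labeling read off the pseudo min-marginals
```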

Reparameterization

Energy function: visualization (slide figure omitted).
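
The visualization itself did not survive the transcript. For reference, the energy these slides operate on is the standard pairwise form from the TRW-S literature (notation assumed here, not recovered from the lost figure):

$$E(x \mid \theta) \;=\; \sum_{p \in \mathcal{V}} \theta_p(x_p) \;+\; \sum_{(p,q) \in \mathcal{E}} \theta_{pq}(x_p, x_q)$$

with unary potentials $\theta_p$, pairwise potentials $\theta_{pq}$, and graph $(\mathcal{V}, \mathcal{E})$.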

Reparameterization. Definition: $\theta'$ is a reparameterization of $\theta$ if they define the same energy, i.e. $E(x \mid \theta') = E(x \mid \theta)$ for every labeling $x$. Maxflow, BP and TRW all perform reparameterizations.
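
A minimal numeric check of this definition (values invented): moving a per-label quantity delta from an edge into a node changes the potentials but leaves $E(x \mid \theta)$ unchanged for every labeling, so the result is a valid reparameterization.

```python
import numpy as np
from itertools import product

def energy(unary, pairwise, x):
    """Energy of labeling x on a two-node, one-edge graph."""
    return unary[0][x[0]] + unary[1][x[1]] + pairwise[x[0], x[1]]

k = 3
rng = np.random.default_rng(2)
unary = [rng.random(k), rng.random(k)]
pairwise = rng.random((k, k))

# Reparameterize: push a message delta from the edge into node 0
delta = rng.random(k)
unary2 = [unary[0] + delta, unary[1]]
pairwise2 = pairwise - delta[:, None]  # subtract delta(x_0) from each row

for x in product(range(k), repeat=2):
    assert np.isclose(energy(unary, pairwise, x),
                      energy(unary2, pairwise2, x))
print("E(x) unchanged for all labelings: valid reparameterization")
```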

BP as reparameterization [Wainwright et al.’04]
- Messages define a reparameterization of the energy.
- BP on a tree: reparameterize the energy so that the unary potentials become min-marginals (exact for trees).
- Every iteration provides a reparameterization; iterating yields the min-marginals.

Belief Propagation on Trees (example tree with nodes Va, Vb, Vc, Vd, Ve, Vg, Vh; figure omitted)
- Forward pass: leaf → root.
- Backward pass: root → leaf.
- All min-marginals are computed.

Tree-reweighted message passing (TRW)

TRW algorithms. Two reparameterization operations:
- Ordinary BP on trees.
- Node averaging: the unary potentials a shared node carries in different trees are replaced by their weighted average (slide animation with example values 4, 1 and weights 0.5, 0.5 omitted); see the sketch below.
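
A minimal sketch of the node-averaging operation, with invented values: a node shared by several trees holds one unary vector per tree, and all copies are replaced by their rho-weighted average.

```python
import numpy as np

def node_average(theta_per_tree, rho):
    """Node averaging: replace a shared node's per-tree unary potentials
    by their weighted average.

    theta_per_tree: (T, k) array, row t = the node's unary vector in tree t
    rho:            (T,) tree weights summing to 1
    """
    avg = rho @ theta_per_tree          # (k,) weighted average
    return np.tile(avg, (len(rho), 1))  # every tree now holds the average

# Two trees with weight 0.5 each; a one-label node with values 4 and 1
theta = np.array([[4.0], [1.0]])
print(node_average(theta, np.array([0.5, 0.5])))  # both copies become 2.5
```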

TRW Message Passing
- Choose a random node Va.
- Reparameterize to obtain the min-marginals of Va.
- Compute the node-average of the min-marginals of Va.
- Repeat. (Edge-averaging can also be used.)

TRW-S algorithm [Kolmogorov’05]: a specific sequential schedule for TRW (hence TRW-S).
- The lower bound does not decrease, which gives convergence guarantees.
- Needs half the memory.
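
The bound referred to here is the standard tree-reweighted lower bound (as in [Wainwright et al.’04] and [Kolmogorov’05]; stated here for reference, not taken from the slides): for trees $T$ with weights $\rho_T$ and per-tree potentials $\theta^T$ satisfying $\sum_T \rho_T \theta^T = \theta$,

$$\sum_{T} \rho_T \, \min_x E(x \mid \theta^T) \;\le\; \min_x E(x \mid \theta),$$

and TRW-S performs reparameterizations that never decrease the left-hand side.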

TRW-S algorithm
- Pick node p.
- Run BP on all trees containing p.
- “Average” node p.

TRW-S algorithm: properties
- A particular order of averaging and BP operations.
- The lower bound is guaranteed not to decrease.
- There exists a limit point that satisfies the weak tree agreement condition.
- Efficiency.

Convex optimization: objective function. The slide’s formula (a unary potential term plus a pairwise potential term) did not survive the transcript; its symbols b and τ denote the global parameters σ and λ respectively.
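
Since the exact formula is lost, the following is only a plausible generic shape consistent with the description above; $U$ and $V$ are hypothetical placeholders, not the presenter’s actual definitions:

$$E(l) \;=\; \sum_{p} U_p(l_p;\, \sigma) \;+\; \lambda \sum_{(p,q) \in \mathcal{E}} V(l_p, l_q).$$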

Convex optimization. We use TRW-S or BP for the optimization. The key difference: TRW-S computes the lower bound during the backward pass, and its stopping criterion checks convergence of that lower bound; because the bound is guaranteed not to decrease, this yields convergence guarantees. After applying TRW-S or BP, we obtain the mapping l.
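
A hedged sketch of the stopping criterion just described; `step` is a hypothetical callable (one TRW-S forward+backward pass returning the current lower bound), not a real library API.

```python
def optimize_trws(step, max_iters=1000, tol=1e-6):
    """Run TRW-S passes until the lower bound converges.

    step: hypothetical callable doing one forward+backward pass and
          returning the lower bound computed during the backward pass.
    """
    prev = -float("inf")
    for it in range(max_iters):
        bound = step()
        # TRW-S guarantees the bound never decreases between passes
        assert bound >= prev - 1e-9
        if bound - prev < tol:  # bound has stalled: converged
            return it, bound
        prev = bound
    return max_iters, prev

# Toy stand-in for step(): a bound that increases and saturates
import itertools
fake_bounds = (1.0 - 0.5 ** i for i in itertools.count())
print(optimize_trws(lambda: next(fake_bounds)))
```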

Comparison between BP and TRW-S

BP:
- Exact on trees; can be applied exactly only to tree-structured (sub)graphs.
- Passes messages around the graph in parallel.
- On a tree it is a two-pass algorithm: forward pass and backward pass.
- Drawbacks: it usually finds a solution with higher energy than TRW-S, and it does not always converge (it often goes into a loop).

TRW-S:
- Updates messages in a sequential order.
- Runs BP on all trees containing the current node p.
- Obtains a solution with lower energy than BP.
- Requires half as much memory as BP.
- The lower bound is guaranteed not to decrease.
- Reuses previously passed messages.
- Has a limit point satisfying weak tree agreement.
- Efficient with monotonic chains.