Discrete Optimization in Computer Vision
Nikos Komodakis
Ecole des Ponts ParisTech, LIGM, Traitement de l'information et vision artificielle

Message-passing algorithms for energy minimization

Message-passing algorithms
Central concept: messages.
These methods work by propagating messages across the MRF graph.
Widely used in many areas.

Message-passing algorithms
But how do messages relate to optimizing the energy?
Let's look at a simple example first: the case where the MRF graph is a chain.
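To make this concrete (notation chosen here, not taken from the slide): on a chain with nodes p, q, r, s, the quantity being minimized over labelings x is the standard pairwise MRF energy

E(x) = θ_p(x_p) + θ_q(x_q) + θ_r(x_r) + θ_s(x_s) + θ_pq(x_p, x_q) + θ_qr(x_q, x_r) + θ_rs(x_r, x_s),

i.e., a sum of unary potentials over the nodes plus pairwise potentials over the edges of the chain.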

Message-passing on chains: the MRF graph

Message-passing on chains: the corresponding lattice or trellis

Message-passing on chains
Global minimum in linear time.
Optimization proceeds in two passes: a forward pass (dynamic programming) and a backward pass.

Message-passing on chains (example on board) (algebraic derivation of messages)
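As a rough companion to the board derivation, here is a minimal Python sketch of the two passes on a chain (a sketch only; the array layout and names such as unary and pairwise are my own assumptions, not part of the lecture):

import numpy as np

def chain_min_sum(unary, pairwise):
    # unary: list of n 1-D arrays; unary[i][k] = cost of giving label k to node i
    # pairwise: list of n-1 2-D arrays; pairwise[i][k, l] = cost of labels (k, l) on edge (i, i+1)
    n = len(unary)
    msg = [np.zeros_like(unary[0])] + [None] * (n - 1)  # msg[i] = forward message received by node i
    back = [None] * n                                    # back[i][l] = best label of node i-1 if node i has label l
    # Forward pass: each node passes to its right neighbor the best cost achievable so far
    for i in range(1, n):
        cost = unary[i - 1] + msg[i - 1]                 # best cost of the prefix ending at node i-1
        total = cost[:, None] + pairwise[i - 1]          # add the edge cost for every label pair
        msg[i] = total.min(axis=0)                       # minimize over the label of node i-1
        back[i] = total.argmin(axis=0)                   # remember which label achieved the minimum
    last = unary[-1] + msg[-1]
    labels = [int(last.argmin())]                        # best label of the last node
    # Backward pass: backtrack the optimal labels in reverse order
    for i in range(n - 1, 0, -1):
        labels.append(int(back[i][labels[-1]]))
    return float(last.min()), labels[::-1]               # minimum energy and one optimal labeling

The forward loop fills in one message per node from left to right; the backward loop then reads off one optimal labeling by backtracking.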

Message-passing on chains (chain with nodes p, q, r, s)

Forward pass (dynamic programming) along the chain p, q, r, s

(The next few slides animate the forward pass: the message is propagated along the chain, one node at a time, from p toward s.)

Min-marginal for node s and label j: the minimum of the energy over all labelings that assign label j to node s, i.e. μ_s(j) = min over { x : x_s = j } of E(x).
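Written out explicitly (my notation, intended to mirror the board derivation rather than reproduce it): the forward messages on the chain p, q, r, s are

M_pq(x_q) = min over x_p of [ θ_p(x_p) + θ_pq(x_p, x_q) ]
M_qr(x_r) = min over x_q of [ θ_q(x_q) + M_pq(x_q) + θ_qr(x_q, x_r) ]
M_rs(x_s) = min over x_r of [ θ_r(x_r) + M_qr(x_r) + θ_rs(x_r, x_s) ]

so that the min-marginal of the last node is simply μ_s(j) = θ_s(j) + M_rs(j).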

Backward pass: x_s, x_r, x_q, x_p (the optimal labels are recovered in reverse order)

Message-passing on chains
How can we compute min-marginals for any node in the chain?
How can we compute min-marginals for all nodes efficiently?
What is the running time of message-passing on chains?

Message-passing on trees
We can apply the same idea to tree-structured graphs: a slight generalization from chains.
The resulting algorithm is called belief propagation (also known under many other names, e.g. max-product, min-sum, etc.; for chains it is also often called the Viterbi algorithm).

Belief propagation (BP)

BP on a tree [Pearl'88]
Dynamic programming: global minimum in linear time.
BP: an inward pass (from the leaves to the root, dynamic programming) followed by an outward pass (from the root back to the leaves); gives min-marginals.
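In formula form (again my notation): during the inward pass, a node p sends to its parent q, once it has received the messages of all its children c,

M_pq(x_q) = min over x_p of [ θ_p(x_p) + θ_pq(x_p, x_q) + Σ_c M_cp(x_p) ],

and the outward pass then sends the analogous messages from the root back toward the leaves.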

Inward pass (dynamic programming)

(The next few slides animate the inward pass: messages are sent from the leaves toward the root, one edge at a time.)

Outward pass

BP on a tree: min-marginals
Min-marginal for node q and label j: μ_q(j) = min over { x : x_q = j } of E(x).

Belief propagation: message-passing on trees

min-marginals = sum of all incoming messages + unary potential
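As a rough illustration of this rule (a sketch with data structures of my own choosing, not code from the lecture), min-sum BP on a tree can be written as:

import numpy as np

def tree_min_sum_bp(nodes, edges, unary, pairwise, root):
    # nodes: list of node ids; edges: list of (parent, child) pairs of a tree rooted at `root`
    # unary: dict node -> 1-D array of unary costs
    # pairwise: dict (u, v) -> 2-D array of pairwise costs, indexed [label_u, label_v]
    children = {u: [] for u in nodes}
    parent = {root: None}
    for u, v in edges:
        children[u].append(v)
        parent[v] = u

    msg = {}  # msg[(u, v)] = message from u to v, indexed by v's label

    def neighbors(u):
        return children[u] + ([parent[u]] if parent[u] is not None else [])

    def send(u, v):
        # u's unary cost plus all messages u has received, except the one coming from v
        cost = unary[u].copy()
        for w in neighbors(u):
            if w != v:
                cost = cost + msg[(w, u)]
        pw = pairwise[(u, v)] if (u, v) in pairwise else pairwise[(v, u)].T
        msg[(u, v)] = (cost[:, None] + pw).min(axis=0)  # minimize over u's label

    def inward(u):   # post-order: process children first, then send to the parent
        for c in children[u]:
            inward(c)
        if parent[u] is not None:
            send(u, parent[u])

    def outward(u):  # pre-order: send to each child, then recurse into it
        for c in children[u]:
            send(u, c)
            outward(c)

    inward(root)
    outward(root)

    # min-marginal of a node = its unary potential + sum of all incoming messages
    return {u: unary[u] + sum(msg[(w, u)] for w in neighbors(u)) for u in nodes}

Each edge carries one message in each direction (inward pass, then outward pass), and afterwards every node's min-marginals are read off as its unary potential plus the sum of its incoming messages, exactly as stated above.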

What is the running time of message-passing for trees?

Message-passing on chains
Essentially, message passing on chains is dynamic programming.
Dynamic programming means reuse of computations.

Generalizing belief propagation
Key property: min(a+b, a+c) = a + min(b, c)
BP can be generalized to any pair of operators satisfying the above (distributive) property.
E.g., instead of (min, +) we could have:
(max, *): the resulting algorithm is called max-product. What does it compute?
(+, *): the resulting algorithm is called sum-product. What does it compute?
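For reference, the same distributive identity is what each variant relies on (standard facts, not from the slide):

(min, +):  min(a + b, a + c) = a + min(b, c)   ->  min-sum (energies, min-marginals)
(max, *):  max(a*b, a*c) = a * max(b, c)       ->  max-product (most probable, i.e. MAP, configuration)
(+, *):    a*b + a*c = a * (b + c)             ->  sum-product (marginal probabilities)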

Belief propagation as a distributed algorithm
BP works in a distributed fashion (as a result, it can be parallelized).
Essentially, BP is a decentralized algorithm: global results through local exchange of information.
A simple example to illustrate this: counting soldiers.

Counting soldiers in a line
Can you think of a distributed algorithm that lets the commander count the soldiers?
(From David MacKay's book "Information Theory, Inference, and Learning Algorithms")

Counting soldiers in a line
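A minimal sketch of the protocol for a line of soldiers (the function below simulates every soldier applying the same local rule; the names are mine):

def count_soldiers_in_line(n):
    # Each soldier passes along "the count you received, plus one".
    forward = [0] * n     # forward[i]: count heard from the soldier in front of soldier i
    backward = [0] * n    # backward[i]: count heard from the soldier behind soldier i
    for i in range(1, n):
        forward[i] = forward[i - 1] + 1
    for i in range(n - 2, -1, -1):
        backward[i] = backward[i + 1] + 1
    # Every soldier can now report the same total: front count + back count + himself
    return [forward[i] + backward[i] + 1 for i in range(n)]

# count_soldiers_in_line(5) returns [5, 5, 5, 5, 5]: every soldier knows the total.

This is exactly the chain message-passing pattern from before: one forward sweep, one backward sweep, and each node combines the two incoming messages with its own contribution (here, +1 for itself).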

Counting soldiers in a tree Can we do the same for this case?

Counting soldiers in a tree

Counting soldiers
A simple example to illustrate BP.
The same idea can be used in cases that are seemingly more complex: counting the paths through a point in a grid, or the probability of passing through a node in the grid.
More generally, we have used the same idea for minimizing MRFs (a much more general problem).

Graphs with loops
How about counting these soldiers?
Hmmm… overcounting?