The EM Algorithm With Applications To Image Epitome


The EM Algorithm With Applications To Image Epitome
Kiera Henning, Jiwon Kim,

EM Problem Given data U, determine the best parameters Θ without knowing the values of the hidden variables J. Example: color segmentation (U: the color at each pixel; Θ: the mean color of each cluster; J: the cluster assignment for each pixel).
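To make the notation concrete, here is a minimal synthetic version of the color segmentation example (grayscale instead of color, and the specific values are illustrative assumptions, not from the slides): Θ generates the data, J is hidden, and only U is observed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Theta: true mean (grayscale) color of each of 2 clusters
theta_true = np.array([0.2, 0.8])

# J: hidden cluster assignment for each of 100 "pixels"
J = rng.integers(0, 2, size=100)

# U: observed color at each pixel = its cluster's mean + noise
U = theta_true[J] + rng.normal(0.0, 0.05, size=100)

# EM's task: recover theta from U alone, without ever seeing J.
```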

EM Algorithm Maximize the posterior probability of the parameters Θ given U, marginalizing over J. To do this, alternate between estimating J (the E-step) and Θ (the M-step). More precisely, the E-step estimates a distribution over J; equivalently, it computes a tight lower bound on the posterior, which the M-step then maximizes with respect to Θ.
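A minimal sketch of this alternation for the color segmentation example above, using a 1-D Gaussian mixture (the function name and the choice of quantile initialization are assumptions for illustration, not from the slides):

```python
import numpy as np

def em_gmm_1d(U, K=2, iters=50):
    """EM for a 1-D Gaussian mixture: alternate the E-step
    (distribution over hidden assignments J) and the M-step
    (re-estimate parameters theta given those responsibilities)."""
    mu = np.quantile(U, np.linspace(0.2, 0.8, K))  # theta: cluster means
    var = np.full(K, np.var(U))                    # theta: cluster variances
    w = np.full(K, 1.0 / K)                        # theta: mixing weights
    for _ in range(iters):
        # E-step: responsibility r[n, k] = p(J_n = k | U_n, theta)
        logp = (-0.5 * (U[:, None] - mu) ** 2 / var
                - 0.5 * np.log(2 * np.pi * var) + np.log(w))
        logp -= logp.max(axis=1, keepdims=True)   # for numerical stability
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: maximize the lower bound with respect to theta
        Nk = r.sum(axis=0)
        mu = (r * U[:, None]).sum(axis=0) / Nk
        var = (r * (U[:, None] - mu) ** 2).sum(axis=0) / Nk
        w = Nk / len(U)
    return mu, var, w
```

On data drawn from two well-separated clusters, the recovered means converge to the true cluster colors even though the assignments J are never observed.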

Using the EM Algorithm Identify U, Θ, and J, and derive the update equations

Using the EM Algorithm Identify U, Θ, and J, and derive the update equations Convergence guaranteed, but slow

Using the EM Algorithm Identify U, Θ, and J, and derive the update equations Convergence guaranteed, but slow: variational EM Instead of directly estimating the distribution over J in the E-step, use a parameterized form for the distribution, with a set of variational parameters. This reduces the computation per update and speeds up the algorithm. However, solution quality may suffer: the restricted family makes the lower bound looser, so the E-step no longer maximizes it exactly. It is also up to the designer to choose a good parameterization.
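One very simple parameterized family, as a sketch of the idea (my example, not one from the slides): restrict q(J) to point masses, so the only variational parameter per data point is its single best assignment. This is "hard EM", which is cheaper per iteration than full EM but optimizes a looser bound:

```python
import numpy as np

def hard_em_1d(U, K=2, iters=50):
    """Variational-style EM where q(J) is restricted to a delta
    (point-mass) family: the E-step stores one variational parameter
    per data point (its single best assignment) instead of a full
    distribution over J. Cheaper per iteration, but the bound is
    looser, so the solution can be worse than full EM's."""
    mu = np.quantile(U, np.linspace(0.2, 0.8, K))  # theta: cluster means
    J_hat = np.zeros(len(U), dtype=int)
    for _ in range(iters):
        # E-step: variational parameter = best assignment per point
        J_hat = np.argmin((U[:, None] - mu) ** 2, axis=1)
        # M-step: maximize the (looser) bound with respect to theta
        for k in range(K):
            if np.any(J_hat == k):
                mu[k] = U[J_hat == k].mean()
    return mu, J_hat
```

In 1-D with fixed variances this reduces to k-means, which is why k-means is often described as a degenerate (hard) limit of EM.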

Using the EM Algorithm Identify U, Θ, and J, and derive the update equations Convergence guaranteed, but slow: variational EM Each implementation is application-specific: unlike graph cuts, there is no general code that can simply be plugged in.

Image Epitome “The epitome of an image is its miniature, condensed version containing the essence of the textural and shape properties of the image.”

Applying EM
Data (U): set of patches
Parameters to be estimated (Θ): epitome
Hidden variables (J): mappings from epitome to image
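The slide's correspondence can be sketched in code. This is a minimal, simplified version under stated assumptions (grayscale patches, a square epitome with non-wrapping window placements, a fixed noise level sigma), not the authors' implementation, which also learns per-pixel variances:

```python
import numpy as np

def learn_epitome(patches, E=16, p=4, iters=10, sigma=0.1):
    """EM for a toy epitome model.
    patches: array of shape (N, p, p)  -- the data U
    returns: epitome of shape (E, E)   -- the parameters theta
    Hidden variables J: which epitome window each patch maps to."""
    rng = np.random.default_rng(0)
    e = patches.mean() + 0.1 * rng.standard_normal((E, E))  # noisy init
    positions = [(i, j) for i in range(E - p + 1) for j in range(E - p + 1)]
    for _ in range(iters):
        num = np.zeros((E, E))  # responsibility-weighted pixel sums
        den = np.zeros((E, E))  # total responsibility per pixel
        for x in patches:
            # E-step: posterior over mappings J for this patch
            d2 = np.array([((x - e[i:i+p, j:j+p]) ** 2).sum()
                           for (i, j) in positions])
            r = np.exp(-(d2 - d2.min()) / (2 * sigma ** 2))
            r /= r.sum()
            # M-step accumulators: spread the patch into the epitome
            for weight, (i, j) in zip(r, positions):
                num[i:i+p, j:j+p] += weight * x
                den[i:i+p, j:j+p] += weight
        e = num / np.maximum(den, 1e-12)
    return e
```

Because many overlapping patches compete for the same epitome pixels, repeated textures in the image get averaged into a single compact region of the epitome.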

Examples [figure: epitomes learned with high vs. low initial noise, from 1000 and 4000 patches, shown over EM iterations and after 10 iterations]

Examples 4000 Patches

Examples [figure: epitomes learned from 4000 patches with high vs. low initial noise, and the image reconstructed from the epitome]

Epitome Extensions Suggestions courtesy of Nebojsa Jojic. Video Epitome: the image epitome extended to three dimensions. Motion Epitome: articulated object trajectories, represented as sequences of joint angles.

Video Epitome Idea: Extend the concept of an image epitome to video. The epitome of a video would then be a much smaller video containing the textural, shape, and motion properties of the original.

Motion Epitome Idea: Apply the concept of image epitomes to motion data. The motion epitome would then be a significantly smaller amount of data that still captures the essence of the motion.

Motion Epitome

References
Advances in Algorithms for Inference and Learning in Complex Probability Models, Brendan J. Frey and Nebojsa Jojic
Epitomic Analysis of Appearance and Shape, Nebojsa Jojic, Brendan J. Frey, and Anitha Kannan
Epitome Representation of Images and Notes on Epitome-based Silhouette Generation, Shinko Y. Cheng
The Expectation Maximization Algorithm, Frank Dellaert
Motion Segmentation using EM – a short Tutorial, Yair Weiss