A Dynamic Conditional Random Field Model for Object Segmentation in Image Sequences
Duke University Machine Learning Group
Presented by Qiuhua Liu, March 23, 2006

Paper by Yang Wang and Qiang Ji, Rensselaer Polytechnic Institute, CVPR 2005

Outline
- Markov Random Field (MRF)
- Conditional Random Field (CRF)
- The Spatial-Temporal CRF Model for Video Sequence Segmentation
- Results & Conclusions

Markov Random Field
Random Field: Let F = {F_1, ..., F_m} be a family of random variables defined on the set S, in which each random variable F_i takes a value f_i in a label set L. The family F is called a random field.
Markov Random Field: F is said to be a Markov random field on S with respect to a neighborhood system N if and only if the following two conditions are satisfied:
1. Positivity: P(f) > 0 for every configuration f;
2. Markovianity: P(f_x | f_{S-{x}}) = P(f_x | f_{N_x}), i.e., the label at a site depends only on the labels of its neighbors.

Conditional Random Field
Conditional Random Field: a Markov random field (Y) globally conditioned on another random field (X).
Figure 1: Graphical structure of a chain-structured CRF for sequences. The variables corresponding to unshaded nodes are not generated by the model.

Posterior Probability in CRF
Lafferty et al. [2] define the probability of a particular label sequence y given observation sequence x as a normalized product of potential functions:

p(y | x) = (1 / Z(x)) exp( sum_i sum_j lambda_j t_j(y_{i-1}, y_i, x, i) + sum_i sum_k mu_k s_k(y_i, x, i) )

where t_j(y_{i-1}, y_i, x, i) is a transition feature function of the entire observation sequence and the labels at positions i and i−1 in the label sequence, and s_k(y_i, x, i) is a state feature function of the label at position i and the observation sequence.
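The normalized product above can be made concrete with a tiny numerical sketch (not from the paper; the feature functions are collapsed into made-up score tables): the unnormalized score of a labeling is the exponential of summed transition and state scores, and Z(x) sums that score over all possible label sequences.

```python
import itertools
import numpy as np

# Toy chain CRF: 2 labels, length-3 sequence. The weighted feature functions
# are collapsed into fixed score tables (hypothetical numbers, for illustration).
n_labels, length = 2, 3
trans = np.array([[1.0, -0.5],              # trans[a, b]: score of label pair (y[i-1]=a, y[i]=b)
                  [-0.5, 1.0]])
state = np.array([[0.2, 0.9, 0.1],          # state[l, i]: score of label l at position i
                  [0.8, 0.1, 0.9]])

def score(y):
    """Unnormalized log-score: sum of state and transition feature scores."""
    s = sum(state[y[i], i] for i in range(length))
    s += sum(trans[y[i - 1], y[i]] for i in range(1, length))
    return s

# Partition function Z(x): brute-force sum over all label sequences.
Z = sum(np.exp(score(y)) for y in itertools.product(range(n_labels), repeat=length))

def prob(y):
    return np.exp(score(y)) / Z

# The normalized probabilities sum to one over all sequences.
total = sum(prob(y) for y in itertools.product(range(n_labels), repeat=length))
print(round(total, 6))  # 1.0
```

Brute-force enumeration is exponential in sequence length; in practice Z(x) is computed with the forward algorithm, but enumeration makes the definition transparent.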

The Spatial-Temporal CRF for Video Segmentation
Observation: intensity and motion information at point x.
Segmentation label: one of the L independently moving objects composing the scene.
The conditional random field: the segmentation labels obey the Markov property given the observed data:

p(s^k | s^{1:k-1}, z^{1:k}) = p(s^k | s^{k-1}, z^{1:k})

where z^{1:k} is the sequence of the observed data up to time k.

The Posterior Probability, the State Transition Probability and the Observation Likelihood
The posterior probability of the segmentation field is given by a Gibbs distribution built from one-pixel and two-pixel potentials:
- The one-pixel potential reflects the information from the observed data at a single position;
- The two-pixel potential imposes the spatial interaction (or pairwise constraint).
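A generic sketch of how one-pixel and two-pixel potentials combine into an energy whose exponential gives the (unnormalized) posterior. The numbers and the Potts-style pairwise term are illustrative assumptions; the paper's actual potential definitions differ.

```python
import numpy as np

# Illustrative Gibbs energy for a label field: one-pixel (data) potentials
# plus two-pixel (pairwise smoothness) potentials over 4-connected neighbors.
rng = np.random.default_rng(0)
H, W, L = 4, 4, 2
unary = rng.random((H, W, L))          # one-pixel potential per site and label
labels = rng.integers(0, L, (H, W))    # a candidate segmentation field
beta = 0.5                             # pairwise weight (hypothetical)

def energy(labels):
    e = sum(unary[i, j, labels[i, j]] for i in range(H) for j in range(W))
    # Potts-style two-pixel potential: penalize differing neighbor labels.
    for i in range(H):
        for j in range(W):
            if i + 1 < H:
                e += beta * (labels[i, j] != labels[i + 1, j])
            if j + 1 < W:
                e += beta * (labels[i, j] != labels[i, j + 1])
    return e

# Lower energy corresponds to higher unnormalized posterior exp(-energy).
print(energy(labels))
```

For a uniform labeling the pairwise term vanishes, so the energy reduces to the summed one-pixel potentials, which is a quick sanity check on the implementation.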

The State Transition Probability
The temporal dependence between consecutive segmentation fields is modeled by the state transition probability, again built from potentials:
- The one-pixel potential models the label state transition for a single site;
- The two-pixel potential imposes the spatial connectivity constraint to form contiguous regions.

The State Transition Probability (cont.)
To encourage a point to take the same segmentation label as its temporal neighborhood, the one-pixel potential is expressed in terms of two neighborhoods: M_x, the temporal neighborhood, and N_x, the spatial neighborhood.
Figure 2: The 5-pixel temporal neighborhood and the 4-pixel spatial neighborhood.

The State Transition Probability (cont.)
The smoothness constraint is imposed by two terms: spatial connectivity (neighboring sites in the same frame are encouraged to share a label) and temporal continuity (a site is encouraged to keep the label of its temporal neighbors in the previous frame).
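A minimal sketch of the two smoothness terms, using the neighborhoods of Figure 2 (the weights lam_s and lam_t and the simple label-agreement counts are assumptions, not the paper's exact potentials): spatial connectivity counts label agreement with the 4-pixel spatial neighborhood in the current frame, and temporal continuity counts agreement with the 5-pixel temporal neighborhood (the same site and its 4 neighbors) in the previous frame.

```python
import numpy as np

def smoothness(curr, prev, x, y, lam_s=1.0, lam_t=1.0):
    """Label-agreement score at site (x, y); higher = smoother and more continuous."""
    H, W = curr.shape
    nbrs = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    # Spatial connectivity: agreement with 4-pixel spatial neighborhood (current frame).
    spatial = sum(curr[x, y] == curr[i, j]
                  for i, j in nbrs if 0 <= i < H and 0 <= j < W)
    # Temporal continuity: agreement with 5-pixel temporal neighborhood (previous frame).
    temporal = sum(curr[x, y] == prev[i, j]
                   for i, j in nbrs + [(x, y)] if 0 <= i < H and 0 <= j < W)
    return lam_s * spatial + lam_t * temporal

prev = np.zeros((3, 3), dtype=int)
curr = np.zeros((3, 3), dtype=int)
print(smoothness(curr, prev, 1, 1))  # 4 spatial + 5 temporal agreements = 9.0
```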

The Observation Likelihood
The observation model p(z^k | s^k) is also formulated as a conditional random field. The observation data are represented as z^k = {g^k, m^k}, the intensity and motion information for the k-th video frame, and the two are assumed independent.

The Observation Likelihood (cont.)
For the intensity likelihood, the one-pixel potential is set from a probability density modeled as a Gaussian mixture, whose parameters are estimated via EM from the segmented images at time k−1.
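A minimal 1-D EM fit of a two-component Gaussian mixture to intensity samples, sketching the estimation step. The data, initialization, and component count are synthetic assumptions; the paper fits per-object mixtures from the previous frame's segmentation.

```python
import numpy as np

# Synthetic intensity samples from two populations (e.g., two objects).
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(50, 5, 300), rng.normal(180, 10, 300)])

w = np.array([0.5, 0.5])          # mixture weights
mu = np.array([30.0, 200.0])      # initial means (deliberately off)
var = np.array([100.0, 100.0])    # initial variances

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(50):
    # E-step: responsibility of each component for each sample.
    r = w * gauss(data[:, None], mu, var)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances.
    n = r.sum(axis=0)
    w = n / len(data)
    mu = (r * data[:, None]).sum(axis=0) / n
    var = (r * (data[:, None] - mu) ** 2).sum(axis=0) / n

print(mu.round(1))  # means should land near 50 and 180
```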

The Observation Likelihood (cont.)
For the intensity likelihood, the two-pixel potential is defined to be non-zero only when the two pixels belong to different objects. For the motion likelihood, the one-pixel potential uses a motion density modeled as a single Gaussian.

The Observation Likelihood (cont.)
For the motion likelihood, the two-pixel potential imposes smoothness of motion within one object. By adding the two-pixel potentials, the conditional independence assumption on the observed data made in HMMs and MRFs is no longer required.

The S-T CRF Filter
Given the potentials of the distribution at time k, the posterior at time k+1 is recursively updated by combining the state transition probability and the observation likelihood, which can be efficiently approximated by a CRF with the potentials in eqs. (5a) and (5b) of Appendix A.
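The recursion has the structure of a generic Bayesian filter: predict with the state transition, then weight by the observation likelihood and renormalize. The paper approximates this with CRF potentials over the whole field; the sketch below (toy numbers, single site, L = 2 labels) shows the exact scalar version only.

```python
import numpy as np

L = 2
prior = np.array([0.5, 0.5])                # p(s_k | z_1..k) at one site
transition = np.array([[0.9, 0.1],          # transition[a, b] = p(s_{k+1}=b | s_k=a)
                       [0.1, 0.9]])
likelihood = np.array([0.8, 0.3])           # p(z_{k+1} | s_{k+1})

predicted = transition.T @ prior            # prediction step
posterior = likelihood * predicted          # measurement update
posterior /= posterior.sum()                # normalize

print(posterior.round(3))  # [0.727 0.273]
```

With a flat prior the prediction stays flat, so the update is driven entirely by the likelihood ratio 0.8 : 0.3, giving 0.4/0.55 ≈ 0.727 for the first label.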

Initialization and Optimization
Initialization of the segmentation field: the initial frame is divided into small blocks, and the number of objects is determined by clustering the motion of each block, with each object assigned one motion model.
Optimization: the segmentation is obtained by an iterative procedure.
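The initialization idea can be sketched as clustering per-block motion vectors; each cluster then becomes one object with its own motion model. The block motions, the use of k-means, and the deterministic seeding below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

# Synthetic block motions: a static background and one object moving right.
rng = np.random.default_rng(2)
motions = np.vstack([rng.normal([0.0, 0.0], 0.2, (20, 2)),
                     rng.normal([5.0, 0.0], 0.2, (10, 2))])

def kmeans(x, k, iters=10):
    """Tiny k-means; seeded from the first and last samples for determinism."""
    centers = x[[0, -1]].copy() if k == 2 else x[:k].copy()
    for _ in range(iters):
        assign = ((x[:, None] - centers) ** 2).sum(-1).argmin(axis=1)
        centers = np.array([x[assign == j].mean(axis=0) for j in range(k)])
    return centers, assign

centers, assign = kmeans(motions, 2)
print(sorted(centers[:, 0].round(1)))  # horizontal motions, roughly 0 and 5
```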

Results and Conclusions
24-pixel spatial neighborhood; 15-pixel temporal neighborhood. C implementation speed: two 320×240 frames per second on a Pentium G PC.

Mother-Daughter Sequence
The sofa behind the mother and the daughter's shoulder are segmented more accurately.

Coastguard Sequence
The bottom of the ship and the trail of the boat are segmented more accurately.

References
[1] Wallach, H.M.: Conditional Random Fields: An Introduction. Technical Report MS-CIS-04-21, University of Pennsylvania (2004).
[2] Lafferty, J., McCallum, A., Pereira, F.: Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. Proc. Int'l Conf. Machine Learning (2001).
[3] Wang, Y., Loe, K.F., Wu, J.K.: A Dynamic Conditional Random Field Model for Foreground and Shadow Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 28(2) (2006).