Transformation-invariant clustering using the EM algorithm

Brendan Frey and Nebojsa Jojic, IEEE Trans. on PAMI, 25(1), 2003
Presented by Yan Karklin, CNS presentation, 08/10/2004

Goal
- unsupervised learning of image structure, regardless of transformation
- probabilistic description of the data
- clustering as density modeling: grouping "similar" images together

Invariance
- a manifold in data space; all points on the manifold are "equivalent"
- complex even for basic transformations
- how to approximate it?

Approximating the Invariance Manifold
- approximate the manifold by a discrete set of points
- sparse matrices Ti map the canonical feature z into the transformed feature x (observed)
- expressed as a Gaussian probability model, with all possible transformations T enumerated

This is what it would look like for a 2x3 image with pixel-shift translations (wrap-around): the slide shows the canonical image z, the six shift matrices {T1, ..., T6}, and the resulting transformed images x.
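A minimal sketch, not the authors' code, of how the six wrap-around shift matrices for this 2x3 example could be built (NumPy assumed):

```python
# Toy construction of the transformation set {T1, ..., T6} for a 2x3 image
# with wrap-around pixel shifts; each Ti is a sparse permutation matrix.
import numpy as np

H, W = 2, 3          # image height and width
D = H * W            # number of pixels in the flattened image

def shift_matrix(dy, dx):
    """Permutation matrix that shifts a flattened HxW image by (dy, dx) with wrap-around."""
    T = np.zeros((D, D))
    for y in range(H):
        for x in range(W):
            src = y * W + x
            dst = ((y + dy) % H) * W + ((x + dx) % W)
            T[dst, src] = 1.0
    return T

# Enumerate all H*W = 6 translations.
Ts = [shift_matrix(dy, dx) for dy in range(H) for dx in range(W)]

z = np.arange(D, dtype=float)   # a toy canonical image, flattened row by row
x = Ts[1] @ z                   # z shifted one pixel to the right (with wrap-around)
```

Because each Ti is a permutation, these matrices are extremely sparse, which (together with FFTs for translations, per the conclusions slide) is what keeps the enumeration over transformations cheap.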

The full statistical model
For one feature (one cluster):
- latent representation, Gaussian pre-transformation with noise Φ:  p(z) = N(z; μ, Φ)
- data, given the latent representation, Gaussian post-transformation with noise Ψ:  p(x | z, T) = N(x; Tz, Ψ)
- joint of all variables:  p(x, z, T) = P(T) p(z) p(x | z, T)
For multiple features (clusters), a mixture model:
  p(x, z, T, c) = P(c) P(T) p(z | c) p(x | z, T),  where  p(z | c) = N(z; μ_c, Φ_c)

The full statistical model
The generative equation: z = μ_c + pre-transformation noise (covariance Φ_c), then x = Tz + post-transformation noise (covariance Ψ).
For each "feature" there is a canonical mean and a canonical variance; an image contains one of the canonical features (mixture model) that has undergone one transformation.
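A minimal sketch of this generative process under the model above (NumPy assumed; pi, mus, Phis, Ts, Psi are hypothetical names for the mixing proportions, canonical means/variances, transformation set, and post-transformation noise):

```python
# Sample one image from the transformation-invariant mixture:
# pick a cluster c, draw a canonical image z, pick a transformation T,
# then add post-transformation noise.
import numpy as np

rng = np.random.default_rng(0)

def sample_image(pi, mus, Phis, Ts, Psi):
    c = rng.choice(len(pi), p=pi)                    # cluster index, P(c) = pi[c]
    z = rng.multivariate_normal(mus[c], Phis[c])     # canonical image, z ~ N(mu_c, Phi_c)
    T = Ts[rng.integers(len(Ts))]                    # transformation, uniform over the enumerated set
    x = rng.multivariate_normal(T @ z, Psi)          # observed image, x ~ N(Tz, Psi)
    return x, c, T
```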

Inference
Because the model is linear and Gaussian, the marginal p(x | T, c) and the posterior p(z | x, T, c) are both Gaussian:
  p(x | T, c) = N(x; T μ_c, T Φ_c Tᵀ + Ψ)
Posteriors for inferring the latent variables T, c, z:
  P(T, c | x) ∝ P(T) P(c) p(x | T, c)
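A minimal sketch of this posterior computation, with the same hypothetical parameter names as in the sampling sketch (SciPy assumed for the Gaussian density):

```python
# Posterior over (cluster, transformation) for one observed image x,
# using the Gaussian marginal p(x | T, c) = N(x; T mu_c, T Phi_c T^T + Psi).
import numpy as np
from scipy.stats import multivariate_normal

def posterior_T_c(x, pi, mus, Phis, Ts, Psi):
    """Return r[c, i] = P(c, T_i | x), assuming a uniform prior over transformations."""
    C, K = len(pi), len(Ts)
    logp = np.empty((C, K))
    for c in range(C):
        for i, T in enumerate(Ts):
            mean = T @ mus[c]
            cov = T @ Phis[c] @ T.T + Psi
            logp[c, i] = np.log(pi[c]) - np.log(K) \
                + multivariate_normal.logpdf(x, mean=mean, cov=cov)
    logp -= logp.max()            # subtract the max for numerical stability
    r = np.exp(logp)
    return r / r.sum()
```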

Adapting the rest of the parameters
The canonical means, the pre-transformation noise Φ, and the post-transformation noise Ψ are all learned with EM:
- E-step: assume the parameters are known and infer P(z, T, c | x)
- M-step: update the parameters
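A minimal sketch of one EM iteration built on the posterior_T_c sketch above; it updates only the mixing proportions and canonical means, and it simplifies the mean update by "untransforming" each image (the exact M-step would also weight by the posterior covariance of z):

```python
# One simplified EM iteration for the transformation-invariant mixture.
import numpy as np

def em_step(X, pi, mus, Phis, Ts, Psi):
    """X: (N, D) data matrix; returns updated mixing proportions and canonical means."""
    N, D = X.shape
    C, K = len(pi), len(Ts)

    # E-step: responsibilities R[n, c, i] = P(c, T_i | x_n)
    R = np.zeros((N, C, K))
    for n in range(N):
        R[n] = posterior_T_c(X[n], pi, mus, Phis, Ts, Psi)

    # M-step: mixing proportions
    new_pi = R.sum(axis=(0, 2)) / N

    # M-step: canonical means -- map each image back through T^-1 (= T.T for
    # permutation matrices) and average, weighted by the responsibilities.
    new_mus = np.zeros((C, D))
    for c in range(C):
        acc = np.zeros(D)
        for i, T in enumerate(Ts):
            acc += (R[:, c, i][:, None] * (X @ T)).sum(axis=0)
        new_mus[c] = acc / max(R[:, c, :].sum(), 1e-12)
    return new_pi, new_mus
```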

Experiments
Recovering 4 clusters with the transformation-invariant model, compared with 4 clusters learned without modeling transformations.

Pre/post transformation noise
Learned mean and variance for three models:
- single Gaussian model of the image: mean μ, variance Φ
- transformation-invariant model, no post-transformation noise: μ, Φ
- transformation-invariant model, with post-transformation noise: μ, Φ, Ψ

Conclusions
Strengths:
- fast (uses sparse matrices, FFT)
- incorporates pre- and post-transformation noise
- works on artificial data, clustering simple image sets, and cleaning up somewhat contrived examples
- can be extended to make use of time-series data and to account for more transformations
Weaknesses:
- poor transformation model: transformations are fixed, pre-specified, and must be sparse
- poor feature model: Gaussian representation of structure
