Probabilistic Inference and Learning in Computer Vision. An extract from the BMVC2000 pre-conference tutorial given by Prof. Andrew Blake, Microsoft Research.

Learning low-level vision, Freeman and Pasztor, Proc. ICCV99. This paper proposes a persuasive general approach to inference in image arrays. The classic application is restoration of degraded images, including super-resolution. This is a classic Bayesian piece of work, the latest in an honourable succession that began with “intrinsic images” (Barrow and Tenenbaum, 1978), moved on to regularisation (Poggio et al., 1983), and continued via Markov random fields (MRFs) and Gibbs sampling (Geman and Geman, 1984) and probabilistic graphical models (Pearl, 1988). It characterises the striking new trend towards exemplar-based learning. It’s certainly bracing stuff: where’s the catch?

Learning graphical models of images, videos and their spatial transformations, Frey and Jojic, Proc. UAI2000. They have put together an exciting story that uses “latent variable modelling”, second nature in the probabilistic inference (NIPS) community, to explain and analyse images and image sequences. The exciting part is that, apparently, all you have to do is describe how an image is constructed, and you automatically get an analysis of the images: you take that same generative description and push it through the EM machinery. It seems almost miraculous, in the same way that declarative programming (PROLOG) seems miraculous, that the analytical machinery is generated for you automatically. Is there a catch here, or should we all be doing this?
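To make the message-passing side of this concrete, here is a minimal sum-product belief propagation sketch on a one-dimensional chain of discrete variables. It is only a toy illustration of the kind of inference machinery behind Freeman and Pasztor's approach; their model is a two-dimensional grid of candidate scene patches, and the potentials below are invented for the example.

```python
# Minimal sum-product belief propagation on a chain MRF with discrete states.
# Toy illustration only: a 1D chain with made-up potentials, not the authors'
# 2D grid of candidate scene patches.
import numpy as np

def chain_bp(unary, pairwise):
    """unary: (N, K) local evidence; pairwise: (K, K) neighbour compatibility.
    Returns (N, K) marginal beliefs (exact on a chain)."""
    N, K = unary.shape
    fwd = np.zeros((N, K))   # messages passed left -> right
    bwd = np.zeros((N, K))   # messages passed right -> left
    fwd[0] = np.ones(K)
    bwd[-1] = np.ones(K)
    for i in range(1, N):
        m = pairwise.T @ (unary[i - 1] * fwd[i - 1])
        fwd[i] = m / m.sum()                     # normalise for numerical stability
    for i in range(N - 2, -1, -1):
        m = pairwise @ (unary[i + 1] * bwd[i + 1])
        bwd[i] = m / m.sum()
    beliefs = unary * fwd * bwd
    return beliefs / beliefs.sum(axis=1, keepdims=True)

# Toy example: 5 nodes, 3 states, smoothness-favouring pairwise potential.
rng = np.random.default_rng(0)
unary = rng.random((5, 3))
pairwise = np.full((3, 3), 0.1) + 0.8 * np.eye(3)
print(chain_bp(unary, pairwise))
```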

Probabilistic Graphical Models for image motion analysis (Frey and Jojic, 99/00)

Latent image model: x is the unknown (or latent) image; z is the image produced by the model, or found in real life, e.g. p(z|x) = N(x, Ψ).

Mixture model: c is the unknown cluster centre; z is the sampled value, e.g. p(z|c) = N(μ_c, Φ_c).

[Figure: the two graphical models, x → z and c → z, with a legend distinguishing continuous and discrete random variables.]
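As a concrete (and deliberately simplified) illustration of the mixture model above, the sketch below runs EM for a Gaussian mixture over vectorised frames, with a prior π over clusters and diagonal covariances. The data, initialisation and variable names are placeholders chosen for the example, not Frey and Jojic's implementation.

```python
# Minimal EM sketch for the slide's mixture model: a discrete cluster c with
# prior pi_c and Gaussian observation p(z|c) = N(z; mu_c, Phi_c).
# Frames are vectorised images; covariances are kept diagonal for simplicity.
import numpy as np

def em_gaussian_mixture(Z, K, n_iter=50, eps=1e-6):
    N, D = Z.shape
    rng = np.random.default_rng(0)
    pi = np.full(K, 1.0 / K)
    mu = Z[rng.choice(N, K, replace=False)].copy()
    phi = np.ones((K, D))                         # diagonal covariances
    for _ in range(n_iter):
        # E-step: responsibilities r[n, k] = p(c = k | z_n)
        log_r = np.stack([
            np.log(pi[k])
            - 0.5 * np.sum(np.log(2 * np.pi * phi[k]))
            - 0.5 * np.sum((Z - mu[k]) ** 2 / phi[k], axis=1)
            for k in range(K)], axis=1)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing proportions, means and variances
        Nk = r.sum(axis=0) + eps
        pi = Nk / N
        mu = (r.T @ Z) / Nk[:, None]
        phi = (r.T @ (Z ** 2)) / Nk[:, None] - mu ** 2 + eps
    return pi, mu, phi

# Toy usage: 200 synthetic 16x16 "frames" drawn from two clusters.
rng = np.random.default_rng(1)
Z = np.vstack([rng.normal(0.2, 0.05, (100, 256)), rng.normal(0.8, 0.05, (100, 256))])
pi, mu, phi = em_gaussian_mixture(Z, K=2)
print(pi, mu.mean(axis=1))
```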

Transformed latent image model: P(l = L) = π_l, p(z|x, l) = N(T_l x, Ψ).

Principal Components / Factor Analysis: p(y) = N(0, I) (parameters); x = Λy + μ (expansion); p(z) = N(x, Ψ) (noise addition).

[Figure: the two graphical models, (x, l) → z and y → x → z.]
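The factor-analysis line of the slide can be read directly as a generative recipe. The following sketch samples one image from it, with placeholder values for the loading matrix Λ, mean μ and noise variances Ψ; in practice these would be learned from data, e.g. by EM.

```python
# Minimal sketch of the factor analysis generative process on the slide:
# draw low-dimensional factors y ~ N(0, I), expand to an image x = Lambda y + mu,
# then add observation noise to get z ~ N(x, Psi).
# Lambda, mu and Psi below are random placeholders, not learned parameters.
import numpy as np

rng = np.random.default_rng(0)
D, Q = 256, 5                                 # 16x16 flattened image, 5 factors

Lambda = 0.1 * rng.normal(size=(D, Q))        # factor loading (expansion) matrix
mu = rng.random(D)                            # mean image
Psi = 0.01 * np.ones(D)                       # diagonal observation noise variances

y = rng.normal(size=Q)                        # parameters: p(y) = N(0, I)
x = Lambda @ y + mu                           # expansion: latent image
z = x + np.sqrt(Psi) * rng.normal(size=D)     # noise addition: observed image

# The implied marginal is p(z) = N(mu, Lambda Lambda^T + diag(Psi)).
cov = Lambda @ Lambda.T + np.diag(Psi)
print(z.shape, cov.shape)
```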

Results: Image motion analysis by PGM

Combined transformed mixture model: p(z|x, l) = N(T_l x, Φ_l + Ψ).

[Figure: graphical model with nodes c, y, x, l and z, cluster parameters μ_c, Φ_c, and learned clusters for c = 1, 2, 3.]

Applications: video summary, image segmentation, sensor noise removal, image stabilisation.
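For the combined model, the per-frame E-step amounts to computing a joint posterior over the cluster c and the transformation l. The sketch below does this for a toy family of integer pixel shifts with a single shared isotropic noise variance (the slide's per-transformation covariance Φ_l + Ψ collapsed to one scalar); it is an illustrative simplification, not the authors' transformed-mixture code, and the parameter names are chosen for the example.

```python
# Minimal E-step sketch for a transformed mixture: jointly infer the cluster c
# and the discrete transformation l of each frame, with
# p(z | c, l) = N(z; T_l mu_c, var * I)  (a simplification of the slide's model).
import numpy as np

def e_step_transformed_mixture(z, mus, pis_c, shifts, pis_l, var):
    """z: (H, W) frame; mus: (K, H, W) cluster means; pis_c: (K,) cluster prior;
    shifts: list of (dy, dx); pis_l: prior over shifts; var: scalar noise variance.
    Returns the (K, L) joint posterior P(c, l | z)."""
    K, L = len(mus), len(shifts)
    log_post = np.empty((K, L))
    for k in range(K):
        for l, (dy, dx) in enumerate(shifts):
            pred = np.roll(np.roll(mus[k], dy, axis=0), dx, axis=1)   # T_l mu_c
            log_post[k, l] = (np.log(pis_c[k]) + np.log(pis_l[l])
                              - 0.5 * np.sum((z - pred) ** 2) / var)
    log_post -= log_post.max()                    # stabilise before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

# Toy usage: one frame generated from cluster 1, shifted by (0, 1).
rng = np.random.default_rng(0)
mus = rng.random((3, 8, 8))
z = np.roll(mus[1], 1, axis=1) + 0.01 * rng.normal(size=(8, 8))
shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
post = e_step_transformed_mixture(z, mus, np.full(3, 1/3), shifts, np.full(9, 1/9), 0.01**2)
c_hat, l_hat = np.unravel_index(np.argmax(post), post.shape)
print(c_hat, shifts[l_hat])
```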