A Bidirectional Matching Algorithm for Deformable Pattern Detection with Application to Handwritten Word Retrieval by K.W. Cheung, D.Y. Yeung, R.T. Chin.

Presentation transcript:

A Bidirectional Matching Algorithm for Deformable Pattern Detection with Application to Handwritten Word Retrieval by K.W. Cheung, D.Y. Yeung, R.T. Chin

Modeling: Model Representation
A model H_j is a cubic B-spline; the model shape parameter w is its set of control points.
[Figure: points w1, w2, w3 in parameter space and the corresponding model shapes H_j(w1), H_j(w2), H_j(w3).]
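A minimal sketch, assuming the standard clamped cubic B-spline construction, of how a shape H_j(w) can be sampled from its control points w; the function name model_shape and the example control points are illustrative, not from the paper.

    # Minimal sketch: a 2-D shape model as a cubic B-spline whose shape
    # parameter w is the list of control points (illustrative, not the authors' code).
    import numpy as np
    from scipy.interpolate import BSpline

    def model_shape(w, num_samples=200):
        """Sample points on the cubic B-spline curve H(w) defined by control points w (n x 2)."""
        w = np.asarray(w, dtype=float)
        n, k = len(w), 3                                   # n control points, cubic spline
        # Clamped knot vector: the curve starts and ends at the end control points.
        t = np.concatenate([np.zeros(k + 1),
                            np.linspace(0.0, 1.0, n - k + 1)[1:-1],
                            np.ones(k + 1)])
        spline = BSpline(t, w, k)
        u = np.linspace(0.0, 1.0, num_samples)
        return spline(u)                                   # (num_samples, 2) points on the shape

    # Example: control points for a letter-like stroke.
    curve = model_shape([(0, 0), (0, 2), (1, 3), (2, 2), (2, 0), (1, -1), (0, 0)])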

Criterion Function Formulation
– Model deformation criterion: Mahalanobis distance.
– Data mismatch criterion: negative log of a product of mixture-of-Gaussians densities.
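A hedged reconstruction of the two criteria (assumed forms consistent with the descriptions above, not copied from the slides), with \hat{w} the undeformed shape parameter, \Sigma its covariance, d_i the data points and m_j(w) the model points sampled from the spline:

E_{\mathrm{def}}(w) = (w - \hat{w})^{\top} \Sigma^{-1} (w - \hat{w})

E_{\mathrm{mis}}(w) = -\sum_{i} \log \frac{1}{J} \sum_{j=1}^{J} \mathcal{N}\!\left(d_i \mid m_j(w), \sigma^{2} I\right)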

Bayesian Formulation
– Prior distribution (without data)
– Likelihood function
– Posterior distribution (with data)
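In symbols (standard Bayes rule, with H_j the model, w the shape parameter and D the data):

P(w \mid D, H_j) = \frac{P(D \mid w, H_j)\, P(w \mid H_j)}{P(D \mid H_j)} \propto P(D \mid w, H_j)\, P(w \mid H_j)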

Bayesian Inference: Matching
Matching is performed by maximum a posteriori (MAP) estimation using the EM algorithm.
[Figure: the MAP estimate as a point in parameter space.]
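A simplified EM-for-MAP sketch under my own assumptions: model points are taken to be linear in w, i.e. m_j(w) = B_j w (as holds for B-spline sample points), the noise variance is fixed, and the affine transformation estimated in the paper is omitted. The function map_match and its argument names are illustrative, not the authors' code.

    # Simplified EM for MAP estimation of the shape parameter w
    # (assumptions as stated above; not the authors' implementation).
    import numpy as np

    def map_match(D, B, w0, Sigma0, sigma2=1.0, n_iter=50):
        """MAP estimate of w with prior N(w0, Sigma0) and a mixture-of-Gaussians likelihood.

        D      : (N, 2) data points
        B      : (J, 2, K) array; model point j is B[j] @ w, where w has length K
        w0     : (K,) undeformed (mean) shape parameter
        Sigma0 : (K, K) prior covariance of w
        """
        D = np.asarray(D, dtype=float)
        B = np.asarray(B, dtype=float)
        w0 = np.asarray(w0, dtype=float)
        J = len(B)
        Sigma0_inv = np.linalg.inv(Sigma0)
        w = w0.copy()
        for _ in range(n_iter):
            # E-step: responsibility of model point j for data point i.
            M = np.einsum('jdk,k->jd', B, w)                      # (J, 2) model points
            sq = ((D[:, None, :] - M[None, :, :]) ** 2).sum(-1)   # (N, J) squared distances
            log_r = -sq / (2.0 * sigma2)
            log_r -= log_r.max(axis=1, keepdims=True)
            r = np.exp(log_r)
            r /= r.sum(axis=1, keepdims=True)
            # M-step: regularized weighted least squares for w.
            A = Sigma0_inv.copy()
            b = Sigma0_inv @ w0
            for j in range(J):
                A += r[:, j].sum() * B[j].T @ B[j] / sigma2
                b += B[j].T @ (r[:, j] @ D) / sigma2
            w = np.linalg.solve(A, b)
        return w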

The Outlier Problem
The mixture-of-Gaussians noise model fails when gross errors (outliers) are present.
[Figure: well-segmented vs. badly segmented input, with true data points and outliers marked.]

Reverse Framework: Direction of Generation, From Model to Data
– Model H_i (uniform prior)
– Shape parameter w (prior distribution of w: multivariate Gaussian)
– Regularization parameter (uniform prior)
– Stroke width parameter (uniform prior)
– Data D (likelihood function of w: mixture of Gaussians)
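Read as a generative chain, this corresponds to a factorization of roughly the following form; the symbols lambda and sigma for the regularization and stroke-width parameters are my own, since the slide's symbols did not survive the transcript:

P(D, w \mid H_i, \lambda, \sigma) \;=\; \underbrace{P(D \mid w, \sigma, H_i)}_{\text{mixture of Gaussians}} \; \underbrace{P(w \mid \lambda, H_i)}_{\text{multivariate Gaussian}}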

A Dual View of Generativity
[Figures: the sub-part problem and the outlier problem.]

Forward Framework: Direction of Generation, From Data to Model
– Data D (uniform prior)
– Model H_i
– Shape parameter w
– Regularization parameter (uniform prior)
– Model localization parameter (uniform prior)
– Distributions: multivariate Gaussian; mixture of Gaussians (each data point is a Gaussian center)

New Criterion Function
– Sub-data mismatch criterion: negative log of a product of mixture-of-Gaussians densities, with the roles of model and data points exchanged relative to the old data mismatch criterion.
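An assumed form consistent with the slide's description (each data point serves as a Gaussian center and each model point is evaluated under the resulting mixture; N is the number of data points and beta the localization scale, my notation):

E_{\mathrm{sub\text{-}mis}}(w) = -\sum_{j} \log \frac{1}{N} \sum_{i=1}^{N} \mathcal{N}\!\left(m_j(w) \mid d_i, \beta^{2} I\right)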

Forward Matching
– Optimal estimates {w*, A*, T*, and the remaining parameters} are obtained by maximizing a posterior that combines the old model parameter prior with model parameters generated by the data.
– Again, the EM algorithm is used.

Frameworks Comparison
– Forward framework: outlier problem solved.
– Reverse framework: sub-part problem solved.

Bidirectional Matching Algorithm
A matching algorithm is proposed that possesses the advantages of both frameworks. The underlying idea is to obtain a correspondence between the model and the data such that the model looks like the data AND vice versa, i.e., the data mismatch measures for the two frameworks should both be small.

Bidirectional Matching Algorithm (flowchart)
1. Initialization by Chamfer matching.
2. Forward matching.
3. Compute the data mismatch measures for the two frameworks, E_mis and E_sub-mis.
4. Reverse matching.
5. If E_mis > E_sub-mis, increase a control parameter multiplicatively and iterate.
6. Check for convergence; stop when converged, otherwise repeat.
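A structural sketch of the loop above. All helper functions (chamfer_init, forward_match, reverse_match, data_mismatch, sub_data_mismatch) are hypothetical placeholders for the corresponding boxes in the flowchart, and the parameter update and convergence test are assumptions, since the transcript does not preserve them exactly.

    # Structural sketch only (hypothetical helpers; not the authors' implementation).
    def bidirectional_match(model, data, max_iter=20, tol=1e-3, growth=0.1):
        w = chamfer_init(model, data)                 # hypothetical: coarse initialization
        weight = 1.0                                  # assumed control/weighting parameter
        prev = float('inf')
        for _ in range(max_iter):
            w = forward_match(model, data, w, weight)     # data -> model direction
            e_mis = data_mismatch(model, data, w)         # reverse-framework measure
            e_sub = sub_data_mismatch(model, data, w)     # forward-framework measure
            w = reverse_match(model, data, w, weight)     # model -> data direction
            if e_mis > e_sub:                         # data not yet well explained by the model
                weight *= (1.0 + growth)              # assumed multiplicative increase
            total = e_mis + e_sub
            if abs(prev - total) < tol:               # simple convergence test (assumption)
                break
            prev = total
        return w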

Experiment (I)
[Figure: example matching results using forward, reverse, and bidirectional matching.]

Experiment (I)
– Task: extract the leftmost characters from handwritten words.
– Test set: CEDAR database.
– Model initialization by Chamfer matching.
– * Results are obtained by visual checking.

Experiment (II)
– Task: retrieve handwritten words whose leftmost character is similar to an input shape query.
– Test set: CEDAR database, 100 handwritten city name images.
– [Figure: the query set.]

Experiment (II): Best-N Approach
– Recall = 59%, Precision = 43%, number of candidates = 10.

Experiment (II): Evidence Thresholding
– Recall = 65%, Precision = 45%, average number of candidates = 12.7.
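For comparison with the Best-N approach above, a small illustrative sketch of the two retrieval rules; the evidence scores and the threshold are placeholders, not values from the paper.

    # Illustrative only: scored_words is a list of (word_image_id, evidence) pairs.
    def best_n(scored_words, n=10):
        """Return the n candidates with the highest matching evidence (Best-N approach)."""
        return sorted(scored_words, key=lambda sw: sw[1], reverse=True)[:n]

    def evidence_thresholding(scored_words, threshold):
        """Return every candidate whose evidence exceeds the threshold, so the
        number of candidates varies per query (12.7 on average in the experiment)."""
        return [sw for sw in scored_words if sw[1] > threshold]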