Non-metric affinity propagation for unsupervised image categorization Delbert Dueck and Brendan J. Frey ICCV 2007.

Presentation transcript:

Non-metric affinity propagation for unsupervised image categorization Delbert Dueck and Brendan J. Frey ICCV 2007

Outline
1. Introduction
2. Comparison of the NIPS (2006) and Science (2007) Algorithms
3. Unsupervised Categorization of Olivetti Face Images
4. Unsupervised Categorization of Caltech101 Images Using SIFT Features
5. Conclusions

Introduction
- Many vision tasks
  – produce as output a categorization of input features
  – require unsupervised categorization of input features as a preprocessing step
- A powerful approach to representing image categories is to identify exemplars, which
  1. capture high-order statistics of the data
  2. can be represented efficiently as pointers into the training data

Introduction
- N training cases indexed 1, …, N
- Denoting the index of the exemplar representing training case $i$ by $c_i$, and the similarity between training case $i$ and exemplar $c_i$ by $s(i, c_i)$, the fitness function is
  $S(c) = \sum_{i=1}^{N} s(i, c_i)$
- An example of a metric similarity: $s(i, k) = -\lVert x_i - x_k \rVert^2$
- If training case $k$ is an exemplar ($c_k = k$), the self-similarity $s(k, k)$ is not computed in the same way as $s(i, k)$ for $i \neq k$; it acts as a preference for $k$ to become an exemplar

Introduction
- Maximize $S(c)$ with respect to the assignments $c = (c_1, \ldots, c_N)$, subject to the constraint that $c_{c_i} = c_i$ for all $i$ (every point chosen as an exemplar must choose itself)
- This problem is NP-hard [12]
- Unlike exemplar methods that assume a metric space (e.g., [10]), the input similarities here need not be metric (i.e., need not be symmetric or satisfy the triangle inequality)

[12] M. Charikar, S. Guha, É. Tardos, D. B. Shmoys. A constant-factor approximation algorithm for the k-median problem. J. Computer and System Sciences 65, 129.
[10] K. Toyama, A. Blake. Probabilistic tracking with exemplars in a metric space. Int. J. of Computer Vision 48(1), 9–19.
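For reference, [15] folds this exemplar-consistency constraint into the objective as a penalty term; the following display is a reconstruction from that paper and does not appear on the slide:

$\mathcal{S}(c) = \sum_{i=1}^{N} s(i, c_i) + \sum_{k=1}^{N} \delta_k(c), \qquad \delta_k(c) = \begin{cases} -\infty & \text{if } c_k \neq k \text{ but } \exists\, i : c_i = k \\ 0 & \text{otherwise} \end{cases}$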

Introduction
- The affinity propagation algorithm [15] has been applied to
  – clustering face images using Euclidean distance
  – finding genes using microarray data
  – airline routing
- Can it be used with non-metric measures of similarity?
- Compare [15] & [18]

[15] B. J. Frey, D. Dueck. Clustering by passing messages between data points. Science 315, 972–976, 2007.
[18] B. J. Frey, D. Dueck. Mixture modelling by affinity propagation. In Advances in Neural Information Processing Systems 18, MIT Press, 2006.

Introduction
- Affinity propagation
  – "Responsibility" r(i, k), sent from data point i to candidate exemplar point k, reflects the accumulated evidence for how well-suited point k is to serve as the exemplar for point i
  – "Availability" a(i, k), sent from candidate exemplar point k to data point i, reflects the accumulated evidence for how appropriate it would be for point i to choose point k as its exemplar
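For convenience, the message-update equations published in [15] (not printed on this slide) are:

$r(i,k) \leftarrow s(i,k) - \max_{k' \neq k} \left[ a(i,k') + s(i,k') \right]$

$a(i,k) \leftarrow \min\!\left(0,\; r(k,k) + \sum_{i' \notin \{i,k\}} \max\!\left(0, r(i',k)\right)\right) \quad \text{for } i \neq k$

$a(k,k) \leftarrow \sum_{i' \neq k} \max\!\left(0, r(i',k)\right)$

After convergence, each point is assigned as $c_i = \arg\max_k \left[ a(i,k) + r(i,k) \right]$; in practice the updates are damped to avoid oscillations.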

Introduction [figure not transcribed]

Outline
1. Introduction
2. Comparison of the NIPS (2006) and Science (2007) Algorithms
3. Unsupervised Categorization of Olivetti Face Images
4. Unsupervised Categorization of Caltech101 Images Using SIFT Features
5. Conclusions

Comparison of the NIPS (2006) and Science (2007) Algorithms
- The NIPS (2006) algorithm disallows singleton clusters
- To compare the two algorithms:
  – clustering patches taken from an image [18]
  – a tiling of 24 × 24 non-overlapping patches
  – translation-invariant similarities were computed by comparing smaller 16 × 16 windows
  – similarity measure: the lowest squared error between windows (over all possible translations)
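A minimal NumPy sketch of one plausible reading of this similarity (not the authors' code): every 16 × 16 window of one 24 × 24 patch is compared against every 16 × 16 window of the other, and the similarity is the negative of the lowest squared error.

```python
import numpy as np

def patch_similarity(patch_i, patch_k, win=16):
    """Translation-invariant similarity between two equal-sized patches:
    negative of the minimum squared error over all pairs of win x win
    windows (brute force; fine for 24x24 patches)."""
    P = patch_i.shape[0]
    offsets = range(P - win + 1)
    best = np.inf
    for yi in offsets:
        for xi in offsets:
            wi = patch_i[yi:yi + win, xi:xi + win]
            for yk in offsets:
                for xk in offsets:
                    wk = patch_k[yk:yk + win, xk:xk + win]
                    best = min(best, np.sum((wi - wk) ** 2))
    return -best  # higher value = more similar
```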

Comparison of the NIPS (2006) and Science (2007) Algorithms [results figure; 100,000 runs of k-centers clustering for each K]

Outline
1. Introduction
2. Comparison of the NIPS (2006) and Science (2007) Algorithms
3. Unsupervised Categorization of Olivetti Face Images
4. Unsupervised Categorization of Caltech101 Images Using SIFT Features
5. Conclusions

Unsupervised Categorization of Olivetti Face Images
- Olivetti face database
  – ten 64 × 64 grey-scale images of each of 40 individuals
  – extracted a centered 50 × 50 region
  – normalized the pixel intensities
- To examine the effect of a wider range of image variation for each individual:
  – extracted the images of 10 individuals and applied 3 in-plane rotations and 3 scalings, producing a data set of 900 images
- All preferences s(k, k) were set equal to the same common value
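A hedged sketch of this preparation step using SciPy; the specific rotation angles and scale factors below are placeholders, not values from the paper.

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def prepare_faces(faces, angles=(-10, 0, 10), scales=(0.9, 1.0, 1.1), crop=50):
    """Apply 3 in-plane rotations and 3 scalings to each 64x64 face, crop the
    centered crop x crop region, and normalize pixel intensities."""
    out = []
    for face in faces:                       # faces: (n, 64, 64) array
        for a in angles:
            for s in scales:
                img = rotate(face, a, reshape=False, mode="nearest")
                img = zoom(img, s, order=1)
                H, W = img.shape
                y0, x0 = (H - crop) // 2, (W - crop) // 2
                patch = img[y0:y0 + crop, x0:x0 + crop]
                patch = (patch - patch.mean()) / (patch.std() + 1e-8)
                out.append(patch)
    return np.stack(out)                     # 10 people x 10 images x 9 -> 900
```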

Unsupervised Categorization of Olivetti Face Images
- Performance on squared error
- Performance on unsupervised image classification
- Performance using non-metric similarities

Performance on squared error
- Used the 900 images including rotations and scales
- Similarity between image $i$ and image $k$ set to the negative of the sum of squared pixel differences
- Ran 10,000 runs of k-centers clustering
- Defined the baseline error to be the 1st percentile of the error found by the 10,000 runs of k-centers clustering
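A minimal sketch of this setup, assuming the 900 images are available as a NumPy array and using scikit-learn's AffinityPropagation as a stand-in for the authors' implementation:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def negative_ssd_similarities(images):
    """s(i, k) = negative sum of squared pixel differences between images."""
    X = images.reshape(len(images), -1).astype(np.float64)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # ||a - b||^2 expansion
    return -d2

# `images` is a hypothetical (900, 50, 50) array of the prepared faces
S = negative_ssd_similarities(images)
ap = AffinityPropagation(affinity="precomputed",
                         preference=np.median(S))    # common preference for all k
labels = ap.fit_predict(S)
exemplars = ap.cluster_centers_indices_
```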

Performance on squared error: compared with
1. the best of one million runs of k-centers clustering
2. k-centers clustering initialized by placing centers uniformly along the first principal component of the data
3. the best quantized output of 10 runs of the EM algorithm applied to isotropic mixtures of Gaussians
4. hierarchical agglomerative clustering using the similarities to pick the best new exemplar at each agglomeration step
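A sketch of the k-centers baseline used in these comparisons, assuming only a precomputed similarity matrix S (an illustrative implementation, not the authors' code):

```python
import numpy as np

def k_centers(S, K, n_iter=100, seed=None):
    """Alternate between assigning each point to its most similar exemplar and
    re-picking each cluster's exemplar as the member most similar to the rest."""
    rng = np.random.default_rng(seed)
    N = S.shape[0]
    exemplars = rng.choice(N, size=K, replace=False)
    for _ in range(n_iter):
        assign = exemplars[np.argmax(S[:, exemplars], axis=1)]
        assign[exemplars] = exemplars            # exemplars represent themselves
        new_exemplars = []
        for e in exemplars:
            members = np.flatnonzero(assign == e)
            sub = S[np.ix_(members, members)]
            new_exemplars.append(members[np.argmax(sub.sum(axis=0))])
        new_exemplars = np.array(new_exemplars)
        if np.array_equal(np.sort(new_exemplars), np.sort(exemplars)):
            break                                # converged
        exemplars = new_exemplars
    return exemplars
```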

Performance on squared error [results figure not transcribed]

Performance on unsupervised image classification
Two approaches to measuring the unsupervised classification error:
1. Each learned category is associated with the true category that accounts for the largest number of its training cases; the error is the fraction of images not belonging to that category
2. "Rate of true association": fraction of image pairs from the same true category that are placed in the same learned category; "rate of false association": fraction of image pairs from different true categories that are placed in the same learned category
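A sketch of both measures, assuming integer-encoded true and learned labels (illustrative only; the slide's definitions are abbreviated):

```python
import numpy as np
from itertools import combinations

def majority_map_error(true_labels, learned_labels):
    """Approach 1: map each learned category to its majority true category
    and report the fraction of misassigned images."""
    true_labels = np.asarray(true_labels)
    learned_labels = np.asarray(learned_labels)
    correct = 0
    for c in np.unique(learned_labels):
        members = true_labels[learned_labels == c]
        correct += np.bincount(members).max()
    return 1.0 - correct / len(true_labels)

def association_rates(true_labels, learned_labels):
    """Approach 2: rate of true association and rate of false association."""
    same_total = same_together = diff_total = diff_together = 0
    for i, j in combinations(range(len(true_labels)), 2):
        together = learned_labels[i] == learned_labels[j]
        if true_labels[i] == true_labels[j]:
            same_total += 1
            same_together += together
        else:
            diff_total += 1
            diff_together += together
    return same_together / same_total, diff_together / diff_total
```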

Performance on unsupervised image classification [results figure not transcribed]

Performance using non-metric similarities
- When comparing two face images, Euclidean distance ignores the fact that certain facial features may appear in different positions
- Making the similarity non-metric can achieve higher classification rates

Performance using non-metric similarities
- Previous (metric) similarity: $s(i, k) = -\lVert x_i - x_k \rVert^2$
- Non-metric similarity: a window of a fixed size is taken out of the center of one image and compared against same-size windows at all positions in the other image; $s(i, k)$ is the negative of the lowest squared error over those positions
- Because the central window is taken from only one of the two images, the similarity is not symmetric, hence non-metric
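A hedged NumPy sketch of this asymmetric comparison (the window size is a placeholder; the paper's exact value is not on the slide):

```python
import numpy as np

def nonmetric_similarity(img_i, img_k, win=40):
    """Take a window from the center of img_i and find the best-matching
    (lowest squared error) same-size window anywhere in img_k."""
    H, W = img_i.shape
    y0, x0 = (H - win) // 2, (W - win) // 2
    center = img_i[y0:y0 + win, x0:x0 + win]
    best = np.inf
    for y in range(H - win + 1):
        for x in range(W - win + 1):
            err = np.sum((center - img_k[y:y + win, x:x + win]) ** 2)
            best = min(best, err)
    return -best  # s(i, k) != s(k, i) in general, so the similarity is non-metric
```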

Performance using non-metric similarities [results figure not transcribed]

Outline
1. Introduction
2. Comparison of the NIPS (2006) and Science (2007) Algorithms
3. Unsupervised Categorization of Olivetti Face Images
4. Unsupervised Categorization of Caltech101 Images Using SIFT Features
5. Conclusions

Unsupervised Categorization of Caltech101 Images Using SIFT Features
- Caltech101 image dataset
- SIFT feature matching
  – For each local feature from the first image, the nearest and second-nearest features are found in the second image (Euclidean distance)
  – If the ratio of the nearest distance to the second-nearest distance is less than 0.8, the match is considered significant (following Lowe's criterion)

Unsupervised Categorization of Caltech101 Images Using SIFT Features
- Similarity $s(i, k)$: the number of significant feature matches found when comparing image $i$ with image $k$
- Selected 20 of the 101 classes
  – faces, motorbikes, binocular, brain, camera, garfield, pagoda, snoopy, stapler, stop sign, …
  – 1230 images in total
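A sketch of this similarity using OpenCV's SIFT implementation as a stand-in for the original feature extractor; the ratio threshold follows the slide, everything else is an assumption.

```python
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def sift_match_count(img_i, img_k, ratio=0.8):
    """s(i, k) = number of significant SIFT matches from image i to image k,
    using the nearest / second-nearest distance ratio test."""
    _, des_i = sift.detectAndCompute(img_i, None)
    _, des_k = sift.detectAndCompute(img_k, None)
    if des_i is None or des_k is None:
        return 0
    matches = matcher.knnMatch(des_i, des_k, k=2)
    count = 0
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            count += 1
    return count
```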

An example of a category learned by affinity propagation [figure not transcribed]

Unsupervised Categorization of Caltech101 Images Using SIFT Features [results figure not transcribed]

Outline
1. Introduction
2. Comparison of the NIPS (2006) and Science (2007) Algorithms
3. Unsupervised Categorization of Olivetti Face Images
4. Unsupervised Categorization of Caltech101 Images Using SIFT Features
5. Conclusions

Conclusions
- Affinity propagation can be used to achieve high classification rates
- Using non-metric similarity functions increases classification rates