An Invariant Large Margin Nearest Neighbour Classifier
Aim: To learn a distance metric for invariant nearest neighbour classification. Results: matching faces from TV video.

Nearest Neighbour Classifier (NN)
Given labelled training data from multiple classes, a test point is classified by finding its nearest neighbours. Typically the Euclidean distance is used.

Large Margin Nearest Neighbour (LMNN) [Weinberger, Blitzer, Saul, NIPS 2005]
Learn a distance d_ij = (x_i - x_j)^T L^T L (x_i - x_j) = (x_i - x_j)^T M (x_i - x_j), with M = L^T L ⪰ 0 (positive semidefinite), that pulls same-class points closer together and pushes different-class points away:

  min ∑_ij d_ij + ∑_ijk e_ijk
  s.t. (x_i - x_k)^T M (x_i - x_k) - (x_i - x_j)^T M (x_i - x_j) ≥ 1 - e_ijk,
       e_ijk ≥ 0,  M ⪰ 0,

where j indexes same-class neighbours of x_i and k indexes points of other classes. This is a semidefinite program (SDP), so a globally optimal M (and hence L) is obtained.

Drawbacks of LMNN
- Overfitting: O(D^2) parameters. Current fix: use a rank-deficient L, which makes the problem non-convex.
- No invariance to transformations. Current fix: add synthetically transformed data, which is inefficient and inaccurate (a finite set of transformed points instead of the full transformation trajectory).

Polynomial Transformations
Polynomial transformations are commonly used in computer vision, e.g. Euclidean, similarity and affine transformations. General form: T(x, θ) = X Θ, where Θ is the vector of monomials in the transformation parameters θ.

2D rotation example. With the Taylor series approximations cos θ ≈ 1 - θ²/2 and sin θ ≈ θ - θ³/6, the rotation of a point (a, b) becomes

  [ cos θ   sin θ ] [a]    ≈  [ a    b   -a/2   -b/6 ] [ 1  ]
  [ -sin θ  cos θ ] [b]       [ b   -a   -b/2    a/6 ] [ θ  ]
                                                       [ θ² ]
                                                       [ θ³ ]

A Property of Polynomial Transformations
As θ varies, the transformed point T(x, θ) (and its image L T(x, θ) under the learnt map) traces a polynomial trajectory.

Invariant LMNN (ILMNN)
Minimize the maximum distance between same-class trajectories; maximize the minimum distance between different-class trajectories.

Distance between Polynomial Trajectories
The distance D_ij(θ₁, θ₂) between two polynomial trajectories is a polynomial in the transformation parameters; its non-negativity can be certified by a sum of squares of polynomials, which is SD-representable [Lasserre, 2001]: D_ij is non-negative if and only if an associated matrix P' ⪰ 0. SD-representability of segments extends this to bounded parameters -T ≤ θ ≤ T.

Results: Matching Faces from TV Video
11 characters, 24,244 faces. Invariance to changes in the position of features, modelled by a Euclidean transformation with -5° ≤ θ ≤ 5°, -3 ≤ t_x ≤ 3 pixels, -3 ≤ t_y ≤ 3 pixels.

First experiment (Exp1): data randomly permuted; train/val/test split 30/30/40; suitable for NN.
Second experiment (Exp2): no random permutation; train/val/test split 30/30/40; not so suitable for NN.

[Table: classification accuracy of kNN-E, L2-LMNN, D-LMNN, DD-LMNN, L2-ILMNN, D-ILMNN, DD-ILMNN, M-SVM and SVM-KNN on Exp1 and Exp2]
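The learnt distance can be made concrete with a minimal NumPy sketch (my own illustration, not the authors' code; the points and the matrix L below are made-up examples). Since M = L^T L, the learnt distance d_ij equals the squared Euclidean distance between the mapped points Lx_i and Lx_j, so learning L can change which training point is nearest:

```python
import numpy as np

def learnt_dist_sq(xi, xj, L):
    """Squared learnt distance d_ij = (xi - xj)^T M (xi - xj), M = L^T L."""
    diff = xi - xj
    return diff @ (L.T @ L) @ diff   # equals ||L @ diff||^2

def nn_classify(x, X_train, y_train, L):
    """1-NN classification under the learnt metric."""
    Z = (X_train - x) @ L.T          # row i is (L @ (x_i - x))^T
    d = np.einsum('ij,ij->i', Z, Z)  # squared distances to each training point
    return y_train[int(np.argmin(d))]

X = np.array([[0.0, 5.0],    # class 0
              [3.0, 0.0]])   # class 1
y = np.array([0, 1])
x = np.array([0.0, 0.0])

# Euclidean metric (L = I): the class-1 point is nearer (9 < 25).
print(nn_classify(x, X, y, np.eye(2)))            # 1
# A diagonal L that downweights the second feature flips the decision.
print(nn_classify(x, X, y, np.diag([1.0, 0.1])))  # 0
```

The diagonal L here corresponds to the D-LMNN setting, where only a diagonal M is learnt.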
Learnt Distance over Trajectories
Minimizing the maximum learnt distance M_ij between same-class trajectories directly is non-convex. Approximation: minimize ∑_ij δ_ij d_ij, where the weights δ_ij are computed from the maximum and minimum trajectory distances M_ij, m_ij and the point distance d_ij.

Convex SDP

  min ∑_ijk e_ijk
  s.t. D_ik(θ₁, θ₂) - δ_ij d_ij ≥ 1 - e_ijk,  e_ijk ≥ 0,
       P_ijk ⪰ 0 (SD-representability),

which pushes the distance D_ik(θ₁, θ₂) to each different-class trajectory beyond the weighted same-class distance by a unit margin.

Our Contributions
- Regularization of the parameters L or M: prevents overfitting while retaining convexity.
- Adding invariance using polynomial transformations: overcomes the above drawbacks of LMNN while preserving convexity (P ⪰ 0).

Regularization (prevent overfitting, retain convexity)
- L2-LMNN: minimize the L2 norm of the parameter L, i.e. min ∑_i M(i,i).
- D-LMNN: learn a diagonal L, hence a diagonal M: M(i,j) = 0 for i ≠ j.
- DD-LMNN: learn a diagonally dominant M: min ∑_{i≠j} |M(i,j)|.

Timings

  Method      Train    Test
  kNN-E       -        62.2 s
  L2-LMNN     4 h      62.2 s
  D-LMNN      1 h      53.2 s
  DD-LMNN     2 h      50.5 s
  L2-ILMNN    24 h     62.2 s
  D-ILMNN     8 h      48.2 s
  DD-ILMNN    24 h     51.9 s
  M-SVM       300 s    446.6 s
  SVM-KNN

[Figures: illustrations comparing the Euclidean and learnt distances; precision-recall curves (true positives) for Exp1 and Exp2]

M. Pawan Kumar, Philip H.S. Torr, Andrew Zisserman
Vision Group, Oxford Brookes University; Visual Geometry Group, Oxford University
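The 2D rotation example can be checked numerically. The sketch below (function and variable names are my own) verifies that the third-order Taylor coefficient matrix applied to the monomial vector (1, θ, θ², θ³) reproduces the exact rotation to high accuracy over the ±5° range used in the experiments:

```python
import numpy as np

def rotate(a, b, theta):
    """Exact rotation [[cos t, sin t], [-sin t, cos t]] applied to (a, b)."""
    return np.array([a * np.cos(theta) + b * np.sin(theta),
                     -a * np.sin(theta) + b * np.cos(theta)])

def poly_rotate(a, b, theta):
    """Polynomial form T(x, theta) = X @ Theta, from the third-order
    Taylor expansions cos t ~ 1 - t^2/2 and sin t ~ t - t^3/6."""
    X = np.array([[a,  b, -a / 2, -b / 6],
                  [b, -a, -b / 2,  a / 6]])
    Theta = np.array([1.0, theta, theta**2, theta**3])
    return X @ Theta

theta = np.deg2rad(5.0)  # edge of the -5° to 5° range in the experiments
err = np.max(np.abs(rotate(1.0, 2.0, theta) - poly_rotate(1.0, 2.0, theta)))
print(err)  # small for angles in this range
```

Because poly_rotate is linear in the monomial vector Θ, the transformed point traces a polynomial trajectory as θ varies, which is what makes the trajectory distances above polynomials amenable to the SD-representability argument.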