Venus. Classification. Faces – Different. Faces – Same.


Venus

Classification

Faces – Different

Faces -- Same

Lighting affects appearance

Three-point alignment

Object Alignment. Given three model points P1, P2, P3 and three corresponding image points p1, p2, p3, there is a unique transformation (rotation, translation, scale) that aligns the model with the image: π(sR·Pi + d) = pi, where π is the orthographic projection, R a rotation, d a translation, and s a scale factor.

Alignment -- comments. The projection is orthographic (combined with scaling). The three points are required to be non-collinear. The transformation is determined up to a reflection of the points about the image plane and a translation in depth.

Proof of the 3-point Alignment: The three 3-D points are P1, P2, P3; we can assume that they lie initially in the image plane. In the 2-D image we observe q1, q2, q3. The correspondence P1 -> q1, P2 -> q2, P3 -> q3 defines a unique linear transformation of the plane, L, represented by a 2x2 matrix, and we can easily recover it: fix the origin at P1 = q1, and the two remaining points supply four linear equations for the four elements of L.

Next choose two orthogonal vectors E1 and E2 of equal length in the original plane of P1, P2, P3, and compute E1' = L(E1), E2' = L(E2). We seek a scaling s and rotation R such that the projection of sR(E1) is E1' and the projection of sR(E2) is E2'. Let V1 = sR(E1) and V2 = sR(E2) (without the projection). V1 is E1' plus a depth component, that is, V1 = E1' + c1·z, where z is the unit vector in the z direction; similarly V2 = E2' + c2·z. We wish to recover c1 and c2; this determines the transformation between the points (and shows that it is unique up to reflection).

Because E1 and E2 are orthogonal and sR preserves angles, V1·V2 = 0:
(E1' + c1 z)·(E2' + c2 z) = 0, therefore c1 c2 = -(E1'·E2').
The quantity -(E1'·E2') is measurable in the image; call it c12, so c1 c2 = c12.

Because |E1| = |E2| and sR scales all lengths equally, |V1| = |V2|:
|E1'|^2 + c1^2 = |E2'|^2 + c2^2, therefore c1^2 - c2^2 = |E2'|^2 - |E1'|^2.
The right-hand side is again measurable in the image; call it k12.

The two equations for c1, c2,
c1 c2 = c12
c1^2 - c2^2 = k12,
have a unique solution up to sign. One way of seeing this is to set the complex number Z = c1 + i·c2. Then Z^2 = (c1^2 - c2^2) + 2i·c1 c2 = k12 + 2i·c12, so Z^2 is measurable. Taking the square root gives Z, and therefore c1 and c2. There are exactly two roots, +Z and -Z, giving the two mirror-reflection solutions.
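The depth-recovery step of this proof can be checked numerically. A minimal pure-Python sketch, in which the scale, rotation angles, and test vectors are invented for the demonstration: it builds V1 = sR(E1) and V2 = sR(E2), keeps only their image projections E1', E2', and recovers the depth components c1, c2 from Z^2 = k12 + 2i·c12.

```python
import cmath
import math

def rotation(ax, ay):
    """Rotation about the x-axis by ax, then about the y-axis by ay (Ry*Rx)."""
    cx, sx = math.cos(ax), math.sin(ax)
    cy, sy = math.cos(ay), math.sin(ay)
    return [
        [cy, sy * sx, sy * cx],
        [0.0, cx, -sx],
        [-sy, cy * sx, cy * cx],
    ]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def recover_depths(E1p, E2p):
    """Given image vectors E1', E2' (projections of two orthogonal, equal-length
    vectors), return the two mirror-symmetric depth pairs (c1, c2)."""
    c12 = -(E1p[0] * E2p[0] + E1p[1] * E2p[1])                  # c1*c2
    k12 = (E2p[0]**2 + E2p[1]**2) - (E1p[0]**2 + E1p[1]**2)    # c1^2 - c2^2
    Z = cmath.sqrt(complex(k12, 2.0 * c12))                     # Z = c1 + i*c2
    return (Z.real, Z.imag), (-Z.real, -Z.imag)

# Ground truth: E1, E2 orthonormal in the model plane, then scaled and rotated.
s = 1.7
R = rotation(0.4, -0.6)
V1 = [s * v for v in matvec(R, [1.0, 0.0, 0.0])]
V2 = [s * v for v in matvec(R, [0.0, 1.0, 0.0])]
E1p, E2p = V1[:2], V2[:2]          # what the orthographic image shows
(c1, c2), (d1, d2) = recover_depths(E1p, E2p)
```

cmath.sqrt returns one of the two roots ±Z, so the recovered pair matches the true depths only up to the mirror-reflection ambiguity noted in the proof.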

Car Recognition

Car Models

Alignment: Cars

Alignment: Unmatch

Face Alignment

Linear Combination of Views

O is a set of object points. I1, I2, I3 are three images of O from different views, and N is a novel view of O. Then N is a linear combination of I1, I2, I3.
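This claim can be verified numerically. A pure-Python sketch under simplifying assumptions: orthographic projection, rotation-only views (translation removed by centering), and invented rotations and points. For brevity it predicts only the x-coordinate of the novel view, for which point coordinates from two model views already suffice; the y-coordinate is handled the same way, and the three-image statement on the slide covers both coordinates at once.

```python
import math

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def solve3(A, b):
    """Cramer's rule for a 3x3 linear system."""
    def det(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    D = det(A)
    xs = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]
        xs.append(det(M) / D)
    return xs

def x_in(R, P):
    """Orthographic x-coordinate of 3-D point P seen under rotation R."""
    return sum(R[0][k] * P[k] for k in range(3))

points = [[1.0, 0.2, 0.5], [-0.3, 1.1, -0.7], [0.6, -0.8, 1.2], [0.9, 0.4, -0.2]]
R2, RN = rot_y(0.5), rot_y(1.1)    # model view 2 and the novel view (view 1 = identity)

# Fit a, b, c from three corresponding points so that xN = a*x1 + b*y1 + c*x2,
# using only image coordinates from the model views.
A = [[P[0], P[1], x_in(R2, P)] for P in points[:3]]
b = [x_in(RN, P) for P in points[:3]]
a_, b_, c_ = solve3(A, b)

# Predict the novel-view x-coordinate of the held-out fourth point.
P = points[3]
pred = a_ * P[0] + b_ * P[1] + c_ * x_in(R2, P)
actual = x_in(RN, P)
```

The same three coefficients work for every point of the object, because the novel view's projection row is a fixed linear combination of the model views' projection rows.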

Car Recognition

VW – SAAB

LC – Car Images

Linear Combination: Faces

Classification

Structural descriptions

RBC

Structural Description [Figure: parts G1, G2, G3, G4 linked by relations Above, Right-of, Left-of, Touch]

Fragment-based Representation

Mutual Information. Entropy of a binary class variable C: H(C) = -[P(C=1) log P(C=1) + P(C=0) log P(C=0)]

Mutual information: I(C;F) = H(C) - H(C|F), where H(C|F) averages the entropy of C over the two cases F=1 and F=0. [Figure: H(C) compared with H(C) when F=1 and H(C) when F=0]
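These two quantities are simple to compute directly. A minimal pure-Python sketch, with made-up joint distributions for illustration:

```python
import math

def entropy(ps):
    """Shannon entropy in bits of a distribution given as a list of probabilities."""
    return -sum(p * math.log2(p) for p in ps if p > 0.0)

def mutual_information(joint):
    """I(C;F) = H(C) - H(C|F) for binary C and F, with joint[f][c] = P(F=f, C=c)."""
    pc = [joint[0][c] + joint[1][c] for c in (0, 1)]
    pf = [sum(joint[f]) for f in (0, 1)]
    h_c_given_f = sum(pf[f] * entropy([joint[f][c] / pf[f] for c in (0, 1)])
                      for f in (0, 1) if pf[f] > 0.0)
    return entropy(pc) - h_c_given_f

# When F is independent of C it carries no information about the class;
# when F copies C exactly it carries all of H(C) = 1 bit.
i_indep = mutual_information([[0.25, 0.25], [0.25, 0.25]])
i_copy = mutual_information([[0.5, 0.0], [0.0, 0.5]])
```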

Selecting Fragments

Fragment Selection. For a set of training images:
–Generate candidate fragments
–Measure p(F|C) and p(F|NC)
–Compute the mutual information I(C;F)
–Select the optimal fragment
After k fragments have been selected: add the fragment that maximizes the minimal addition in mutual information with respect to each of the first k fragments.
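The selection loop above can be sketched in a few lines of pure Python: pick the fragment with the highest mutual information first, then repeatedly add the candidate maximizing the minimal gain in mutual information relative to each fragment already chosen. The tiny binary dataset is invented for illustration.

```python
import math
from collections import Counter

def mi(samples, cols):
    """Mutual information I(C; F_cols) estimated from (label, features) samples."""
    n = len(samples)
    joint = Counter((c, tuple(f[i] for i in cols)) for c, f in samples)
    pc = Counter(c for c, _ in samples)
    pf = Counter(tuple(f[i] for i in cols) for _, f in samples)
    return sum((k / n) * math.log2((k / n) / ((pc[c] / n) * (pf[fv] / n)))
               for (c, fv), k in joint.items())

def select_fragments(samples, n_features, k):
    # First fragment: highest mutual information with the class.
    chosen = [max(range(n_features), key=lambda j: mi(samples, (j,)))]
    # Next fragments: maximize the minimal addition in mutual information
    # with respect to each fragment already selected.
    while len(chosen) < k:
        best = max((j for j in range(n_features) if j not in chosen),
                   key=lambda j: min(mi(samples, (j, i)) - mi(samples, (i,))
                                     for i in chosen))
        chosen.append(best)
    return chosen

# Invented data: feature 0 is informative, feature 1 merely duplicates it,
# and feature 2 carries complementary information (the class is f0 OR f2).
samples = [
    (0, (0, 0, 0)),
    (1, (0, 0, 1)),
    (1, (1, 1, 0)),
    (1, (1, 1, 1)),
    (1, (1, 1, 0)),
]
chosen = select_fragments(samples, 3, 2)
```

The redundant duplicate (feature 1) is passed over in favor of the complementary feature 2, which is exactly the behavior the max-min criterion is designed to produce.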

Optimal Face Fragments

Face Fragments by Type [Figure: face fragments ranked 1st through 4th by merit and weight]

Low-resolution Car Fragments: Front – Middle – Back

Intermediate Complexity

Fragment ‘Weight’. Likelihood ratio: R(F) = p(F|C) / p(F|NC). Weight of F: w(F) = log [ p(F|C) / p(F|NC) ].

Combining fragments [Figure: fragment detectors D1, D2, …, Dk feed a weighted sum with weights w1, w2, …, wk]

Non-optimal Fragments

Fragments   size   detection   F/A
Optimal     11%       95%        0
Small        3%       97%       30
Large       33%       39%        0

Same total area covered (8 x the object area), on a regular grid.

Training & Test Images
–Frontal faces without distinctive features (Korean: 496, Western: 385)
–Background minimized by cropping
–Training images for extraction: 32 for each class
–Training images for evaluation: 100 for each class
–Test images: 253 Western and 364 Korean

Training – Fragment Extraction

Extracted Fragments [Figure: Western and Korean fragments, each shown with its score and weight]

Classifying novel images: detect fragments, compare the summed weights, and decide Westerner / Korean / Unknown.
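The decision stage can be sketched as follows. The log-likelihood-ratio weight matches the formula on the fragment-weight slide; the margin-based comparison of the two summed weights (to allow an Unknown answer) is an assumption of this sketch, and all probabilities below are invented.

```python
import math

def weight(p_f_given_c, p_f_given_nc):
    """Weight of fragment F: log of the likelihood ratio p(F|C) / p(F|NC)."""
    return math.log2(p_f_given_c / p_f_given_nc)

def classify(det_west, det_kor, w_west, w_kor, margin=1.0):
    """det_* are 0/1 fragment-detector outputs, w_* the fragment weights.
    The margin that triggers an 'Unknown' answer is an invented parameter."""
    s_west = sum(d * w for d, w in zip(det_west, w_west))
    s_kor = sum(d * w for d, w in zip(det_kor, w_kor))
    if s_west - s_kor > margin:
        return "Westerner"
    if s_kor - s_west > margin:
        return "Korean"
    return "Unknown"

# Invented detection probabilities for two fragments per class.
w_west = [weight(0.9, 0.1), weight(0.8, 0.2)]
w_kor = [weight(0.85, 0.15), weight(0.7, 0.3)]
label = classify([1, 1], [0, 0], w_west, w_kor)
```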

Effect of Number of Fragments: 7 fragments give 95% correct classification; 80 fragments give 100%. This reflects the inherent redundancy of the features and a slight violation of the independence assumption.

Comparison with Humans: the algorithm outperformed humans on low-resolution images.

Class examples

Distinctive Features