Lecture 8: Eigenfaces and Shared Features


Lecture 8: Eigenfaces and Shared Features. CAP 5415: Computer Vision, Fall 2006

What I did on Monday

Questions on PS2?

Task: Face Recognition. (Image from “Bayesian Face Recognition” by Moghaddam, Jebara, and Pentland.)

Easiest way to recognize: compute the squared difference between the test image and each of the examples, i.e. a nearest-neighbor classifier.
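As a rough sketch of this baseline (MATLAB, with hypothetical variable names: train is an N-by-M matrix whose columns are raster-scanned training faces, labels holds their identities, and test is an N-by-1 test image vector):

d = sum(bsxfun(@minus, train, test).^2, 1);   % squared difference to every example
[~, idx] = min(d);                            % closest training example
predicted = labels(idx);                      % nearest-neighbor identity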

Easiest way to recognize: problems. It is slow, and it is distracted by things in the image that are not related to faces.

Easiest way to recognize: how can we find a way to represent a face with a few numbers?

Linear Algebra to the Rescue: raster-scan the image into a vector.

Stack Examples into a Matrix: if we have M example faces in the database, we get a matrix in which each column is one face vector.
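A minimal sketch of building this matrix (MATLAB; imgs is assumed to be a cell array of M equally sized grayscale face images):

M = numel(imgs);                     % number of example faces
N = numel(imgs{1});                  % pixels per face
A = zeros(N, M);
for i = 1:M
    A(:, i) = double(imgs{i}(:));    % raster-scan each image into one column
end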

Introducing the SVD. SVD stands for Singular Value Decomposition. The m x n matrix M can be factored as M = U Σ V^T, where U is m x m, Σ is m x n, and V is n x n.

Special Properties of these Matrices: U is unitary; for a real-valued matrix, that means U^T U = I. Σ is diagonal: it is non-zero only along the diagonal, and these non-zero entries are called the singular values. V is also unitary.
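These properties are easy to check numerically (a sketch, using the data matrix A built above; for large images the economy-size svd(A, 'econ') is preferable):

[U, S, V] = svd(A);                             % A = U * S * V'
sing_vals   = diag(S);                          % singular values, largest first
unitary_err = norm(U' * U - eye(size(U, 2)));   % ~0: U is (numerically) unitary
recon_err   = norm(A - U * S * V', 'fro');      % ~0: the factorization reproduces A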

Another way to think of this: Σ V^T is a set of weights that tells us how to add together the columns of U to produce the data. What if there are too many observations in each column?
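For example, any single column of the data can be rebuilt from these weights (a sketch, reusing A, U, S, V from above; the column index j is an arbitrary choice):

W = S * V';                      % column j of W weights the columns of U to give face j
j = 1;
recon = U * W(:, j);             % weighted sum of the columns of U
err = norm(A(:, j) - recon);     % ~0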

Approximating M: modify Σ by setting all but a few of the singular values to zero. Effectively, this uses only some of the columns of U to reconstruct M. It is the same as using a small number of parameters to describe an N-dimensional signal.

Approximating M. Question: if I were going to approximate M using a few columns of U, which few should I use? Answer: find the rows of Σ with the largest singular values and use the corresponding columns of U.
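A sketch of the truncation (k, the number of retained singular values, is an assumed choice):

k = 10;
Ak = U(:, 1:k) * S(1:k, 1:k) * V(:, 1:k)';   % rank-k approximation of A
% Equivalent to zeroing all but the k largest singular values in S.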

Simple Example

Calculate SVD:

>> [u,s,v] = svd([x' y']')
>> u
u =
   -0.6715   -0.7410
   -0.7410    0.6715
>> s(:,1:2)
ans =
    5.5373         0
         0    0.7852

U defines a set of axes. The columns of u in the output above are the directions of these new axes for the example points.

Now, let's reduce these points to one dimension:

>> sp = s;
>> sp(2,2) = 0;
>> nm = u*sp*v';

What have we done? We have used one dimension to describe the points instead of two, and we used the SVD to find the best set of axes. Using the SVD minimizes the Frobenius norm of the difference between M and its approximation ~M; in other words, it minimizes the squared error.
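This is easy to verify on the two-dimensional example (a sketch reusing x, y, s, and nm from the session above): the Frobenius norm of the error equals the singular value that was zeroed out.

M1  = [x' y']';                 % the original 2-by-n data matrix
err = norm(M1 - nm, 'fro');     % Frobenius norm of the approximation error
% err matches s(2,2), about 0.7852, the dropped singular value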

How does this relate to faces? Let's assume that we have images of faces with N pixels each. We don't need all N pixels to represent a face: assume that the space of face images can be represented by a relatively small number of axes, called eigenfaces. From “Probabilistic Visual Learning” by Moghaddam and Pentland.

Why “eigenfaces”? If you subtract the mean, MM^T is (up to a scale factor) the covariance matrix of the data. The matrix U also contains the eigenvectors of MM^T. These eigenvectors are the vectors that maximize the variance of the reconstruction. Same result as the SVD, different motivation. This is called Principal Components Analysis, or PCA. Take Pattern Recognition!
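A minimal sketch of computing eigenfaces this way (MATLAB; A is the N-by-M matrix of face columns from before, and the number of retained eigenfaces k is an assumed choice):

mu = mean(A, 2);                     % the mean face
Ac = bsxfun(@minus, A, mu);          % subtract the mean from every column
[U, S, ~] = svd(Ac, 'econ');         % economy SVD: columns of U are the eigenfaces
k = 50;
eigenfaces = U(:, 1:k);
% Reshape a column of eigenfaces back to image size to display an eigenface.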

Empirical proof that faces lie in a relatively low-dimensional subspace. Figures from “Probabilistic Visual Learning” by Moghaddam and Pentland.

Basic Eigenface Recognition:
1. Find the face.
2. Normalize the image.
3. Project it onto the eigenfaces.
4. Do nearest-neighbor classification.
There are lots of variations that I won't get into.
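A sketch of steps 3 and 4 (MATLAB; eigenfaces, mu, and the centered training matrix Ac are as above, while labels and a cropped, normalized test image test are assumed inputs):

W_train = eigenfaces' * Ac;                        % k-by-M training weights
w_test  = eigenfaces' * (double(test(:)) - mu);    % project the test face onto the eigenfaces
d = sum(bsxfun(@minus, W_train, w_test).^2, 1);    % squared distances in eigenface space
[~, idx] = min(d);
predicted = labels(idx);                           % nearest-neighbor identity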

What makes it harder or easier? From “How Features of the Human Face Affect Recognition: a Statistical Comparison of Three Face Recognition Algorithms” by Givens et al.

Does it work in the real world?

Guessing Gender from Faces. From “Learning Gender with Support Faces” by Moghaddam and Yang.