EigenFaces

(Squared) Variance
A measure of how "spread out" a sequence of numbers is:

$\lambda = \frac{1}{n}\sum_{i=1}^{n}(x_i - \mu)^2, \qquad \mu = \frac{1}{n}\sum_{i=1}^{n} x_i$
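
A minimal numpy check of these two formulas (the sample data here is made up for illustration):

    import numpy as np

    x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # hypothetical data
    mu = x.mean()                     # mu = (1/n) * sum of x_i
    lam = ((x - mu) ** 2).mean()      # lambda = (1/n) * sum of (x_i - mu)^2
    print(mu, lam)                    # 5.0 4.0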

Covariance matrix
A measure of the correlation between pairs of fields in the data.
Example: a data set of size n, where each data element has 3 fields: height, weight, birth date.

Covariance [Collect data from class]

Covariance

$\begin{pmatrix} \lambda_{11} & \lambda_{12} & \cdots & \lambda_{1n} \\ \lambda_{21} & \lambda_{22} & \cdots & \lambda_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_{n1} & \lambda_{n2} & \cdots & \lambda_{nn} \end{pmatrix}, \qquad \lambda_{ij} = \frac{1}{n}\sum_{k=1}^{n}(x_{ik} - \mu_i)(x_{jk} - \mu_j)$

where $x_{ik}$ is the value of feature i in sample k (so the matrix is features x features, and the sum runs over the samples).

Covariance
The diagonal entries are the variances of the individual features.
The off-diagonal entries measure correlation between pairs of features:
High positive == positive correlation (one goes up, the other goes up)
High negative == negative correlation (one goes up, the other goes down)
Near zero == no correlation (unrelated)
[How high is "high" depends on the range of the values]

Covariance
You can calculate it with a matrix:
The raw data matrix is p x q: p features (rows), q samples (columns).
Convert to mean-deviation form: calculate the average sample, then subtract it from every sample.
Multiply MeanDev (a p x q matrix) by its transpose (a q x p matrix).
Multiply by 1/q (one over the number of samples) to get the p x p covariance matrix.
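
A sketch of this recipe in numpy (the variable names and numbers are mine; np.cov with bias=True computes the same thing):

    import numpy as np

    # Raw data matrix: p features (rows) x q samples (columns); numbers are made up.
    X = np.array([[170.0, 160.0, 180.0, 175.0],   # heights
                  [ 70.0,  55.0,  90.0,  80.0],   # weights
                  [ 20.0,  22.0,  19.0,  25.0]])  # ages
    p, q = X.shape

    mean = X.mean(axis=1, keepdims=True)   # the average sample (p x 1)
    dev = X - mean                         # mean-deviation form (p x q)
    cov = (dev @ dev.T) / q                # (p x q)(q x p) / q  ->  p x p covariance

    assert np.allclose(cov, np.cov(X, bias=True))  # matches numpy's built-in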

Covariance [Calculate our covariance matrix]

EigenSystems
An eigensystem of a square matrix A is:
a vector $\vec{v}$ (the eigenvector) and a scalar $\lambda$ (the eigenvalue)
such that $A\vec{v} = \lambda\vec{v}$
(the zero vector isn't an eigenvector).
In general, not all matrices have real eigenvectors, but a symmetric matrix (such as a covariance matrix) always has a full set of real, mutually orthogonal eigenvectors.

EigenSystems and PCA
When you calculate the eigensystem of an n x n covariance matrix you get:
n eigenvectors (each of dimension n)
n matching eigenvalues
The biggest eigenvalue "explains" the largest amount of variance in the data set.
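
Continuing the numpy sketch above: np.linalg.eigh is the solver to use for symmetric matrices like a covariance matrix, and it returns eigenvalues in ascending order, so we reverse to put the biggest first.

    evals, evecs = np.linalg.eigh(cov)   # cov from the covariance sketch above
    order = np.argsort(evals)[::-1]      # sort eigenvalues, biggest first
    evals = evals[order]
    evecs = evecs[:, order]              # column i is the eigenvector for evals[i]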

Example
Say we have a 2D data set:
First eigen-pair: v1 = [0.8, 0.6], λ = 800.0
Second eigen-pair: v2 = [-0.6, 0.8], λ = 100.0
8x as much variance is along v1 as along v2.
v1 and v2 are perpendicular to each other.
v1 and v2 define a new set of basis vectors for this data set.
[Figure: the data cloud with the axes v1 and v2 drawn through it]

Conversions between basis vectors
Let's take one data point; say it is P = [-1.5, 0.4] in "world units".
Project it onto v1 and v2 to get its coordinates relative to the (unit-length) basis vectors v1, v2:

$newCoord = \begin{pmatrix} P \cdot v_1 \\ P \cdot v_2 \end{pmatrix} \approx \begin{pmatrix} -0.96 \\ 1.22 \end{pmatrix}$

To convert back to "world units":

$worldCoord = newCoord_0 \, v_1 + newCoord_1 \, v_2$
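
A quick numpy check of this round trip, using the v1, v2 from the example (exact recovery works because v1 and v2 are orthonormal):

    import numpy as np

    v1 = np.array([0.8, 0.6])
    v2 = np.array([-0.6, 0.8])
    P = np.array([-1.5, 0.4])

    new_coord = np.array([P @ v1, P @ v2])         # project onto each basis vector
    print(new_coord)                               # [-0.96  1.22]

    world = new_coord[0] * v1 + new_coord[1] * v2  # convert back to world units
    print(world)                                   # [-1.5  0.4], recovered exactly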

PCA and compression
Example: n (the number of features) is high (~100), but most of the variance is captured by 3 eigenvectors.
You can throw out the other 97 eigenvectors.
You can then represent most of the information in each sample using just 3 numbers (instead of 100).
For a large data set, the savings can be huge.

EigenFaces
Collect database images:
Subject looking straight ahead, neutral expression, neutral lighting.
Crop: on the top, include all of the eyebrows; on the bottom, include just to the chin; on the sides, include all of the face.
Resize to 32x32 grayscale (a limit of the eigen-solver).
In code, include a way to convert each image to (and from) a VectorN (a flat 1024-element vector).
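
A sketch of the image-to-vector plumbing, assuming Pillow for image I/O (the file names are hypothetical, and "VectorN" here is just a flat numpy array):

    import numpy as np
    from PIL import Image

    SIZE = 32  # 32x32 grayscale, per the database spec above

    def image_to_vector(path):
        """Load, grayscale, resize, and flatten to a 1024-element vector."""
        img = Image.open(path).convert("L").resize((SIZE, SIZE))
        return np.asarray(img, dtype=np.float64).ravel()

    def vector_to_image(vec):
        """Reshape a 1024-element vector back into a 32x32 image for display."""
        arr = np.clip(vec, 0, 255).reshape(SIZE, SIZE).astype(np.uint8)
        return Image.fromarray(arr, mode="L")

    faces = np.stack([image_to_vector(f) for f in ["face0.png", "face1.png"]])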

EigenFaces, cont.
Calculate the average image: just average pixel by pixel (vector element by element).

EigenFaces, cont.
Calculate the covariance matrix.
Calculate its eigensystem.
Keep the eigen-pairs that preserve most of the data variance (98% or so).
Your eigen-database is the 32x32 average image and the (here) 8 32x32 eigenface images.
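
A sketch of these steps, continuing from the faces array above (how many eigen-pairs survive the 98% cut depends on your data; the 8 on the slide is just this database's count):

    mean_face = faces.mean(axis=0)              # the average image, as a vector
    dev = faces - mean_face                     # mean-deviation form (samples x 1024)
    cov = (dev.T @ dev) / len(faces)            # 1024 x 1024 covariance matrix

    evals, evecs = np.linalg.eigh(cov)
    evals, evecs = evals[::-1], evecs[:, ::-1]  # biggest eigenvalue first

    frac = np.cumsum(evals) / evals.sum()       # cumulative fraction of variance
    k = int(np.searchsorted(frac, 0.98)) + 1    # smallest k preserving 98%
    eigenfaces = evecs[:, :k].T                 # k rows, each reshapeable to 32x32

In practice, with q images and q much smaller than 1024, you would compute the eigensystem of the small q x q matrix dev @ dev.T instead (the Turk and Pentland trick), but the direct form above matches the slides.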

Eigenfaces, cont.
Represent each of your faces as a q-value vector (q = # of eigenfaces): subtract the average and project onto the q eigenfaces.
The images shown here are two original images and their 8-value "eigen-coordinates":
[9.08, 187.4, -551.7, -114.4, -328.8, 29.2, -371.9, -108.0]
[1277.0, 150.9, -133.6, 249.3, 338.9, 13.14, 16.8, 3.35]
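
The projection step from this slide, continuing the sketch (each row of coords is one face's q-value eigen-coordinate vector):

    coords = (faces - mean_face) @ eigenfaces.T   # (num_faces x 1024)(1024 x q)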

EigenFaces, cont. (a demonstration of compression)
You can reconstruct a compressed image as follows:
Start with a copy of the average image, X.
For each eigenface, add eigen-coord * eigenface to X.
[Images: the two faces from the last slide, original vs. reconstruction]
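
The reconstruction loop from the slide, in numpy (the vectorized equivalent is in the final comment):

    def reconstruct(coord):
        """Rebuild an approximate face from its eigen-coordinates."""
        X = mean_face.copy()              # start with a copy of the average image
        for c, face in zip(coord, eigenfaces):
            X += c * face                 # add eigen-coord * eigenface
        return X                          # same as: mean_face + coord @ eigenfaces

    approx = vector_to_image(reconstruct(coords[0]))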

EigenFaces, cont.
Facial recognition:
Take a novel image (same size as the database images).
Using the eigenfaces computed earlier (the novel image is usually NOT part of that computation), compute its eigen-coordinates.
Calculate the q-dimensional Euclidean distance (the Pythagorean theorem in q dimensions) between this image's coordinates and each database image's coordinates.
The database image with the smallest distance is your best match.
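
A sketch of the matching step, reusing the helpers above ("novel.png" is a hypothetical file name):

    novel = image_to_vector("novel.png")
    novel_coord = (novel - mean_face) @ eigenfaces.T      # its q eigen-coordinates

    dists = np.linalg.norm(coords - novel_coord, axis=1)  # distance to each database face
    best = int(np.argmin(dists))                          # smallest distance wins
    print("best match: database face", best, "at distance", dists[best])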