FACE RECOGNITION USING LAPLACIANFACES


FACE RECOGNITION USING LAPLACIANFACES Guidance By Mr. S. Mohandoss, Lecturer (Department of Computer Applications) Submitted By Soumya Ranjan Panda Reg. No: 081522101606 (Master of Computer Applications) Dr. M.G.R Educational and Research Institute University

Abstract We propose an appearance-based face recognition method called the Laplacianface approach. Using Locality Preserving Projections (LPP), the face images are mapped into a face subspace for analysis. Unlike Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which effectively see only the Euclidean structure of face space, LPP finds an embedding that preserves local information and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced.

Existing System The main idea of using PCA for face recognition is to express the large 1-D vector of pixels constructed from a 2-D face image in terms of the compact principal components of the feature space. This is called eigenspace projection. The eigenspace is computed by identifying the eigenvectors of the covariance matrix derived from a set of face images (vectors). However, most such algorithms consider only global data patterns during recognition, which does not yield an accurate recognition system. Limitations: it is less accurate, it does not deal with the manifold structure, and it does not deal with biometric characteristics.
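A minimal sketch of the eigenspace (Eigenfaces) projection just described, assuming vectorized face images in the rows of a NumPy array; the image size and the number of retained components are illustrative choices, not values from this project.

```python
import numpy as np

def eigenface_projection(X, n_components=20):
    """Project vectorized face images X (n_images x n_pixels) onto the
    leading eigenvectors of their covariance matrix."""
    mean_face = X.mean(axis=0)
    Xc = X - mean_face                      # center the data
    # Work with the small n x n Gram matrix instead of the huge
    # n_pixels x n_pixels covariance matrix (standard Eigenfaces trick).
    gram = Xc @ Xc.T
    eigvals, eigvecs = np.linalg.eigh(gram) # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    # Map Gram-matrix eigenvectors back to pixel-space eigenfaces.
    eigenfaces = Xc.T @ eigvecs[:, order]
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
    return mean_face, eigenfaces, Xc @ eigenfaces

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    faces = rng.random((40, 32 * 32))       # 40 synthetic 32x32 "faces"
    mean_face, eigenfaces, coords = eigenface_projection(faces)
    print(coords.shape)                     # (40, 20)
```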

Proposed System PCA and LDA aim to preserve the global structure. However, in many real-world applications the local structure is more important. In this section, we describe Locality Preserving Projection (LPP), a new algorithm for learning a locality preserving subspace. The complete derivation and theoretical justification of LPP are given in the original LPP work. LPP seeks to preserve the intrinsic geometry of the data and its local structure. The objective function of LPP is to minimize sum_ij (w^T x_i - w^T x_j)^2 S_ij, where w^T x_i is the one-dimensional projection of x_i and S_ij weights how close x_i and x_j are (the weights are defined in a later module). LPP is a general method for manifold learning. It is obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the manifold. Therefore, though it is still a linear technique, it seems to recover important aspects of the intrinsic nonlinear manifold structure by preserving local structure.
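To make the objective above concrete, here is a small numerical check (on random illustrative data, not the project's) that the weighted sum of squared projection differences equals 2 * w^T X L X^T w with L = D - S, which is why minimizing the LPP objective reduces to an eigenproblem involving XLX^T.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 8, 5
X = rng.random((d, n))                      # data points x_1..x_n as columns
S = rng.random((n, n)); S = (S + S.T) / 2   # symmetric similarity weights
D = np.diag(S.sum(axis=1))                  # diagonal degree matrix
L = D - S                                   # graph Laplacian
w = rng.random(d)                           # an arbitrary projection direction

y = w @ X                                   # projections y_i = w^T x_i
objective = sum(S[i, j] * (y[i] - y[j]) ** 2
                for i in range(n) for j in range(n))
print(np.isclose(objective, 2 * w @ X @ L @ X.T @ w))   # True
```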

Proposed System Based on LPP, we describe our Laplacianfaces method for face representation in a locality preserving subspace. In face analysis and recognition, one is confronted with the difficulty that the matrix XDX^T is sometimes singular. This stems from the fact that the number of images in the training set (n) is often much smaller than the number of pixels in each image (m). In such a case, the rank of XDX^T is at most n, while XDX^T is an m x m matrix, which implies that XDX^T is singular. To overcome this complication, we first project the image set to a PCA subspace so that the resulting matrix XDX^T is nonsingular. A further reason for using PCA as preprocessing is noise reduction. This method, which we call Laplacianfaces, can learn an optimal subspace for face representation and recognition.
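The following tiny NumPy experiment, with made-up sizes, illustrates the singularity argument above: when there are fewer training images than pixels, XDX^T cannot be full rank, while the same product computed in a lower-dimensional PCA subspace is (generically) nonsingular.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 30, 1024                         # 30 images of 1024 pixels each
X = rng.random((m, n))                  # images as columns of X
D = np.diag(rng.random(n) + 0.1)        # positive diagonal weight matrix

M = X @ D @ X.T                         # m x m, but rank <= n  ->  singular
print(np.linalg.matrix_rank(M) <= n)    # True

# Project to a PCA subspace of dimension p < n, then recompute.
p = 25
U, _, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True), full_matrices=False)
W_pca = U[:, :p]                        # m x p PCA basis
Xp = W_pca.T @ X                        # p x n data in the PCA subspace
Mp = Xp @ D @ Xp.T                      # p x p and now full rank
print(np.linalg.matrix_rank(Mp) == p)   # True (generically)
```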

Modules
Pre-processing
PCA projection (Principal Component Analysis)
Constructing the nearest-neighbor graph
Choosing the weights between neighboring nodes
Recognizing the image

Modules Description: Pre-processing In the pre-processing step, a single gray image is taken in 10 different directions, and the points of each gray image are measured in 28 dimensions.

Modules Description: PCA projection (Principal Component Analysis) We project the image set {x_i} into the PCA subspace by throwing away the smallest principal components. In our experiments, we kept 98 percent of the information in the sense of reconstruction error. For simplicity, we still use x to denote the images in the PCA subspace in the following steps. We denote by W_PCA the transformation matrix of PCA.
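A minimal sketch of this PCA-projection step, assuming vectorized images in the rows of a NumPy array and reading "98 percent information" as retaining enough components to explain 98 percent of the total variance (one common interpretation of reconstruction error, assumed here).

```python
import numpy as np

def pca_projection(X, retained=0.98):
    """Return the mean image, the PCA basis W_PCA, and the projected data for
    images X (n_images x n_pixels), keeping `retained` fraction of variance."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2
    # Smallest number of components whose cumulative variance ratio >= retained.
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), retained)) + 1
    W_pca = Vt[:k].T                    # n_pixels x k transformation matrix
    return mean, W_pca, Xc @ W_pca

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    images = rng.random((50, 64 * 64))
    mean, W_pca, projected = pca_projection(images)
    print(W_pca.shape, projected.shape)
```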

Modules Description: Constructing the nearest-neighbor graph Let G denote a graph with n nodes, where the ith node corresponds to the face image x_i. We put an edge between nodes i and j if x_i and x_j are "close," i.e., x_j is among the k nearest neighbors of x_i or x_i is among the k nearest neighbors of x_j. The constructed nearest-neighbor graph is an approximation of the local manifold structure. Note that we do not use the ε-neighborhood to construct the graph, simply because it is often difficult to choose an optimal ε in real-world applications, while the k-nearest-neighbor graph can be constructed more stably. The disadvantage is that the k-nearest-neighbor search increases the computational complexity of the algorithm.
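A sketch of the symmetric k-nearest-neighbor graph construction described above, using plain NumPy; the value of k and the sample points are illustrative.

```python
import numpy as np

def knn_adjacency(X, k=5):
    """Return a boolean adjacency matrix: i and j are connected if either one
    is among the k nearest neighbors of the other (Euclidean distance)."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    sq = np.sum(X ** 2, axis=1)
    dist2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    np.fill_diagonal(dist2, np.inf)           # exclude self-neighbors
    # For every point, mark its k nearest neighbors.
    neighbors = np.argsort(dist2, axis=1)[:, :k]
    A = np.zeros((n, n), dtype=bool)
    rows = np.repeat(np.arange(n), k)
    A[rows, neighbors.ravel()] = True
    return A | A.T                            # symmetrize ("or" rule)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    points = rng.random((20, 10))
    A = knn_adjacency(points, k=3)
    print(A.shape, A.sum())
```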

Modules Description: Choosing the weights between neighboring nodes If nodes i and j are connected, put S_ij = exp(-||x_i - x_j||^2 / t); otherwise S_ij = 0. Let D be the diagonal matrix whose entries are the column (or row, since S is symmetric) sums of S, D_ii = sum_j S_ji, and let L = D - S be the Laplacian matrix. The ith column of the matrix X is x_i. We then solve the generalized eigenvalue problem XLX^T w = λ XDX^T w. Let w_0, w_1, ..., w_{k-1} be its solutions, ordered according to their eigenvalues 0 ≤ λ_0 ≤ λ_1 ≤ ... ≤ λ_{k-1}. These eigenvalues are greater than or equal to zero because the matrices XLX^T and XDX^T are both symmetric and positive semidefinite. Thus, the embedding is x → y = W^T x, with W = W_PCA W_LPP and W_LPP = [w_0, w_1, ..., w_{k-1}], where y is a k-dimensional vector and W is the transformation matrix. This linear mapping best preserves the manifold's estimated intrinsic geometry in a linear sense. The column vectors of W are the so-called Laplacianfaces. This principle is implemented as an unsupervised learning procedure over the training and test data.
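The sketch below ties the reconstructed formulas together: heat-kernel weights on the adjacency graph, the Laplacian L = D - S, and the generalized eigenproblem XLX^T w = λ XDX^T w solved with SciPy. It assumes the data have already been projected to a PCA subspace (columns of X) and that a boolean adjacency matrix is available; t, the number of dimensions, and the data are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def lpp_embedding(X, A, t=1.0, dims=3):
    """X: d x n data matrix (points as columns), A: n x n boolean adjacency.
    Returns the LPP transformation matrix W_LPP (d x dims)."""
    diff = X[:, :, None] - X[:, None, :]          # d x n x n pairwise differences
    S = np.exp(-np.sum(diff ** 2, axis=0) / t)    # heat-kernel weights
    S = S * A                                     # keep weights only on graph edges
    D = np.diag(S.sum(axis=1))
    L = D - S                                     # graph Laplacian
    # Generalized eigenproblem; the smallest eigenvalues give the embedding.
    vals, vecs = eigh(X @ L @ X.T, X @ D @ X.T)
    return vecs[:, :dims]                         # W_LPP = [w_0, ..., w_{dims-1}]

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    X = rng.random((10, 40))                      # 40 points in a 10-dim PCA space
    A = np.ones((40, 40), dtype=bool)             # fully connected, for illustration
    np.fill_diagonal(A, False)
    W_lpp = lpp_embedding(X, A)
    print(W_lpp.shape)                            # (10, 3)
```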

Modules Description: Recognize the image Each image in the test directory (which contains further gray images) is then measured and compared against the stored gray images. If it matches any stored gray image, the system recognizes it and shows the image; otherwise, the image is reported as not recognized.
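A minimal sketch of this recognition step: project the test image with the combined transform W = W_PCA W_LPP and take the nearest stored training projection. The rejection threshold, the data, and the helper names (W_pca, W_lpp) are assumptions for illustration, building on the earlier sketches.

```python
import numpy as np

def recognize(test_image, mean, W_pca, W_lpp, train_coords, train_labels,
              threshold=10.0):
    """Return the label of the closest training face, or None if the distance
    exceeds the (assumed) rejection threshold."""
    W = W_pca @ W_lpp                              # combined transformation matrix
    y = (test_image - mean) @ W                    # project the test image
    dists = np.linalg.norm(train_coords - y, axis=1)
    best = int(np.argmin(dists))
    return train_labels[best] if dists[best] < threshold else None

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    mean = rng.random(1024)
    W_pca = rng.random((1024, 10))
    W_lpp = rng.random((10, 3))
    train = rng.random((5, 1024))
    train_coords = (train - mean) @ W_pca @ W_lpp  # stored training projections
    labels = ["person_%d" % i for i in range(5)]
    print(recognize(train[2], mean, W_pca, W_lpp, train_coords, labels))
```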

Data Flow Diagram

Use Case Diagram: input gray image → pre-processing → processing → recognition.

UML Diagram: the user supplies an input gray-scale image; points are measured in 28 directions; the k-nearest-neighbour graph is constructed; the input is recognized against the existing output.

Sequence Diagram: the user supplies an input gray image; pre-processing extracts the gray levels of the pixels; processing compares the computed principal points with the test image; if the image is recognized, it is shown to the user.

Screenshots Screenshot 1: Starting (main) window of the face recognition system

Screenshot 2: Error message window shown when a wrong source file name is entered

Screenshot 3: Invalid message shown when the input image is not recognized against the saved images

Screenshot 4: Face recognition system after the face has been recognized using Laplacianfaces

Conclusion A novel discriminative learning framework has been proposed for set classification based on canonical correlations. It is based on iterative learning, which is theoretically and practically appealing. The proposed method has been evaluated on various object and object-category recognition problems. The new technique facilitates effective discriminative learning over sets and exhibits impressive set-classification accuracy. It significantly outperformed the KLD method, representing parametric distribution-based matching, and kNN methods in both PCA and LDA subspaces, as examples of nonparametric sample-based matching. It also largely outperformed the method based on a simple aggregation of canonical correlations. Compared with the CMSM/OSM methods, the proposed DCC method not only achieved better accuracy but also possesses many good properties; CMSM, in particular, had to be optimized a posteriori by feature selection.

Future Enhancement The manifold ways of face analysis (representation and recognition) are introduced in this project in order to detect the underlying nonlinear manifold structure in the manner of linear subspace learning. To the best of our knowledge, this is the first dedicated work on face representation and recognition that explicitly considers the manifold structure. The manifold structure is approximated by the adjacency graph computed from the data points. Using the notion of the graph Laplacian, we then compute a transformation matrix that maps the face images into a face subspace. We call this the Laplacianfaces approach. The Laplacianfaces are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the face manifold. This linear transformation optimally preserves local manifold structure. Theoretical analysis of the LPP algorithm and its connections to PCA and LDA are provided. Experimental results on the PIE, Yale, and MSRA databases show the effectiveness of our method. One of the central problems in face manifold learning is to estimate the intrinsic dimensionality of the nonlinear face manifold, or its degrees of freedom. We know that the dimensionality of the manifold is equal to the dimensionality of the local tangent space. Therefore, one possibility is to estimate the dimensionality of the tangent space, as sketched below.
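As a purely illustrative sketch of the tangent-space idea mentioned above (not part of this project), intrinsic dimensionality can be estimated by running PCA on each point's local neighborhood and counting the dominant directions; the variance cutoff, k, and the synthetic data are assumptions.

```python
import numpy as np

def local_dimension(X, k=10, variance_cutoff=0.95):
    """Estimate intrinsic dimensionality as the median number of local PCA
    components needed to explain `variance_cutoff` of each neighborhood's variance."""
    n = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    dist2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # pairwise squared distances
    dims = []
    for i in range(n):
        idx = np.argsort(dist2[i])[1:k + 1]           # k nearest neighbors of x_i
        patch = X[idx] - X[idx].mean(axis=0)          # centered local neighborhood
        s = np.linalg.svd(patch, compute_uv=False)
        ratio = np.cumsum(s ** 2) / np.sum(s ** 2)
        dims.append(int(np.searchsorted(ratio, variance_cutoff)) + 1)
    return int(np.median(dims))

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    angles = rng.uniform(0, 2 * np.pi, 200)
    circle = np.c_[np.cos(angles), np.sin(angles)] + 0.01 * rng.normal(size=(200, 2))
    print(local_dimension(circle))                    # close to 1 for a noisy circle
```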

Bibliography
[1] X. He, S. Yan, Y. Hu, P. Niyogi, and H.-J. Zhang, "Face Recognition Using Laplacianfaces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, March 2005.
[2] P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.
[3] M. Belkin and P. Niyogi, "Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering," Proc. Conf. Advances in Neural Information Processing Systems 15, 2001.
[4] M. Belkin and P. Niyogi, "Using Manifold Structure for Partially Labeled Classification," Proc. Conf. Advances in Neural Information Processing Systems 15, 2002.
[5] M. Brand, "Charting a Manifold," Proc. Conf. Advances in Neural Information Processing Systems, 2002.

THANK YOU