FACE RECOGNITION USING LAPLACIANFACES
Guidance by: Mr. S. Mohandoss, Lecturer, Department of Computer Applications. Submitted by: Soumya Ranjan Panda, Reg. No: (Master of Computer Applications), Dr. M.G.R Educational and Research Institute University
Abstract
We propose an appearance-based face recognition method called the Laplacianface approach. By using Locality Preserving Projections (LPP), the face images are mapped into a face subspace for analysis. Unlike Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which effectively see only the global structure of the face space, LPP finds an embedding that preserves local information and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced.
Existing System
The main idea of using PCA for face recognition is to express the large 1-D vector of pixels constructed from a 2-D face image in terms of the compact principal components of the feature space. This is called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of face images (vectors). However, most such algorithms consider only somewhat global data patterns during recognition, which does not yield an accurate recognition system. Drawbacks:
- Less accurate.
- Does not deal with the manifold structure.
- Does not deal with biometric characteristics.
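The eigenspace projection just described can be sketched in a few lines of NumPy. This is a hedged illustration with random stand-in data, not the project's code; the image size and the 20-component cut-off are arbitrary choices for the example.

```python
import numpy as np

# Illustrative data: 40 grayscale face images of 32x32 pixels,
# flattened into 1-D vectors of length 1024 (one image per row).
rng = np.random.default_rng(0)
faces = rng.random((40, 32 * 32))

# Center the data and compute the covariance matrix of the pixels.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
cov = centered.T @ centered / (len(faces) - 1)

# Eigen-decomposition; eigh returns eigenvalues in ascending order,
# so the last columns are the leading principal components.
eigvals, eigvecs = np.linalg.eigh(cov)
components = eigvecs[:, ::-1][:, :20]   # keep 20 "eigenfaces"

# Eigenspace projection of a single face image.
projection = (faces[0] - mean_face) @ components
print(projection.shape)                 # (20,)
```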
Proposed System
PCA and LDA aim to preserve the global structure. However, in many real-world applications the local structure is more important. In this section, we describe Locality Preserving Projection (LPP), a new algorithm for learning a locality preserving subspace; the complete derivation and theoretical justification of LPP can be traced back to the original LPP work. LPP seeks to preserve the intrinsic geometry of the data and its local structure. The objective function of LPP is given below. LPP is a general method for manifold learning. It is obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the manifold. Therefore, although it is still a linear technique, it recovers important aspects of the intrinsic nonlinear manifold structure by preserving local structure.
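The slide's equation image does not survive this transcript. In the standard LPP formulation, the objective and the eigenproblem it leads to read as follows (a reconstruction, with S the weight matrix, D its diagonal column-sum matrix, and L = D - S the graph Laplacian, all defined on a later slide):

```latex
% Standard LPP objective (reconstruction; the slide's own equation
% image is not preserved in this transcript).
\min_{\mathbf{w}} \frac{1}{2}\sum_{i,j}
  \bigl(\mathbf{w}^{T}\mathbf{x}_{i}-\mathbf{w}^{T}\mathbf{x}_{j}\bigr)^{2} S_{ij}
  \;=\; \min_{\mathbf{w}} \mathbf{w}^{T} X L X^{T} \mathbf{w},
\qquad \text{subject to } \mathbf{w}^{T} X D X^{T} \mathbf{w} = 1,
% whose minimizers solve the generalized eigenvalue problem
X L X^{T} \mathbf{w} = \lambda \, X D X^{T} \mathbf{w}.
```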
Proposed System (continued)
Based on LPP, we describe our Laplacianfaces method for face representation in a locality preserving subspace. In face analysis and recognition, one is confronted with the difficulty that the matrix XDX^T is sometimes singular. This stems from the fact that the number of images in the training set (n) is often much smaller than the number of pixels in each image (m). In such a case, the rank of XDX^T is at most n, while XDX^T is an m x m matrix, which implies that XDX^T is singular. To overcome this complication, we first project the image set into a PCA subspace so that the resulting matrix XDX^T is nonsingular. Another reason for using PCA as preprocessing is noise reduction. This method, which we call Laplacianfaces, can learn an optimal subspace for face representation and recognition.
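A quick numeric check of the rank argument, as a sketch with illustrative sizes: with fewer training images than pixels, XDX^T cannot be full rank.

```python
import numpy as np

rng = np.random.default_rng(1)
n_images, n_pixels = 50, 1024          # n images, each 32x32 pixels
X = rng.random((n_pixels, n_images))   # one image per column, as in the text
D = np.diag(rng.random(n_images))      # any positive diagonal weight matrix

M = X @ D @ X.T                        # the (n_pixels x n_pixels) matrix XDX^T
print(np.linalg.matrix_rank(M))        # at most 50, far below 1024: singular
```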
Modules
- Preprocessing
- PCA projection (Principal Component Analysis)
- Constructing the nearest-neighbor graph
- Choosing the weights of neighboring images
- Recognize the image
Modules Description: Preprocessing
In preprocessing, a single gray image is captured in 10 different directions, and the points of each gray image are measured in 28 dimensions.
Modules Description: PCA Projection (Principal Component Analysis)
We project the image set {x_i} into the PCA subspace by throwing away the smallest principal components. In our experiments, we kept 98 percent of the information in the sense of reconstruction error. For simplicity, we still use x to denote the images in the PCA subspace in the following steps. We denote by W_PCA the transformation matrix of PCA.
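A minimal sketch of this step with scikit-learn, using random stand-in data: passing a float to PCA's n_components keeps enough components to explain that fraction of the variance, which is one reasonable reading of "kept 98 percent information".

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
faces = rng.random((100, 32 * 32))     # illustrative flattened face images

# Keep enough principal components to retain 98% of the variance,
# discarding the smallest components as described above.
pca = PCA(n_components=0.98)
faces_pca = pca.fit_transform(faces)   # images in the PCA subspace
W_pca = pca.components_.T              # transformation matrix W_PCA

print(faces_pca.shape, W_pca.shape)
```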
Modules Description: Constructing the Nearest-Neighbor Graph
Let G denote a graph with n nodes, where the ith node corresponds to the face image x_i. We put an edge between nodes i and j if x_i and x_j are "close," i.e., x_j is among the k nearest neighbors of x_i, or x_i is among the k nearest neighbors of x_j. The constructed nearest-neighbor graph is an approximation of the local manifold structure. Note that we do not use the ε-neighborhood to construct the graph, simply because it is often difficult to choose an optimal ε in real-world applications, while the k-nearest-neighbor graph can be constructed more stably. The disadvantage is that the k-nearest-neighbor search increases the computational complexity of the algorithm.
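A sketch of this step with scikit-learn: the OR rule in the text (an edge if either point is among the other's k nearest neighbors) corresponds to symmetrizing the directed kNN adjacency matrix. The value k=5 and the random data are illustrative only.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(3)
faces_pca = rng.random((100, 60))      # images already in the PCA subspace

# Directed kNN adjacency: A[i, j] = 1 if x_j is among the k nearest
# neighbors of x_i.
k = 5
A = kneighbors_graph(faces_pca, n_neighbors=k, mode="connectivity").toarray()

# Symmetrize with the OR rule: keep an edge if i is a neighbor of j
# or j is a neighbor of i.
adjacency = np.maximum(A, A.T)
print(adjacency.sum() / 2)             # number of undirected edges
```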
Modules Description: Choosing the Weights of Neighboring Images
If nodes i and j are connected, put

$S_{ij} = e^{-\|x_i - x_j\|^2 / t}$,  (34)

where t is a suitable constant; otherwise, $S_{ij} = 0$. Let D be the diagonal matrix whose entries are the column (or row, since S is symmetric) sums of S, $D_{ii} = \sum_j S_{ji}$, and let $L = D - S$ be the Laplacian matrix. The ith column of the matrix X is $x_i$. Compute the eigenvectors and eigenvalues of the generalized eigenvalue problem

$X L X^T w = \lambda \, X D X^T w$.  (35)

Let $w_0, w_1, \ldots, w_{k-1}$ be the solutions of (35), ordered according to their eigenvalues, $0 \le \lambda_0 \le \lambda_1 \le \cdots \le \lambda_{k-1}$. These eigenvalues are greater than or equal to zero because the matrices $X L X^T$ and $X D X^T$ are both symmetric and positive semidefinite. Thus, the embedding is

$x \to y = W^T x$,  (36)
$W = W_{PCA} W_{LPP}$,  (37)
$W_{LPP} = [w_0, w_1, \ldots, w_{k-1}]$,  (38)

where y is a k-dimensional vector and W is the transformation matrix. This linear mapping best preserves the manifold's estimated intrinsic geometry in a linear sense. The column vectors of W are the so-called Laplacianfaces. This step is implemented as unsupervised learning over the training and test data.
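The reconstructed steps above translate almost line for line into NumPy/SciPy. The following hedged sketch solves the generalized eigenproblem (35) with scipy.linalg.eigh; the kernel width t, the neighbor count, the embedding dimension, and the random data are all illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(4)
faces_pca = rng.random((100, 60))              # images in the PCA subspace
A = kneighbors_graph(faces_pca, n_neighbors=5, mode="connectivity").toarray()
adjacency = np.maximum(A, A.T)                 # symmetric kNN graph

# Heat-kernel weights S_ij = exp(-||x_i - x_j||^2 / t) on connected pairs.
t = 1.0                                        # illustrative kernel width
dist2 = cdist(faces_pca, faces_pca, "sqeuclidean")
S = np.exp(-dist2 / t) * adjacency

# D is diagonal with the column sums of S; L = D - S is the graph Laplacian.
D = np.diag(S.sum(axis=0))
L = D - S

# Generalized eigenproblem X L X^T w = lambda X D X^T w, one image per
# column of X; eigh returns eigenvalues in ascending order, so the
# first columns give W_LPP = [w_0, ..., w_{k-1}].
X = faces_pca.T
emb_dim = 10                                   # illustrative embedding dimension
eigvals, eigvecs = eigh(X @ L @ X.T, X @ D @ X.T)
W_lpp = eigvecs[:, :emb_dim]

# Embedding y = W_LPP^T x for every image.
Y = faces_pca @ W_lpp
print(eigvals[:3], Y.shape)
```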
Modules Description: Recognize the Image
The measured values of the input are compared against the test directory, which contains many gray images. If the input matches any stored gray image, the system recognizes it and shows the image; otherwise, it reports that the image is not recognized.
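A hedged sketch of this matching step: project both the gallery (training) images and the test image through the learned transformation W = W_PCA W_LPP and label the test image by its nearest gallery neighbor, rejecting matches beyond a distance threshold. The function name, the threshold, and the random stand-in data are illustrative, not the project's code.

```python
import numpy as np

def recognize(test_vec, gallery, labels, W, threshold=10.0):
    """Nearest-neighbor matching in the Laplacianface subspace.

    test_vec: flattened test image; gallery: one flattened training
    image per row; W: combined W_PCA @ W_LPP transformation matrix.
    The distance threshold is an illustrative rejection cutoff.
    """
    gallery_emb = gallery @ W
    test_emb = test_vec @ W
    dists = np.linalg.norm(gallery_emb - test_emb, axis=1)
    best = int(np.argmin(dists))
    if dists[best] > threshold:
        return None                     # not recognized
    return labels[best]                 # recognized: show this image/label

# Illustrative usage with random stand-ins for real data.
rng = np.random.default_rng(5)
gallery = rng.random((40, 1024))
W = rng.random((1024, 10))
labels = [f"person_{i}" for i in range(40)]
print(recognize(gallery[7], gallery, labels, W))   # matches "person_7"
```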
Data Flow Diagram
Use Case Diagram: Input Gray Image → Preprocessing → Processing → Recognition.
UML Diagram: the user supplies an input gray-scale image; points are measured in 28 directions (LDA analysis projection); the k-nearest-neighbor graph is constructed (canonical construction); the input is recognized against the existing output.
Sequence Diagram: the user sends an input gray image to preprocessing, which extracts the gray level of each pixel; processing compares the calculated principal points with the testing image; if the image is recognized, it is shown to the user.
Screenshots
Screenshot 1: Starting (main) window of the face recognition system.
Screenshot 2: Error message window shown when a wrong source file name is entered.
Screenshot 3: Invalid message shown when the input image is not recognized against any saved image.
Screenshot 4: Face recognition system after recognizing the Laplacianfaces.
Conclusion
A novel discriminative learning framework has been proposed for set classification based on canonical correlations. It is based on iterative learning, which is theoretically and practically appealing. The proposed method has been evaluated on various object and object-category recognition problems. The new technique facilitates effective discriminative learning over sets and exhibits impressive set classification accuracy. It significantly outperformed the KLD method, representing parametric distribution-based matching, and kNN methods in both PCA and LDA subspaces, as examples of nonparametric sample-based matching. It also largely outperformed the method based on a simple aggregation of canonical correlations. The proposed DCC method not only achieved better accuracy but also possesses many good properties compared with the CMSM/OSM methods; CMSM had to be optimized a posteriori by feature selection.
Future Enhancement
The manifold ways of face analysis (representation and recognition) are introduced in this project in order to detect the underlying nonlinear manifold structure through linear subspace learning. To the best of our knowledge, this is the first dedicated work on face representation and recognition that explicitly considers the manifold structure. The manifold structure is approximated by the adjacency graph computed from the data points. Using the notion of the graph Laplacian, we then compute a transformation matrix which maps the face images into a face subspace; we call this the Laplacianfaces approach. The Laplacianfaces are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the face manifold. This linear transformation optimally preserves local manifold structure. Theoretical analysis of the LPP algorithm and its connections to PCA and LDA are provided. Experimental results on the PIE, Yale, and MSRA databases show the effectiveness of our method. One of the central problems in face manifold learning is to estimate the intrinsic dimensionality of the nonlinear face manifold, that is, its degrees of freedom. We know that the dimensionality of the manifold is equal to the dimensionality of the local tangent space, so one possibility is to estimate the dimensionality of that tangent space.
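One simple way to act on this observation, sketched below under loose assumptions: estimate the local tangent space at each point by PCA over its k nearest neighbors and count how many eigenvalues are needed to explain most of the local variance. The 0.95 variance ratio, k=15, and the synthetic test data are arbitrary illustrative choices, not values from the project.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_dimension(points, k=15, var_ratio=0.95):
    """Estimate intrinsic dimensionality as the median local PCA dimension."""
    nn = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nn.kneighbors(points)
    dims = []
    for neighbors in idx:
        patch = points[neighbors]
        patch = patch - patch.mean(axis=0)
        # Squared singular values are proportional to local PCA variances.
        var = np.linalg.svd(patch, compute_uv=False) ** 2
        ratios = np.cumsum(var) / var.sum()
        dims.append(int(np.searchsorted(ratios, var_ratio) + 1))
    return int(np.median(dims))

# Illustrative check: points on a 2-D plane embedded in 10-D space.
rng = np.random.default_rng(6)
plane = rng.random((500, 2)) @ rng.random((2, 10))
print(local_dimension(plane))          # expected to report about 2
```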
Bibliography
[1] X. He, S. Yan, Y. Hu, P. Niyogi, and H.-J. Zhang, "Face Recognition Using Laplacianfaces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, March 2005.
[2] P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, July 1997.
[3] M. Belkin and P. Niyogi, "Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering," Proc. Conf. Advances in Neural Information Processing Systems 15, 2001.
[4] M. Belkin and P. Niyogi, "Using Manifold Structure for Partially Labeled Classification," Proc. Conf. Advances in Neural Information Processing Systems 15, 2002.
[5] M. Brand, "Charting a Manifold," Proc. Conf. Advances in Neural Information Processing Systems, 2002.
THANK YOU