Feature extraction using fuzzy complete linear discriminant analysis
Reporter: Cui Yan, April 26, 2012


Feature extraction using fuzzy complete linear discriminant analysis
Reporter: Cui Yan

The report outline:
1. The fuzzy K-nearest neighbor classifier (FKNN)
2. The fuzzy complete linear discriminant analysis
3. Experiments

The fuzzy K-nearest neighbor classifier (FKNN)

The K-nearest neighbor classifier (KNN). Each sample should be classified similarly to its surrounding samples; therefore, an unknown sample can be predicted by considering the classes of its nearest neighbors.

KNN classifies an unknown sample according to the known class labels of its k nearest neighbors.
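As a quick illustration (not part of the original slides, and using made-up data), a plain KNN prediction can be obtained with scikit-learn's KNeighborsClassifier:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Toy 2-D training data with two classes (hypothetical values).
    X_train = np.array([[0.0, 0.0], [0.1, 0.2], [0.9, 1.0], [1.0, 0.8]])
    y_train = np.array([0, 0, 1, 1])

    # Classify an unknown sample by a majority vote among its k nearest neighbors.
    knn = KNeighborsClassifier(n_neighbors=3)
    knn.fit(X_train, y_train)
    print(knn.predict([[0.2, 0.1]]))  # expected: class 0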

FKNN. Given a sample set, a fuzzy M-class partition of these vectors specifies the membership degree of each sample with respect to each class. The membership degree of a training vector to each of the M classes is computed by the following steps:
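Before the steps, a notational aside (my addition; the slides do not show it explicitly): the memberships form a matrix U = [u_ij] satisfying the usual fuzzy-partition constraints

    \[
    U = [u_{ij}] \in \mathbb{R}^{M \times N}, \qquad
    0 \le u_{ij} \le 1, \qquad
    \sum_{i=1}^{M} u_{ij} = 1 \quad \text{for every sample } j,
    \]

where u_ij is the degree to which the j-th sample belongs to the i-th class.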

Step 1: Compute the distance matrix between pairs of feature vectors in the training set. Step 2: Set the diagonal elements of this matrix to infinity (in practice, place large numeric values there).

Step 3: Sort the distance matrix (treating each of its columns separately) in ascending order. Collect the class labels of the patterns located in the closest neighborhood of the pattern under consideration (as we are concerned with k neighbors, this returns a list of k integers).

Step 4: Compute the membership grade to class i for the j-th pattern using the expression proposed in [1].

[1] J.M. Keller, M.R. Gray, J.A. Givens, A fuzzy k-nearest neighbor algorithm, IEEE Trans. Syst. Man Cybernet. 15(4), 1985.
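The expression in [1] is commonly written as below, where n_ij is the number of the k nearest neighbors of the j-th pattern that belong to class i; this is my reconstruction from the cited paper and should be checked against [1]:

    \[
    u_{ij} =
    \begin{cases}
    0.51 + 0.49\,(n_{ij}/k), & \text{if the } j\text{-th pattern is labeled class } i,\\
    0.49\,(n_{ij}/k), & \text{otherwise.}
    \end{cases}
    \]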

An example of FKNN

Set k = 3.
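The worked example from the original slide is not reproduced in this transcript. The following sketch (my own, with made-up data) carries out Steps 1-4 for k = 3, using the membership expression attributed to [1] above:

    import numpy as np

    def fknn_memberships(X, y, k=3, n_classes=None):
        """Fuzzy membership u[i, j] of sample j to class i (Steps 1-4)."""
        n = X.shape[0]
        if n_classes is None:
            n_classes = int(y.max()) + 1
        # Step 1: pairwise distance matrix.
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        # Step 2: ignore self-distances.
        np.fill_diagonal(d, np.inf)
        # Step 3: class labels of the k nearest neighbors of each sample.
        nn_labels = y[np.argsort(d, axis=0)[:k, :]]        # shape (k, n)
        u = np.zeros((n_classes, n))
        for j in range(n):
            for i in range(n_classes):
                n_ij = np.sum(nn_labels[:, j] == i)
                # Step 4: membership grade as in [1].
                u[i, j] = 0.49 * n_ij / k + (0.51 if y[j] == i else 0.0)
        return u

    # Hypothetical 2-D data: three samples per class.
    X = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],
                  [1.0, 1.0], [1.1, 0.9], [0.9, 1.1]])
    y = np.array([0, 0, 0, 1, 1, 1])
    print(fknn_memberships(X, y, k=3))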

The fuzzy complete linear discriminant analysis

For the training set, we define the i-th class mean by weighting the samples with the fuzzy membership degrees, as in Eq. (1), and the total mean as in Eq. (2).
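Eqs. (1) and (2) themselves are not reproduced in this transcript. In the fuzzy Fisherface literature this method builds on, the fuzzy class mean and the total mean are usually defined as follows; treat this as my reconstruction rather than the slide's exact formulas (x_1,...,x_N are the training samples, M is the number of classes, and u_ij are the FKNN memberships):

    \[
    \bar{x}_i = \frac{\sum_{j=1}^{N} u_{ij}\, x_j}{\sum_{j=1}^{N} u_{ij}} \qquad (1)
    \qquad\qquad
    \bar{x} = \frac{1}{N}\sum_{j=1}^{N} x_j \qquad (2)
    \]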

Incorporating the fuzzy membership degrees, the between-class, within-class and total fuzzy scatter matrices of the samples can be defined as in Eq. (3).
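The slide's Eq. (3) is likewise missing from the transcript. Under the same assumption as above (and omitting an optional fuzzifier exponent on u_ij), the fuzzy scatter matrices are usually defined as:

    \[
    S_{fb} = \sum_{i=1}^{M}\sum_{j=1}^{N} u_{ij}\,(\bar{x}_i - \bar{x})(\bar{x}_i - \bar{x})^{T}, \qquad
    S_{fw} = \sum_{i=1}^{M}\sum_{j=1}^{N} u_{ij}\,(x_j - \bar{x}_i)(x_j - \bar{x}_i)^{T}, \qquad
    S_{ft} = S_{fb} + S_{fw}. \qquad (3)
    \]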

Algorithm of the fuzzy complete linear discriminant analysis. Step 1: Calculate the membership degree matrix U by the FKNN algorithm. Step 2: According to Eqs. (1)-(3), work out the between-class, within-class and total fuzzy scatter matrices. Step 3: Work out the orthogonal eigenvectors p1,..., pl of the total fuzzy scatter matrix corresponding to its positive eigenvalues.

Step 4: Let P = (p1,..., pl) and work out the orthogonal eigenvectors g1,..., gr of the projected within-class fuzzy scatter matrix corresponding to its zero eigenvalues. Step 5: Let P1 = (g1,..., gr), work out the orthogonal eigenvectors v1,..., vr of the between-class fuzzy scatter matrix projected into this null space, and calculate the irregular discriminant vectors from them (see the sketch after Step 8).

Step 6: Work out the orthogonal eigenvectors q1,..., qs of the projected within-class fuzzy scatter matrix corresponding to its non-zero eigenvalues. Step 7: Let P2 = (q1,..., qs), work out the optimal discriminant vectors vr+1,..., vr+s by Fisher LDA in this subspace, and calculate the regular discriminant vectors from them. Step 8 (recognition): Project all samples onto the obtained discriminant vectors and classify.
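The transcript omits the exact matrix expressions used in Steps 4-7. The sketch below follows the standard complete LDA construction (irregular discriminant vectors from the null space of the projected within-class scatter, regular ones from its range space via Fisher LDA), applied to the fuzzy scatter matrices defined above. It is my interpretation of the slides rather than the authors' code; fknn_memberships is the helper sketched earlier.

    import numpy as np

    def fuzzy_scatter_matrices(X, U):
        """Fuzzy between-, within- and total-class scatter built from memberships U."""
        d = X.shape[1]
        x_bar = X.mean(axis=0)
        Sfb = np.zeros((d, d))
        Sfw = np.zeros((d, d))
        for w in U:                                        # one row of memberships per class
            m_i = (w[:, None] * X).sum(axis=0) / w.sum()   # fuzzy class mean, Eq. (1)
            Sfb += w.sum() * np.outer(m_i - x_bar, m_i - x_bar)
            diff = X - m_i
            Sfw += (w[:, None] * diff).T @ diff
        return Sfb, Sfw, Sfb + Sfw

    def fuzzy_complete_lda(X, y, k=3, tol=1e-10):
        n_classes = int(y.max()) + 1
        U = fknn_memberships(X, y, k=k, n_classes=n_classes)       # Step 1
        Sfb, Sfw, Sft = fuzzy_scatter_matrices(X, U)               # Step 2

        # Step 3: eigenvectors of the total fuzzy scatter with positive eigenvalues.
        t_vals, t_vecs = np.linalg.eigh(Sft)
        P = t_vecs[:, t_vals > tol]
        Sw_t, Sb_t = P.T @ Sfw @ P, P.T @ Sfb @ P

        # Steps 4-5: irregular discriminant vectors from the null space of Sw_t.
        w_vals, w_vecs = np.linalg.eigh(Sw_t)
        P1 = w_vecs[:, w_vals <= tol]
        b_vals, b_vecs = np.linalg.eigh(P1.T @ Sb_t @ P1)
        W_irr = P @ P1 @ b_vecs[:, np.argsort(b_vals)[::-1]]

        # Steps 6-7: regular discriminant vectors via Fisher LDA in the range of Sw_t.
        P2 = w_vecs[:, w_vals > tol]
        f_vals, f_vecs = np.linalg.eig(
            np.linalg.solve(P2.T @ Sw_t @ P2, P2.T @ Sb_t @ P2))
        order = np.argsort(f_vals.real)[::-1][: n_classes - 1]
        W_reg = P @ P2 @ f_vecs[:, order].real

        # Step 8: project samples onto W_irr / W_reg before classification.
        return W_irr, W_reg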

Experiments

We compare Fuzzy-CLDA with CLDA, UWLDA, FLDA, Fuzzy Fisherface and FIFDA on three data sets from the UCI repository; the characteristics of the three data sets can be found there. Each data set is randomly split into a training set and a test set with a ratio of 1:4. Experiments are repeated 25 times and the mean prediction error rate is used as the performance measure; a nearest class center (NCC) classifier with the L2 norm is adopted to classify the test samples.
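As an illustration only (the slides do not name the three data sets, and the competing feature extractors are not reproduced here), the evaluation protocol can be sketched as follows, using scikit-learn's NearestCentroid as the NCC classifier with the Euclidean (L2) metric and Iris as a stand-in data set:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import NearestCentroid

    # Hypothetical stand-in; the slides use three unspecified UCI data sets.
    X, y = load_iris(return_X_y=True)

    errors = []
    for seed in range(25):                         # 25 random 1:4 train/test splits
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=0.2, random_state=seed, stratify=y)
        # Feature extraction (e.g., projection onto the Fuzzy-CLDA discriminant
        # vectors) would be applied to X_tr and X_te here before classification.
        clf = NearestCentroid(metric="euclidean")  # nearest class center, L2 norm
        clf.fit(X_tr, y_tr)
        errors.append(np.mean(clf.predict(X_te) != y_te))

    print("mean error rate over 25 runs: %.3f" % np.mean(errors))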

Thanks !