1
Class-oriented Regression Embedding
Presenter: Chen Yi (陈燚). August 25, 2011
2
Outline
1. Background
2. Related Works
  2.1 Linear Regression-based Classification
  2.2 Neighborhood Preserving Embedding & Sparsity Preserving Projections
3. Class-oriented Regression Embedding
4. Experiments
3
1. Background
4
Background
The minimum reconstruction error criterion is widely used in recent work on subspace classification, for example in SRC and LRC.
J. Wright, A. Yang, S. Sastry, Y. Ma, Robust face recognition via sparse representation, IEEE Trans. Pattern Anal. Mach. Intell. 31 (2), 210-227, 2009.
I. Naseem, R. Togneri, M. Bennamoun, Linear regression for face recognition, IEEE Trans. Pattern Anal. Mach. Intell., 2010.
5
A brief review
SRC: solve min_a ||a||_1 subject to y = Xa, then assign y to the class with the smallest class-wise residual ||y - X d_i(a)||_2, where d_i(a) keeps only the coefficients of the i-th class.
LRC: for each class i, compute the least-squares coefficients b_i = (X_i^T X_i)^{-1} X_i^T y, where b_i is the coefficient vector of the i-th class.
Classification rule: identity(y) = argmin_i ||y - X_i b_i||_2.
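For concreteness, here is a minimal Python sketch of the LRC decision rule summarized above, assuming each class is supplied as a column-sample matrix; the function and argument names are illustrative and not taken from the cited papers.

```python
import numpy as np

def lrc_predict(class_matrices, y):
    """Assign y to the class whose linear span reconstructs it best (the LRC rule).

    class_matrices : list of (d, n_i) arrays, columns = training samples of class i
    y              : (d,) test sample
    """
    errors = []
    for Xi in class_matrices:
        # Per-class least-squares coefficients: beta_i = argmin_b ||y - Xi b||_2
        beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
        errors.append(np.linalg.norm(y - Xi @ beta))     # reconstruction error of class i
    return int(np.argmin(errors))                        # class with the minimum error
```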
6
Nearest Space Classifiers
Definition: the nearest subspace of a given sample.
Measurement: reconstruction error.
Stan Z. Li, Face recognition based on nearest linear combinations, CVPR, 839-844, 1998.
7
2. Related Works
8
LRC
Least squares under the linear-subspace assumption: the reconstruction of a sample by the i-th class is computed by class-wise least squares, and the sample is assigned to the class with the minimum reconstruction error.
9
NPE & SPP
Objective function: min_W sum_i ||W^T x_i - W^T X a_i||_2^2, where a_i are the reconstruction coefficients of x_i.
The difference between NPE and SPP lies in the reconstructive strategy. NPE: K nearest neighbors (KNN). SPP: global sparse representation.
Xiaofei He, Deng Cai, Shuicheng Yan, HongJiang Zhang, Neighborhood preserving embedding, ICCV, 1208-1213, 2005.
Qiao, L.S., Chen, S.C., Tan, X.Y., Sparsity preserving projections with applications to face recognition, Pattern Recognition 43 (1), 331-341, 2010.
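A small sketch of the NPE-style reconstructive strategy (KNN reconstruction weights), assuming a column-sample matrix and illustrative names; SPP would instead obtain each column of A from an l1-regularized (sparse) regression over all training samples.

```python
import numpy as np

def knn_reconstruction_weights(X, k=5, reg=1e-3):
    """NPE-style weights: reconstruct each column of X from its k nearest neighbours.

    X : (d, n) column-sample matrix. Returns an (n, n) matrix A where A[j, i] is
    the weight of sample j in the reconstruction of sample i (zero outside the kNN set).
    """
    n = X.shape[1]
    A = np.zeros((n, n))
    sq = np.sum(X ** 2, axis=0)
    dist = sq[:, None] + sq[None, :] - 2.0 * (X.T @ X)   # pairwise squared distances
    for i in range(n):
        order = np.argsort(dist[:, i])
        nbrs = order[order != i][:k]                     # k nearest neighbours, excluding i
        Z = X[:, nbrs] - X[:, [i]]                       # neighbours centred on x_i
        C = Z.T @ Z + reg * np.eye(nbrs.size)            # regularised local Gram matrix
        w = np.linalg.solve(C, np.ones(nbrs.size))
        A[nbrs, i] = w / w.sum()                         # weights constrained to sum to one
    return A
```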
10
3. Class-oriented Regression Embedding
11
Assumption of SRC and LRC
A given sample belongs to the class with the minimum reconstruction error.
Problem: does this assumption hold well in real-world applications?
12
Examples
The training face images
13
Examples
[Figure: example images labeled 3, 20, 14, 17]
14
Motivation
LRC uses downsampled images directly for classification, which is not optimal. We aim to find a subspace that conforms to the assumption: in this low-dimensional subspace, a sample can be best represented by its intra-class samples.
15
Algorithm
Objective function: min_W sum_i ||W^T x_i - W^T X a_i||_2^2, where a_i is the vector of regression coefficients of x_i over its intra-class training samples.
To avoid degenerate solutions, we constrain W^T X X^T W = I.
Then we have: min_W tr(W^T X M X^T W) subject to W^T X X^T W = I, where M = (I - A)(I - A)^T and A = [a_1, ..., a_n] is the class-oriented reconstruction coefficient matrix.
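A hedged sketch of how the class-oriented coefficient matrix A might be assembled, assuming each sample is regressed onto the remaining samples of its own class with a small ridge term for numerical stability; the names and the regularizer are illustrative rather than part of the original formulation.

```python
import numpy as np

def class_oriented_coefficients(X, labels, ridge=1e-3):
    """Build the coefficient matrix A used by CRE: each sample is regressed
    onto the other samples of its own class only, so nonzero entries of A
    are restricted to intra-class positions.

    X      : (d, n) column-sample matrix
    labels : (n,) class labels
    """
    labels = np.asarray(labels)
    n = X.shape[1]
    A = np.zeros((n, n))
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        for i in idx:
            others = idx[idx != i]                       # intra-class samples excluding x_i
            if others.size == 0:
                continue                                 # singleton class: leave column zero
            Xi = X[:, others]
            G = Xi.T @ Xi + ridge * np.eye(others.size)  # regularised Gram matrix
            A[others, i] = np.linalg.solve(G, Xi.T @ X[:, i])
    return A
```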
16
Example: the reconstructive strategy of CRE compared with NPE and SPP.
[Figure panels: CRE, NPE, SPP]
17
SSS problem
X X^T is singular in the small-sample-size (SSS) case. We apply PCA to reduce the dimensionality of the original samples to avoid the SSS problem.
18
Ridge Regression-based Classification
LRC uses least squares under the linear-subspace assumption: a sample is assigned to the class with the minimum reconstruction error. However, the matrix X_i^T X_i inverted in the least-squares solution may be singular.
19
Solution: ridge regression, b_i = (X_i^T X_i + lambda*I)^{-1} X_i^T y, which is well posed for lambda > 0.
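A minimal sketch of ridge regression-based classification, assuming the same column-sample-matrix layout as the LRC sketch earlier; lam is an illustrative regularization parameter, not a value prescribed by the slides.

```python
import numpy as np

def rrc_predict(class_matrices, y, lam=1e-2):
    """Ridge Regression-based Classification: the LRC decision rule, but with a
    ridge penalty so that the inverted matrix Xi^T Xi + lam*I is never singular."""
    errors = []
    for Xi in class_matrices:
        G = Xi.T @ Xi + lam * np.eye(Xi.shape[1])        # regularised Gram matrix
        beta = np.linalg.solve(G, Xi.T @ y)              # ridge regression coefficients
        errors.append(np.linalg.norm(y - Xi @ beta))
    return int(np.argmin(errors))
```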
20
Steps
Input: column sample matrix X.
Output: transform matrix W.
Step 1: Project the training samples onto a PCA subspace: X <- W_PCA^T X.
Step 2: Construct the global reconstruction coefficient matrix A from the intra-class (class-oriented) regression coefficients.
Step 3: Solve the generalized eigenvectors of X(I - A)(I - A)^T X^T w = lambda X X^T w corresponding to the first d smallest eigenvalues; the final transform is W = W_PCA [w_1, ..., w_d].
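Putting the three steps together, a rough Python sketch under the same assumptions as the earlier snippets (it reuses class_oriented_coefficients from the sketch above); pca_dim, the jitter term, and the choice of generalized eigensolver are implementation details of this sketch, not prescribed by the slides.

```python
import numpy as np
from scipy.linalg import eigh

def cre_fit(X, labels, d, pca_dim=None):
    """Rough sketch of the three CRE steps (PCA, coefficients, eigenproblem).

    X : (D, n) column-sample matrix, labels : (n,), d : output dimension.
    Reuses class_oriented_coefficients() from the earlier sketch for Step 2.
    """
    D, n = X.shape
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean

    # Step 1: PCA projection, removing the null space to avoid the SSS problem
    pca_dim = pca_dim or min(D, n - 1)
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    W_pca = U[:, :pca_dim]
    Y = W_pca.T @ Xc                                  # samples in the PCA subspace

    # Step 2: class-oriented reconstruction coefficient matrix
    A = class_oriented_coefficients(Y, labels)

    # Step 3: generalized eigenproblem Y M Y^T w = lambda Y Y^T w, smallest d eigenpairs
    M = (np.eye(n) - A) @ (np.eye(n) - A).T
    left = Y @ M @ Y.T
    right = Y @ Y.T + 1e-8 * np.eye(pca_dim)          # small jitter keeps the matrix positive definite
    _, vecs = eigh(left, right)                       # eigenvalues returned in ascending order
    W = W_pca @ vecs[:, :d]                           # overall transform: PCA followed by CRE
    return W, mean
```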
21
4. Experiments
22
Experiments on the YALE-B database
Recognition rates (%) with NNC; the number in parentheses is the corresponding dimension.

Method     5 Train      10 Train     20 Train
PCA+NNC    36.1 (176)   52.7 (362)   68.9 (727)
LDA+NNC    73.4 (37)    87.0 (37)    91.3 (37)
NPE+NNC    65.7 (77)    79.0 (93)    82.7 (152)
SPP+NNC    60.2 (51)    76.5 (72)    84.4 (91)
CRE+NNC    66.3 (43)    58.6 (112)   54.3 (161)
23
Recognition rates (%) with SRC on YALE-B; dimension in parentheses.

Method     5 Train     10 Train     20 Train
PCA+SRC    72.4 (91)   85.8 (153)   92.6 (192)
LDA+SRC    72.7 (37)   84.6 (35)    91.7 (37)
NPE+SRC    68.8 (51)   81.5 (80)    90.2 (102)
SPP+SRC    69.2 (51)   83.4 (63)    92.0 (82)
CRE+SRC    78.6 (65)   89.5 (79)    93.4 (90)
24
Recognition rates (%) with LRC on YALE-B; dimension in parentheses.

Method     5 Train      10 Train     20 Train
PCA+LRC    59.8 (101)   82.7 (148)   85.6 (190)
LDA+LRC    65.3 (37)    84.1 (37)    87.4 (37)
NPE+LRC    70.4 (112)   82.7 (205)   85.3 (240)
SPP+LRC    72.5 (51)    86.0 (72)    91.3 (91)
CRE+LRC    80.7 (43)    92.4 (83)    97.2 (161)
LRC        58.0         81.7         90.9
25
Comparisons of recognition rates using CRE plus NNC/LRC/SRC on the YALE-B database with 10 and 20 training samples per class, respectively.
26
Comparisons of recognition rates using five methods plus LRC on the YALE-B database with 10 and 20 training samples per class, respectively.
27
The recognition rates of CRE plus LRC, SPP plus SRC, and direct LRC on the YALE-B database with 20 training samples per class.
28
Experiments on the FERET database
Recognition rates (%) with NNC; dimension in parentheses.

Method     3 Train      4 Train      5 Train      6 Train
PCA+NNC    29.4 (203)   33.0 (242)   38.6 (253)   42.6 (286)
LDA+NNC    61.9 (33)    65.8 (199)   69.9 (199)   75.3 (199)
NPE+NNC    58.6 (22)    62.3 (51)    66.2 (46)    70.1 (72)
SPP+NNC    36.9 (146)   43.2 (151)   48.6 (176)   50.2 (181)
CRE+NNC    64.2 (53)    69.4 (55)    73.0 (71)    77.6 (82)
29
Recognition rates (%) with SRC on FERET; dimension in parentheses.

Method     3 Train      4 Train      5 Train      6 Train
PCA+SRC    53.8 (122)   62.8 (118)   68.7 (121)   73.4 (134)
LDA+SRC    66.7 (33)    74.6 (26)    80.1 (36)    86.4 (38)
NPE+SRC    64.3 (42)    70.7 (54)    76.4 (60)    82.6 (68)
SPP+SRC    52.9 (151)   63.7 (172)   69.8 (185)   74.6 (198)
CRE+SRC    75.6 (32)    81.4 (37)    86.3 (43)    91.6 (46)
30
Recognition rates (%) with LRC on FERET; dimension in parentheses.

Method     3 Train      4 Train      5 Train      6 Train
PCA+LRC    40.7 (298)   48.6 (312)   52.0 (335)   54.5 (352)
LDA+LRC    65.4 (39)    73.4 (30)    78.6 (51)    84.1 (62)
NPE+LRC    61.3 (40)    68.7 (65)    72.4 (92)    77.4 (77)
SPP+LRC    50.2 (146)   58.7 (151)   64.2 (176)   68.0 (181)
CRE+LRC    85.4 (53)    90.2 (55)    94.1 (71)    97.9 (82)
LRC        42.0         50.6         55.4         61.2
31
Comparisons of recognition rates using CRE plus NNC/LRC/SRC on the FERET database with 5 and 6 training samples per class, respectively.
32
A comparison of recognition rates using five methods plus LRC on the FERET database with 6 training samples per class.
33
The recognition rates of CRE plus LRC, SPP plus SRC, and direct LRC on the FERET database with 6 training samples per class.
34
Experiments on the CENPARMI handwritten numeral database
Recognition rates (%); dimension in parentheses.

Classifier   PCA         LDA        NPE         SPP         CRE
NNC          87.6 (30)   88.2 (9)   85.8 (19)   86.9 (33)   87.6 (33)
SRC          90.0 (41)   82.6 (9)   89.6 (21)   92.1 (31)   93.6 (31)
RRC          92.1 (32)   84.8 (9)   92.4 (23)   88.1 (33)   95.6 (38)
35
The recognition rate curves of PCA, LDA, NPE, SPP, and CRE plus RRC on the CENPARMI handwritten numeral database. The recognition rate curves of CRE plus RRC/SRC/NNC versus the dimensions on the CENPARMI handwritten numeral database.
36
Comparisons
38
Thank you!
Presenter: Chen Yi (陈燚). August 25, 2011