Pose, Illumination and Expression Invariant Pairwise Face-Similarity Measure via Doppelganger List Comparison Authors: Florian Schroff, Tali Treibitz, David Kriegman, Serge Belongie Speaker: Xin Liu
Outline Authors Paper Information Abstract Motivation Methods Experiment Conclusion
Outline Authors Paper Information Abstract Motivation Methods Experiment Conclusion
Authors(1/4) Florian Schroff:
Authors(2/4) Tali Treibitz: Background: Ph.D. student in the Dept. of Electrical Engineering, Technion Publication: three CVPR papers, one PAMI paper
Authors(3/4) David J. Kriegman: Background: UCSD Professor of Computer Science & Engineering. UIUC Adjunct Professor of Computer Science and Beckman Institute. IEEE Transactions on Pattern Analysis & Machine Intelligence, Editor-in-Chief, 2005-2009
Authors(4/4) Serge J. Belongie: Background: Professor, Computer Science and Engineering, University of California, San Diego
Outline Authors Paper Information Abstract Motivation Methods Experiment Conclusion
Paper Information Published in: ICCV 2011 Related work: Chunhui Zhu, Fang Wen, Jian Sun. A Rank-Order Distance Based Clustering Algorithm for Face Tagging, CVPR 2011. Lior Wolf, Tal Hassner, Yaniv Taigman. The One-Shot Similarity Kernel, ICCV 2009. Neeraj Kumar, Alexander C. Berg, Peter N. Belhumeur, Shree K. Nayar. Attribute and Simile Classifiers for Face Verification, ICCV 2009.
Outline Authors Paper Information Abstract Motivation Methods Experiment Conclusion
Abstract(1/2) Face recognition approaches have traditionally focused on direct comparisons between aligned images, e.g. using pixel values or local image features. Such comparisons become prohibitively difficult when comparing faces across extreme differences in pose, illumination and expression. To this end we describe an image of a face by an ordered list of identities from a Library. The order of the list is determined by the similarity of the Library images to the probe image. The lists act as a signature for each face image: similarity between face images is determined via the similarity of the signatures.
Abstract(2/2) Here the CMU Multi-PIE database, which includes images of 337 individuals in more than 2000 pose, lighting and expression combinations, serves as the Library. We show improved performance over state-of-the-art face-similarity measures based on local features, such as FPLBP, especially across large pose variations on FacePix and Multi-PIE. On LFW we show improved performance in comparison with measures like SIFT (on fiducials), LBP, FPLBP and Gabor (C1).
Outline Authors Paper Information Abstract Motivation Methods Experiment Conclusion
Motivation Learn a new distance metric D′: one under which images of the same person stay similar despite changes in pose, illumination and expression.
Outline Authors Paper Information Abstract Motivation Methods Experiment Conclusion
Methods—Overview
Methods-Assumption This approach stems from the observation that ranked Doppelganger lists are similar for similar people (even under different imaging conditions).
Methods-Set Up Face Database Using Multi-PIE as the Face Library:
Methods-Finding Alike Calculating the list:
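A minimal sketch of this step (assuming a generic descriptor such as FPLBP compared with Euclidean distance; the paper's exact descriptors, alignment and pose handling are not reproduced here): every Library identity is ranked by the distance of its best-matching Library image to the probe.

import numpy as np

def doppelganger_list(probe_desc, library_descs, library_ids):
    """Rank Library identities from most to least similar to the probe image.

    probe_desc    : descriptor of the probe face (e.g. FPLBP), shape (d,)
    library_descs : descriptors of all Library images, shape (n, d)
    library_ids   : identity label of each Library image, length n
    """
    # Distance from the probe to every Library image (Euclidean here; an assumption).
    dists = np.linalg.norm(library_descs - probe_desc, axis=1)

    # Keep, for each identity, the distance of its best-matching Library image.
    best = {}
    for ident, d in zip(library_ids, dists):
        if ident not in best or d < best[ident]:
            best[ident] = d

    # Identities sorted by increasing distance form the ranked Doppelganger list.
    return sorted(best, key=best.get)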
Methods-Compare List Calculating similarity:
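A sketch of the comparison, assuming a simple rank-agreement score that weights the head of each list most heavily (the paper's exact list-comparison measure is not reproduced): two images of the same person should order the Library identities similarly, so identities near the top of both lists contribute most.

def list_similarity(list_a, list_b, k=50):
    """Similarity between two ranked Doppelganger lists (higher means more alike)."""
    rank_b = {ident: r for r, ident in enumerate(list_b)}
    score = 0.0
    for r_a, ident in enumerate(list_a[:k]):
        # Identities missing from list_b get the worst possible rank.
        r_b = rank_b.get(ident, len(list_b))
        # An identity ranked near the top of both lists contributes the most.
        score += 1.0 / ((1 + r_a) * (1 + r_b))
    return score

For verification, a pair would then be declared "same person" when this score exceeds a threshold chosen on held-out data.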
Outline Authors Paper Information Abstract Motivation Methods Experiment Conclusion
Experiment on FacePix (across pose)
Experiment- Verification Across Large Variations of Pose
Experiment on Multi-PIE The classification performance using ten-fold cross-validation is 76.6% ± 2.0 (both FPLBP and SSIM on direct image comparison perform near chance). To the best of our knowledge these are the first results reported across all pose, illumination and expression conditions on Multi-PIE.
Experiment on LFW (1/2) Results on LFW
Experiment on LFW (2/2)
Outline Authors Paper Information Abstract Motivation Methods Experiment Conclusion
Conclusion(1/2) To the best of our knowledge, we have shown the first verification results for face-similarity measures under truly unconstrained expression, illumination and pose, including full profile, on both Multi-PIE and FacePix. The advantages of the ranked Doppelganger lists become apparent when the two probe images depict faces in very different poses. Our method does not require explicit training and is able to cope with large pose ranges. It is straightforward to generalize our method to an even larger variety of imaging conditions, by adding further examples to the Library. No change in our algorithm is required, as its only assumption is that the imaging conditions of interest are represented in the Library.
Conclusion(2/2) We expect that a great deal of improvement can be achieved by using this powerful comparison method as an additional feature in a complete verification or recognition pipeline where it can add the robustness that is required for face recognition across large pose ranges. Furthermore, we are currently exploring the use of ranked lists of identities in other classification domains.
Thanks for listening Xin Liu
Relative Attributes Authors: Devi Parikh, Kristen Grauman Speaker: Xin Liu
Outline Authors Paper Information Abstract Motivation Methods Experiment Conclusion
Outline Authors Paper Information Abstract Motivation Methods Experiment Conclusion
Authors(1/2) Devi Parikh: http://ttic.uchicago.edu/~dparikh/ Background: Research Assistant Professor at Toyota Technological Institute at Chicago (TTIC) Publication: L. Zitnick and D. Parikh, The Role of Image Understanding in Segmentation, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012 (to appear); D. Parikh and L. Zitnick, Exploring Tiny Images: The Roles of Appearance and Contextual Information for Machine and Human Object Recognition, Pattern Analysis and Machine Intelligence (PAMI), 2012 (to appear); plus many more papers in top conferences and journals
Authors(2/2) Kristen Grauman: http://www.cs.utexas.edu/~grauman/ Background: Clare Boothe Luce Assistant Professor, Microsoft Research New Faculty Fellow, Department of Computer Science, University of Texas at Austin Publication: many CVPR and ICCV papers
Outline Authors Paper Information Abstract Motivation Methods Experiment Conclusion
Paper Information Published in: ICCV 2011 Oral Award: Marr Prize!
Outline Authors Paper Information Abstract Motivation Methods Experiment Conclusion
Abstract(1/2) Human-nameable visual “attributes” can benefit various recognition tasks. However, existing techniques restrict these properties to categorical labels (for example, a person is ‘smiling’ or not, a scene is ‘dry’ or not), and thus fail to capture more general semantic relationships. We propose to model relative attributes. Given training data stating how object/scene categories relate according to different attributes, we learn a ranking function per attribute. The learned ranking functions predict the relative strength of each property in novel images.
Abstract(2/2) We then build a generative model over the joint space of attribute ranking outputs, and propose a novel form of zero-shot learning in which the supervisor relates the unseen object category to previously seen objects via attributes (for example, ‘bears are furrier than giraffes’). We further show how the proposed relative attributes enable richer textual descriptions for new images, which in practice are more precise for human interpretation. We demonstrate the approach on datasets of faces and natural scenes, and show its clear advantages over traditional binary attribute prediction for these new tasks.
Outline Authors Paper Information Abstract Motivation Methods Experiment Conclusion
Motivation Why model relative attributes? For a large variety of attributes, the binary setting (an attribute is simply present or absent) is not only restrictive but also unnatural.
Outline Authors Paper Information Abstract Motivation Methods Experiment Conclusion
Methods—Formulation(1/3) Ranking functions: for each attribute a_m, learn a ranking function r_m(x_i) = w_m^T x_i that predicts the relative strength of the attribute in image x_i.
Methods—Formulation(2/3) Objective Function: Compared to SVM:
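For reference, a reconstruction of the objective behind this slide (the slide shows it as an image; the formulation below follows the Relative Attributes paper): for each attribute a_m, the weight vector w_m is learned from ordered pairs O_m (image i shows the attribute more strongly than image j) and similar pairs S_m:

\begin{aligned}
\min_{w_m,\;\xi,\;\gamma}\quad & \tfrac{1}{2}\,\|w_m\|_2^2 \;+\; C\Big(\textstyle\sum \xi_{ij}^2 + \sum \gamma_{ij}^2\Big) \\
\text{s.t.}\quad & w_m^{\top}(x_i - x_j) \;\ge\; 1 - \xi_{ij} \qquad \forall (i,j)\in O_m, \\
& \big|\,w_m^{\top}(x_i - x_j)\,\big| \;\le\; \gamma_{ij} \qquad \forall (i,j)\in S_m, \\
& \xi_{ij}\ge 0,\;\; \gamma_{ij}\ge 0.
\end{aligned}

Compared to a classification SVM, the large-margin constraints act on pairwise differences x_i - x_j rather than on individually labeled points, so the same machinery yields a ranking function r_m(x) = w_m^T x instead of a decision boundary.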
Methods—Formulation(3/3) Margin and support vectors: w_m^T x + b = 0. Geometric margin:
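A brief note on the analogy (the slide's own derivation appears as an image): in the classification SVM the geometric margin around the hyperplane w_m^T x + b = 0 is 1/\|w_m\|; here the ordering constraints play the same role, since for any ordered pair with zero slack

\frac{w_m^{\top}(x_i - x_j)}{\|w_m\|} \;\ge\; \frac{1}{\|w_m\|},

so minimizing \|w_m\| maximizes the separation, along the direction of w_m, between the closest pair of differently ranked training points; those closest pairs are the analogue of support vectors.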
Methods- Zero-Shot Learning From Relationships (1/3) Overview:
Methods- Zero-Shot Learning From Relationships (2/3) Image representation:
Methods- Zero-Shot Learning From Relationships (3/3) Generative model:
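A minimal Python sketch of this step, under simplifying assumptions (one Gaussian per category in the space of attribute rank scores; the unseen category's mean is placed midway between the two seen categories it is related to, which simplifies the paper's placement rules; all function names are illustrative):

import numpy as np
from scipy.stats import multivariate_normal

def fit_seen_categories(rank_scores, labels):
    """rank_scores: (n_images, n_attributes) array of r_m(x); labels: seen-category id per image."""
    cats = {}
    for c in np.unique(labels):
        X = rank_scores[labels == c]
        cats[c] = (X.mean(axis=0), np.cov(X, rowvar=False))   # per-category mean and covariance
    return cats

def add_unseen_category(cats, name, lower_cat, upper_cat):
    """Unseen category described as 'more than lower_cat, less than upper_cat' for every attribute."""
    mu = 0.5 * (cats[lower_cat][0] + cats[upper_cat][0])       # midway placement (an assumption)
    cov = np.mean([s for _, s in cats.values()], axis=0)       # borrow the average seen covariance
    cats[name] = (mu, cov)

def classify(x_scores, cats):
    """Assign an image's attribute rank scores to the maximum-likelihood category (seen or unseen)."""
    return max(cats, key=lambda c: multivariate_normal.logpdf(
        x_scores, mean=cats[c][0], cov=cats[c][1], allow_singular=True))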
Methods- Describing Images in Relative Terms (1/2) How to describe?
Methods- Describing Images in Relative Terms (2/2) E.g. Relative (ours): more natural than tallbuilding; less natural than forest; more open than tallbuilding; less open than coast; has more perspective than tallbuilding. Binary (existing): not natural; not open; has perspective.
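A small sketch of how such a description could be generated (an illustrative simplification: per-category mean attribute scores serve as the reference points, whereas the paper selects its reference images/categories in its own way):

def describe(x_scores, category_means, attribute_names):
    """Describe an image relative to reference categories, one or two statements per attribute.

    x_scores       : the image's predicted rank score for each attribute, length M
    category_means : dict mapping category name -> mean rank score per attribute
    attribute_names: e.g. ["natural", "open", "perspective"]
    """
    statements = []
    for m, name in enumerate(attribute_names):
        s = x_scores[m]
        below = [c for c, mu in category_means.items() if mu[m] < s]
        above = [c for c, mu in category_means.items() if mu[m] > s]
        if below:   # reference category with the closest lower attribute strength
            statements.append("more %s than %s" % (name, max(below, key=lambda c: category_means[c][m])))
        if above:   # reference category with the closest higher attribute strength
            statements.append("less %s than %s" % (name, min(above, key=lambda c: category_means[c][m])))
    return statements

On the scene example above this yields statements of the form "more open than tallbuilding" and "less open than coast".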
Outline Authors Paper Information Abstract Motivation Methods Experiment Conclusion
Experiment-Overview(1/2) OSR and PubFig:
Experiment-Overview(2/2) Baseline:
Experiment- Relative Zero-Shot Learning(1/4) How does performance vary with more unseen categories? (Figure: proposed relative attributes vs. binary attributes vs. classifier scores; with no unseen categories this reduces to the classical recognition problem; binary ≈ relative supervision.)
Experiment- Relative Zero-Shot Learning(2/4) (Figure: results when the supervision can give a unique ordering on all classes; annotated "<< baseline".)
Experiment- Relative Zero-Shot Learning(3/4)
Experiment- Relative Zero-Shot Learning(4/4) Relative attributes jointly carve out space for the unseen category.
Experiment-Human study(2/2) 18 subjects. Test cases: 10 OSR, 20 PubFig.
Outline Authors Paper Information Abstract Motivation Methods Experiment Conclusion
Conclusion We introduced relative attributes, which allow for a richer language of supervision and description than the commonly used categorical (binary) attributes. We presented two novel applications: zero-shot learning based on relationships and describing images relative to other images or categories. Through extensive experiments as well as a human subject study, we clearly demonstrated the advantages of our idea. Future work includes exploring more novel applications of relative attributes, such as guided search or interactive learning, and automatic discovery of relative attributes.
Thanks for listening Xin Liu