Recognition of Faces and Facial Attributes using Accumulative Local Sparse Representations
Domingo Mery, Department of Computer Science, Universidad Católica de Chile
Sandipan Banerjee, Department of Computer Science & Engineering, University of Notre Dame

Agenda: Motivation, Proposed Method, Experiments, Conclusions

LEARNING: images of class 1, class 2, class 3, ... → description → classifier design. TESTING: query image → description → classification → class.

Are all parts of the face important? This problem raises some interesting questions. Important for gender. Important for race. Important for expression.

Are all parts of the face important? This problem raises some interesting questions. Important for Mary.

Are all parts of the face important? This problem raises some interesting questions. Important for Mary and Miguel. Not important at all! Important for Miguel.

Agenda: Motivation, Proposed Method, Experiments, Conclusions

Our method is based on SRC: Sparse Representation Classification.

Gallery: Subject-1, Subject-2, Subject-3, Subject-4, ..., Subject-k.

Gallery = Dictionary: Subject-1, Subject-2, Subject-3, Subject-4, ..., Subject-k.

Gallery = Dictionary: Subject-1, ..., Subject-k. A query image is given, and its sparse representation over the dictionary is computed.

Gallery = Dictionary: Subject-1, ..., Subject-k. The sparse representation of the query assigns non-zero coefficients (e.g., 0.6, 0.3, 0.1) to only a few dictionary images.

Gallery: Subject-1, ..., Subject-k. Sparse representation: the query is approximated by a weighted combination of a few gallery images (coefficients 0.3, 0.6, 0.1).

Gallery: Subject-1, ..., Subject-k. The query image is represented as a linear combination of a few images of the gallery (here with weights 0.3, 0.6, 0.1).
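
In the usual SRC notation (a standard formulation; the symbols are not taken verbatim from the slides), the query vector y is coded over the dictionary D whose columns are the vectorized gallery images:

```latex
\hat{x} \;=\; \arg\min_{x}\; \lVert x \rVert_1
\quad \text{subject to} \quad \lVert y - D\,x \rVert_2 \le \varepsilon,
\qquad y \approx D\,\hat{x}.
```

Most entries of x̂ are zero, so the query is reconstructed from only a few gallery images, ideally those of the correct subject.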

Query ≈ 0.3·(gallery image) + 0.6·(gallery image) + 0.1·(gallery image).

If the query and its reconstruction are not similar, the reconstruction error is high.

If the query and its reconstruction are very similar, the reconstruction error is very low.

In SRC, the query is classified as the subject with the lowest reconstruction error.
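
The authors' implementation is in MATLAB (linked later in the deck); the sketch below is an illustrative Python version of the generic SRC decision rule, with scikit-learn's Lasso standing in for an L1 solver. The function name and interface are assumptions for this example.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, y, alpha=0.01):
    """Classify the query vector y by the minimal per-class reconstruction error."""
    labels = np.asarray(labels)
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    lasso.fit(D, y)                           # D: (d, n) gallery images as columns, y: (d,)
    x = lasso.coef_                           # sparse coefficients, one per gallery image
    residuals = {}
    for c in np.unique(labels):
        x_c = np.where(labels == c, x, 0.0)   # keep only the coefficients of class c
        residuals[c] = np.linalg.norm(y - D @ x_c)
    return min(residuals, key=residuals.get)  # subject with the lowest reconstruction error
```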

ALSR: Accumulative Local Sparse Representation [ PROPOSED METHOD ]

Our approach uses patches! Pipeline: query image → description → selection → classification → ID.

LEARNING: the images of each class (class 1, ..., class i, ..., class k) are described, and the classifier is designed. TESTING: query image → description → classification → class.

LEARNING: the description of each class yields a dictionary (dictionary 1, ..., dictionary i, ..., dictionary k). TESTING: query image → description → sparse representation → classification → class. Sparse representations have been widely used in many computer vision problems such as face recognition: we build a dictionary from the gallery images, reconstruct the query image using a sparse combination of the dictionary, and recognize the class by searching for the minimal reconstruction error.

LEARNING: the description of each class yields a dictionary (dictionary 1, ..., dictionary k). TESTING: query image → description → SRC → class.

LEARNING: the descriptions of all classes yield dictionaries (dictionary 1, ..., dictionary k) that together form a huge dictionary. TESTING: for each test patch, description → SRC; for all test patches, a majority vote gives the class. A sparse representation over such a huge dictionary could be very time consuming.

TESTING (refined): for each test patch, description → selection of the best dictionaries → SRC → score; for all test patches, a majority vote gives the class.

LEARNING: the description of each class, reduced with a visual vocabulary and a stop list, yields dictionary 1, ..., dictionary k. TESTING: for each test patch kept by the face mask, description → selection of the best dictionaries → SRC → score; for all test patches, a majority vote gives the class.
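
The visual vocabulary and stop list follow the bag-of-words idea from image retrieval; the sketch below is one possible reading (the cluster count and stop-list fraction are assumptions, not the paper's settings): patch descriptors are clustered into visual words, and patches whose words are too common or too rare are dropped before building the class dictionaries.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary_with_stoplist(descriptors, n_words=500, stop_frac=0.05):
    """Cluster patch descriptors into visual words and drop the most and
    least frequent words (the stop list)."""
    km = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(descriptors)
    words = km.predict(descriptors)                    # visual word of each patch
    counts = np.bincount(words, minlength=n_words)
    order = np.argsort(counts)
    n_stop = int(stop_frac * n_words)
    stop = set(order[:n_stop]) | set(order[-n_stop:])  # rarest and most common words
    keep = ~np.isin(words, list(stop))                 # patches whose word survives
    return km, keep
```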

Gallery: Subject-1, Subject-2, ..., Subject-k. For each image of the gallery, a dictionary is built from its patches (patches of Subject-1, patches of Subject-2, ..., patches of Subject-k), together with the position (x, y) of each patch.

Gallery: Class-1, Class-2, ..., Class-k. The patches of Class-1, Class-2, ..., Class-k are stacked into a general dictionary D.
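
A minimal sketch of how such a patch dictionary with position information could be assembled (the patch size, stride, and unit-norm intensity descriptor are illustrative choices, not the paper's exact settings):

```python
import numpy as np

def build_patch_dictionary(gallery, patch=16, stride=8):
    """gallery: list of (image, class_label) pairs with 2-D grayscale images.
    Returns patch descriptors, their (x, y) centers, and their class labels."""
    descriptors, positions, labels = [], [], []
    for img, label in gallery:
        h, w = img.shape
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                p = img[y:y + patch, x:x + patch].astype(float).ravel()
                p /= (np.linalg.norm(p) + 1e-8)          # unit-norm intensity descriptor
                descriptors.append(p)
                positions.append((x + patch / 2, y + patch / 2))
                labels.append(label)
    return np.array(descriptors), np.array(positions), np.array(labels)
```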

Dictionary Query

For each query patch yi (with its position):
0. Start from the original dictionary D.
1. Select the nearest patches using the (x, y) position information → Dn.
2. Among them, select the most similar patches using the intensity information → Ds.
3. Compute the sparse representation xi of yi using Ds (e.g., coefficients 0.5, 0.3, 0.2).
4. Compute the contribution si of each class from xi (e.g., 0.8, ..., 0.2).
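
A sketch of this per-patch step, reusing the dictionary arrays from the previous sketch (the neighborhood radius, the number of similar patches, the Lasso stand-in for the L1 solver, and the use of coefficient mass as the class contribution are assumptions, not the paper's exact choices):

```python
import numpy as np
from sklearn.linear_model import Lasso

def patch_contribution(y_i, pos_i, D, positions, labels, n_classes,
                       radius=10.0, n_similar=200, alpha=0.01):
    """Return the per-class contribution vector s_i for one query patch y_i."""
    # 1. Keep dictionary patches whose (x, y) position is near the query patch -> Dn.
    near = np.linalg.norm(positions - pos_i, axis=1) <= radius
    Dn, Dn_labels = D[near], labels[near]
    # 2. Among those, keep the patches most similar in intensity -> Ds.
    #    (Assumes the neighborhood is non-empty.)
    sims = Dn @ y_i
    top = np.argsort(-sims)[:n_similar]
    Ds, Ds_labels = Dn[top], Dn_labels[top]
    # 3. Sparse representation x_i of y_i using Ds.
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    lasso.fit(Ds.T, y_i)
    x_i = lasso.coef_
    # 4. Contribution of each class, here taken as its absolute coefficient mass.
    s_i = np.array([np.abs(x_i[Ds_labels == c]).sum() for c in range(n_classes)])
    return s_i, x_i
```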

Contribution of each patch to each subject (1, 2, 3, ..., N): the contributions are accumulated over all patches into a TOTAL per subject (e.g., 1.1, 14.1, 0.7, ..., 1.5), and the query is classified as the subject with the largest total (here, subject #2).

The query is divided into patches p = 1, ..., 12; each patch has a contribution for each class q = 1, ..., 4 (values such as 0.1, 0.2, ..., 0.7).

Patches that are not discriminative can be removed: a face mask (the masks used in our experiments) marks which patch positions are kept.

After applying the mask, the contributions of the removed patches are discarded (marked '-' in the table).

SCI: Sparsity Concentration Index (score). Each remaining patch receives an SCI score (e.g., 0.2, 0.3, 0.4, 0.8, 0.5).
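
In the SRC literature, the SCI of a sparse coefficient vector x over k classes is commonly defined as

```latex
\mathrm{SCI}(x) \;=\; \frac{k \cdot \max_{i}\, \lVert \delta_i(x) \rVert_1 / \lVert x \rVert_1 \;-\; 1}{k-1} \;\in\; [0, 1],
```

where δ_i(x) keeps only the coefficients associated with class i. SCI equals 1 when all coefficients belong to a single class and 0 when they are spread evenly over the classes; low-SCI patches are discarded in the next step.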

Only patches with SCI greater than 0.25 are kept; the contributions of patches with lower SCI are discarded.

The maximal value of each patch's contributions is computed (e.g., 0.4, 0.6, 0.3, 0.5).

Normalization: each contribution is divided by the maximum of its patch (yielding values such as 0.25, 1.0, 0.17, 0.33, 0.2, 0.4).

Only normalized contributions greater than 0.2 are kept.

The normalized contributions are accumulated per class (e.g., 0.25, 3.33, 1.4, 0.33), and the query is classified according to the maximal accumulated normalized contribution (here, 3.33).
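
Putting the testing-stage rules from the last slides together, a sketch of the final decision (the thresholds 0.25 and 0.2 come from the slides; the interface and array shapes are assumptions):

```python
import numpy as np

def classify_query(contributions, sci, mask, sci_min=0.25, rel_min=0.2):
    """contributions: (n_patches, n_classes) per-patch class contributions;
    sci: (n_patches,) SCI score per patch; mask: (n_patches,) boolean face mask.
    Returns the predicted class index."""
    totals = np.zeros(contributions.shape[1])
    for keep, s, c in zip(mask, sci, contributions):
        if not keep or s <= sci_min:       # skip masked-out or low-SCI patches
            continue
        m = c.max()
        if m <= 0:
            continue
        c_norm = c / m                     # divide each contribution by the patch maximum
        c_norm[c_norm <= rel_min] = 0.0    # keep only normalized contributions > 0.2
        totals += c_norm                   # accumulate over the surviving patches
    return int(np.argmax(totals))          # class with the maximal accumulated value
```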

The code of our MATLAB implementation is available on our webpage: http://dmery.ing.puc.cl > Material > ALSR

Agenda: Motivation, Proposed Method, Experiments, Conclusions

Experiments: Face Recognition in LFW, Gender Recognition in AR, Expression Recognition in Oulu-CASIA.

Face Recognition in LFW [ PROTOCOL ] The gallery has 143 subjects with at least 11 images per subject (10 for training, the rest for testing). There are 1430 images for training and 2744 for testing.

Example in LFW: for a query image of subject #117, the contributions per class (1, ..., 143) reach their maximum at #117; images of the same subject in the gallery are shown.

Example in LFW: images of the same subject in the gallery (subject #98), with the contributions per class (1, ..., 143).

Results table (highlighted accuracy: 80.8%). In this table, we do not report deep learning methods that require millions of training images (to be fair, VGG-Face achieves 97.7% in this experiment).

Gender Recognition in AR [ PROTOCOL ] 100 subjects (50 women and 50 men). For gender recognition, 14 non-occluded images per subject. In this experiment, the first 25 males and 25 females were used for training and the last 25 males and 25 females were used for testing.

Expression Recognition in Oulu-CASIA [ PROTOCOL ] Six different facial expressions (surprise, happiness, sadness, anger, fear and disgust) under normal illumination from 80 subjects (59 males and 21 females) ranging from 23 to 58 years in age. The dataset contains 480 sequences: the first 9 images of each sequence are not considered, the first 40 individuals are taken as the training subset and the rest as the testing subset.

Agenda: Motivation, Proposed Method, Experiments, Conclusions

We presented a new algorithm that is able to recognize faces and facial attributes automatically from face images captured under less constrained conditions, including some variability in ambient lighting, pose, expression, size of the face, and distance from the camera. The robustness of our algorithm is due to two reasons: (1) the dictionary used in the recognition corresponds to a rich collection of representations of relevant parts, selected using closeness and similarity criteria; (2) the testing stage is based on accumulative sparse contributions according to location and relevance criteria. We believe that this new approach can be used to solve other kinds of computer vision problems with similar unconstrained conditions in which a huge number of training images is not available. In the future, we will train our own deep learning network to obtain a better description of the patches, and we will learn the face image masks from training data instead of selecting them manually.
