Face Detection & Recognition

Face Detection & Recognition Dr. Gökhan Şengül

Topics
Why face recognition?
What is difficult about real-time face recognition?
In general how is face recognition done?
Face detection
Eigenfaces
Other face recognition algorithms
Future of face recognition

Application areas
Security: fight terrorism, find fugitives
Personal information access: ATM, sporting events, home access (no keys or passwords)
Any other application that would want personal identification
Improved human-machine interaction
Personalized advertising

System Requirements
Want the system to be inexpensive enough to use at many locations
Match within seconds: before the person walks away from the advertisement, before the fugitive has a chance to run away
Ability to handle a large database
Ability to do recognition in varying environments

Advantages of face recognition
Face images can be acquired from a long range
Many databases are available
Acceptable to people: people are willing to share their face images in the public domain, e.g. Facebook

What Is Difficult About Face Recognition?
Lighting variation
Orientation variation (face angle)
Size variation
Large database
Processor intensive
Time requirements

Intra-class variations

FERET Database
Contains images of 1,196 individuals, with up to 5 different images captured for each individual
Often used to test face recognition algorithms
Information on obtaining the database can be found here: http://www.itl.nist.gov/iad/humanid/feret/

Facial Features: Level 1 details
Consist of gross facial characteristics that are easily observable, e.g., the general geometry of the face and global skin color
Can be used to discriminate elongated faces, faces exhibiting male and female characteristics, and faces from different races

Facial Features: Level 2 details
Consist of localized face information such as the structure of the face components (e.g., eyes), the relationship between facial components, and the precise shape of the face
Essential for accurate face recognition
Require higher-resolution face images (30 to 75 pixels of inter-pupillary distance, IPD)

Facial Features: Level 3 details
Consist of unstructured, micro-level features on the face: scars, freckles, skin distortion, and moles
Can be used for the discrimination of identical twins

Facial Features

General Face Recognition Steps
Face Detection
Face Normalization
Face Identification

General Face Recognition Process

Face detection and recognition “Sally”

Applications of Face Recognition
Digital photography
Surveillance
Album organization

Sensors
Used for image capture: standard off-the-shelf PC cameras and webcams.
Requirements:
* Sufficient processor speed (main factor)
* Adequate video card
* 320 x 240 resolution
* 3-5 frames per second (more frames per second and higher resolution lead to better performance)
One of the cheapest technologies, starting at $50.

Sensors

Sensors Face images captured in the visible and near-infrared spectra at different wavelengths.

Sensors

What is Face Detection?
Given an image, determine whether it contains any human faces and, if so, where they are.

What is Face Detection?

Importance of Face Detection
The first step for any automatic face recognition system
First step in many Human-Computer Interaction systems: expression recognition, cognitive state/emotional state recognition
First step in many surveillance systems
Tracking: the face is a highly non-rigid object
A step towards Automatic Target Recognition (ATR) or generic object detection/recognition
Video coding, …

Face Detection: Current State
State-of-the-art: front-view face detection can be done at >15 frames per second on 320x240 black-and-white images on a 700 MHz PC with ~95% accuracy.
Detection of faces is faster than detection of edges!
Side-view face detection remains difficult.

Face Detection: Challenges
Out-of-plane rotation: frontal, 45 degrees, profile, upside down
Presence of beard, mustache, glasses, etc.
Facial expressions
Occlusions by long hair, hands
In-plane rotation
Image conditions: size, lighting condition, distortion, noise, compression

Viola-Jones Face Detector
Scans through the input image with detection windows of different sizes
Decides whether each window contains a face or not
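Hedged usage sketch: OpenCV ships a pretrained Viola-Jones-style cascade, and its detector performs essentially this multi-scale window scan internally. The image path and detection parameters below are illustrative placeholders, not values from the slides.

```python
import cv2

img = cv2.imread("group_photo.jpg")               # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Pretrained frontal-face Haar cascade bundled with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# detectMultiScale slides detection windows of increasing size over the image
# and returns those windows the cascade classifies as faces.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(30, 30))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", img)
```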

Viola-Jones Face Detector
The existence of a face candidate is decided by applying a classifier to simple local features derived using rectangular filters.
The feature values are obtained by computing the difference between the sums of the pixel intensities in the light and dark rectangular regions.
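To illustrate how such a feature can be evaluated with only a few array lookups, here is a small NumPy sketch of a two-rectangle (light-minus-dark) feature computed from an integral image; the window size and coordinates are illustrative assumptions.

```python
import numpy as np

def integral_image(img):
    # Cumulative sums over rows and columns, padded with a leading zero row and
    # column so that any rectangle sum can be read off with four lookups.
    ii = np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, top, left, height, width):
    # Sum of pixel intensities inside the rectangle via four corner lookups.
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

def two_rect_feature(ii, top, left, height, width):
    # Difference between the sums of the light (left) and dark (right) halves.
    half = width // 2
    light = rect_sum(ii, top, left, height, half)
    dark = rect_sum(ii, top, left + half, height, half)
    return light - dark

window = np.random.randint(0, 256, (24, 24))   # stand-in for a 24x24 detection window
ii = integral_image(window)
value = two_rect_feature(ii, top=4, left=4, height=8, width=12)
```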

Viola-Jones Face Detector

Feature Extraction and Matching
Knowledge-based methods: encode what constitutes a typical face, e.g., the relationship between facial features.
Texture-based methods
The models are learned from a set of training images that capture the representative variability of faces.
Model-based methods attempt to build 2D or 3D face models that facilitate matching of face images in the presence of pose variations.

Feature Extraction and Matching
Appearance-based methods
These generate a compact representation of the entire face region in the acquired image by mapping the high-dimensional face image into a lower-dimensional subspace. This subspace is defined by a set of representative basis vectors, which are learned using a training set of images.
PCA (Principal Component Analysis)
LDA (Linear Discriminant Analysis)
ICA (Independent Component Analysis)

Feature Extraction and Matching
Model-based methods attempt to build 2D or 3D face models that facilitate matching of face images in the presence of pose variations.
2D face models: Face Bunch Graphs (FBG), Active Appearance Models (AAM)
3D face models

Feature Extraction and Matching
Texture-based methods: try to find robust local features that are invariant to pose or lighting variations. Examples of such features include gradient orientations and Local Binary Patterns (LBP).

Feature Extraction and Matching

Appearance-based face recognition
Based on the idea of representing the given face image as a function of the different face images available in the training set, or as a function of a few basis faces.
The pixel value at location (x, y) in a face image can be expressed as a weighted sum of the pixel values at (x, y) in all the training images.
The goal in linear subspace analysis is to find a small set of the most representative basis faces. Any new face image can be represented as a weighted sum of the basis faces, and two face images can be matched by directly comparing their vectors of weights.
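A minimal eigenfaces-style sketch of this idea in NumPy; the array shapes, the number of basis faces, and the random stand-in data are illustrative assumptions, not values from the slides.

```python
import numpy as np

def fit_eigenfaces(train, n_basis=20):
    # train: (n_images, n_pixels) array of vectorized, aligned face images.
    mean_face = train.mean(axis=0)
    centered = train - mean_face
    # Rows of vt are orthonormal "basis faces" ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:n_basis]

def project(face, mean_face, basis):
    # Weight vector: how strongly each basis face is present in this image.
    return basis @ (face - mean_face)

def match(face_a, face_b, mean_face, basis):
    # Two faces are compared directly through their weight vectors.
    wa = project(face_a, mean_face, basis)
    wb = project(face_b, mean_face, basis)
    return np.linalg.norm(wa - wb)

rng = np.random.default_rng(0)
train = rng.random((100, 64 * 64))        # stand-in for 100 vectorized 64x64 faces
mean_face, basis = fit_eigenfaces(train)
distance = match(train[0], train[1], mean_face, basis)
```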

Principal Component Analysis (PCA)

Linear Discriminant Analysis (LDA)
Explicitly uses the class labels of the training data and conducts subspace analysis with the objective of minimizing intra-class variations and maximizing inter-class variations.
Can generally be expected to provide more accurate face recognition when sufficient face image samples for each user are available during training.
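As a hedged sketch, the same idea with scikit-learn's LinearDiscriminantAnalysis; the random stand-in data and the nearest-neighbour matching rule are illustrative choices, and in practice the inputs would be (often PCA-reduced) face feature vectors with identity labels.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Stand-in data: 40 feature vectors (5 images each for 8 identities).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 50))
y_train = np.repeat(np.arange(8), 5)

# LDA projects the data so that intra-class scatter shrinks and inter-class
# scatter grows; it yields at most n_classes - 1 components.
lda = LinearDiscriminantAnalysis(n_components=7)
X_proj = lda.fit_transform(X_train, y_train)

def identify(probe_vector):
    # Nearest-neighbour matching in the discriminant subspace.
    p = lda.transform(probe_vector[None, :])
    distances = np.linalg.norm(X_proj - p, axis=1)
    return y_train[np.argmin(distances)]

predicted_identity = identify(X_train[3])
```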

Linear Discriminant Analysis (LDA)

Feature Extraction and Matching

Model-based face recognition
Tries to derive a pose-independent representation of the face that can enable matching of face images across different poses.
These schemes typically require the detection of several fiducial or landmark points on the face (e.g., the corners of the eyes, the tip of the nose, the corners of the mouth, homogeneous regions of the face, and the chin).

Elastic Bunch Graph Matching
Represents a face as a labeled image graph with each node being a fiducial or landmark point on the face.
Each node of the graph is labeled with a set of Gabor coefficients (also called a jet) that characterizes the local texture information around the landmark point, while the edge connecting any two nodes of the graph is labeled based on the distance between the corresponding fiducial points.
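As a rough illustration of what a jet is, the sketch below samples the responses of a small Gabor filter bank at a single landmark point using OpenCV; the filter parameters, image, and landmark location are assumptions, not the settings used in the EBGM literature.

```python
import cv2
import numpy as np

# Stand-in for a grayscale, roughly aligned face image and one landmark point.
gray = np.random.randint(0, 256, (128, 128), dtype=np.uint8).astype(np.float32)
x, y = 90, 60                                   # e.g. the outer corner of an eye

jet = []
for wavelength in (4, 8, 16):                   # a few spatial frequencies
    for theta in np.linspace(0, np.pi, 4, endpoint=False):   # a few orientations
        # Arguments: ksize, sigma, theta, lambda (wavelength), gamma, psi.
        kernel = cv2.getGaborKernel((31, 31), wavelength, theta, wavelength, 0.5, 0)
        response = cv2.filter2D(gray, -1, kernel)
        jet.append(abs(response[y, x]))         # magnitude of the local response
jet = np.array(jet)                             # this vector labels one graph node
```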

Elastic Bunch Graph Matching
First stage: the designer has to manually mark the desired fiducial points and define the geometric structure of the image graph for one (or a few) initial image(s). The image graphs for the remaining images in the training set can then be obtained semi-automatically.

Elastic Bunch Graph Matching
Second stage: an FBG is obtained from the individual image graphs by combining a representative set of individual graphs in a stack-like structure. A set of jets corresponding to the same fiducial point is called a bunch. For example, an eye bunch may include jets from open, closed, male and female eyes, etc. An edge between two nodes of the FBG is labeled based on the average distance between the corresponding nodes in the training set.

Elastic Bunch Graph Matching
Given an FBG, the fiducial points for a new face image are found by maximizing the similarity between a graph fitted to the given image and the FBG of identical pose. This process is known as Elastic Bunch Graph Matching (EBGM) and consists of the following three steps:

Elastic Bunch Graph Matching

Feature Extraction and Matching

Texture-Based Face Recognition
Based on the analysis of local textures
Two different approaches:
SIFT (Scale-Invariant Feature Transform)
LBP (Local Binary Pattern)

Scale-Invariant Feature Transform
Computation of SIFT features consists of two stages: (a) key point extraction, and (b) descriptor calculation in a local neighborhood at each key point.
The descriptor is usually a histogram of gradient orientations within a local neighborhood.
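A brief sketch of extracting and matching SIFT features with OpenCV (SIFT_create is available in recent opencv-python builds); the image paths and the ratio-test threshold are placeholders rather than values from the slides.

```python
import cv2

img1 = cv2.imread("face_a.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("face_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # key points + orientation-histogram descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep those passing Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

similarity_score = len(good)   # a simple similarity measure between the two face images
```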

Scale Invariant Feature Transform

Local Binary Pattern (LBP)
LBP features are usually obtained from the image pixels of a 3×3 neighborhood region.
MLBP: Multiscale LBP

Local Binary Pattern (LBP)

Local Binary Pattern (LBP)
After LBP encoding of each pixel, the face image is divided into several smaller windows and the histogram of local binary patterns in each window is computed. The number of bins in the histogram is 2^8 and 2^P for the basic LBP and MLBP (with P sampling points), respectively.
A global feature vector is then generated by concatenating the histograms of all the individual windows and normalizing the final vector. Finally, two face images can be matched by computing the similarity (or distance) between their feature vectors.
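A compact NumPy sketch of this pipeline for the basic 3x3 LBP; the 8x8 window grid and the Euclidean comparison are illustrative choices, not values fixed by the slides.

```python
import numpy as np

def lbp_codes(gray):
    # Compare each pixel with its 8 neighbours and pack the results into a byte.
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    neighbours = [g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:], g[1:-1, 2:],
                  g[2:, 2:], g[2:, 1:-1], g[2:, :-2], g[1:-1, :-2]]
    codes = np.zeros_like(c)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.int32) << bit
    return codes

def lbp_feature(gray, grid=(8, 8), bins=256):
    # Per-window histograms of LBP codes, concatenated and normalized.
    codes = lbp_codes(gray)
    h, w = codes.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            window = codes[i * h // grid[0]:(i + 1) * h // grid[0],
                           j * w // grid[1]:(j + 1) * w // grid[1]]
            hists.append(np.bincount(window.ravel(), minlength=bins))
    feat = np.concatenate(hists).astype(np.float64)
    return feat / (np.linalg.norm(feat) + 1e-12)

# Two face images can then be matched via the distance between their vectors.
img_a = np.random.randint(0, 256, (130, 130), dtype=np.uint8)   # stand-in faces
img_b = np.random.randint(0, 256, (130, 130), dtype=np.uint8)
score = np.linalg.norm(lbp_feature(img_a) - lbp_feature(img_b))
```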

Face Databases